id: string (10–10 characters)
title: string (7–231 characters)
abstract: string (3–2.43k characters)
authors: string (5–21.5k characters)
published_date: string (20–20 characters)
link: string (33–34 characters)
markdown: string (133–1.92M characters)
2305.04039
Refining the Responses of LLMs by Themselves
In this paper, we propose a simple yet efficient approach based on prompt engineering that leverages the large language model itself to optimize its answers without relying on auxiliary models. We introduce an iterative self-evaluating optimization mechanism, with the potential for improved output quality as iterations progress, removing the need for manual intervention. The experiment's findings indicate that utilizing our response refinement framework on the GPT-3.5 model yields results that are on par with, or even surpass, those generated by the cutting-edge GPT-4 model. Detailed implementation strategies and illustrative examples are provided to demonstrate the superiority of our proposed solution.
Tianqiang Yan, Tiansheng Xu
2023-05-06T13:03:45Z
http://arxiv.org/abs/2305.04039v1
# Refining the Responses of LLMs by Themselves ###### Abstract In the past few years, Large Language Models (LLMs) have generated unprecedented enthusiasm, with models like GPT providing human-like responses for a wide range of inquiries in nearly all domains. However, these models often fail to deliver satisfactory answers aligned with users' specific needs on their first attempt, necessitating multiple iterations and additional user input to refine the responses. This can lead to an unnecessary investment of time and effort from users. In this paper, we propose a simple yet efficient approach based on prompt engineering that leverages the large language model itself to optimize its answers without relying on auxiliary models. We introduce an iterative self-evaluating optimization mechanism, with the potential for improved output quality as iterations progress, removing the need for manual intervention. The experiment's findings indicate that utilizing our response refinement framework on the GPT-3.5 model yields results that are on par with, or even surpass, those generated by the cutting-edge GPT-4 model. Detailed implementation strategies and illustrative examples are provided to demonstrate the superiority of our proposed solution. ## 1 Introduction ### Revisiting Large Language Models Large Language Models (LLMs) have become a significant development in deep learning techniques, which allows them to understand and generate natural language using vast amounts of textual data. LLMs have shown exceptional potential and flexibility in various Natural Language Processing (NLP) and Natural Language Generation (NLG) tasks, such as text summary, machine translation, sentiment analysis, content creation, and conversational AI. These models are trained using a self-supervised learning approach where they learn from unlabeled data by predicting the following word or token in sequential data. This process enables LLMs to decipher the syntax, semantics, and general knowledge of human language while also retaining significant amounts of factual information retrieved from the training dataset. The emergence and evolution of LLMs were due to the advancements in transformer models, a type of neural network that utilizes attention mechanisms to encode and decode sequential data. Vaswani et al. first proposed transformer models [1], and since then, many variations, including BERT [2], GPT-3 [3], XLNet [4], and XLM-RoBERTa [5], have been developed, demonstrating unrivaled performance in several NLP benchmarks and tasks, further highlighting the potency and versatility of LLMs. OpenAI's ChatGPT stands as one of the most renowned LLMs in the field, of which the latest version is based on GPT-4 [6]. GPT-4 has demonstrated performance that rivals or exceeds human experts in numerous interdisciplinary tasks, while offering support for multimodal data and expanding the range of applications for large language models to a previously unattained scale. At present, the multimodal features of GPT-4's ChatGPT remain inaccessible to most users. Nonetheless, in regular conversations, the latest version of ChatGPT has exhibited considerable enhancements in understanding and response capabilities compared to its earlier iterations. 
In addition, the well-developed commercialization of ChatGPT, exemplified by Microsoft's GPT-4-based Chat with Bing and Office Copilot, as well as a multitude of third-party applications using GPT APIs, has facilitated the gradual permeation of LLM concepts and applications across diverse fields and demographics. This has established a significant milestone in the realm of computer science. The superiority of LLMs represents a pivotal advancement in NLP research, offering new prospects for language generation, dialogue systems, and creative writing. As LLMs continue to evolve, they are expected to play an increasingly critical role in dictating the direction of natural language processing and machine learning research. ### Studies on refining the responses of LLMs Despite their impressive capabilities, these models are not without limitations: obtaining a user's desired answer in a single attempt remains a challenging endeavor. Various factors contribute to this issue, such as biases inherent in training data and model architectures, which can result in incorrect or contextually inappropriate responses [7]. Moreover, the lack of explainability and transparency in the decision-making process of these black-box models further exacerbates the difficulty in optimizing model outputs for user needs [8]. This phenomenon is primarily attributed to the challenges faced by these models in comprehending nuanced and highly specialized contexts or adhering to specific writing styles and formats while generating responses, which often leads to inconsistencies and deviations from the desired output [9, 10]. The Reinforcement Learning with Human Feedback (RLHF) mechanism is a recent advancement in the field of large language models (LLMs) that aims to optimize their interactive responses to human users. This innovative approach is designed to incorporate human feedback in training LLMs to generate more effective, accurate, and contextually relevant responses, while mitigating potential pitfalls associated with traditional reinforcement-learning-based methods [3]. One significant advantage of the RLHF mechanism lies in its ability to leverage both expert demonstrations and preference comparisons to build a reward model for the LLM, which enables the model to adapt and improve its response generation based on human-provided feedback [11]. In order to train the RLHF mechanism to optimize the LLM's responses to human users, an initial set of demonstrations is provided by human experts who interact with the LLM, generating high-quality responses. These demonstrations are then utilized to perform supervised fine-tuning [12]. Building upon this foundation, the mechanism further incorporates user preference comparisons through a process wherein the LLM generates multiple candidate responses, and users are asked to rank or rate these responses according to their relevance, usefulness, and quality. To adjust and update the reward model accordingly, the RLHF mechanism employs an algorithm that computes the gradients of the reward model based on the aggregated user feedback [11]. Despite RLHF's numerous advantages in training LLMs, one major concern is the possibility of negative side effects from over-optimizing to human feedback, potentially leading the LLM to generate uninformative or excessively verbose responses in order to maximize its perceived reward [12]. 
Additionally, the reliance on human-generated demonstrations and feedback inherently introduces the potential for bias or inconsistency into the training process, which may influence the performance and behavior of the resulting LLM. In the meantime, the integration of RLHF in LLMs demands high-quality volunteer or user responses, leading to increased time and monetary costs. Another extensively examined research study elucidated that LLMs exhibit a chain-of-thought cognition, and concurrently emphasized that deconstructing the inferential procedure of a problem via chain-of-thought prompting may serve to augment the model's proficiency in addressing intricate challenges [13]. Fundamentally, this concept requires users to meticulously dissect their queries prior to inputting them into comprehensive language models, or to encourage the model to deliver responses accompanied by an elaborate reasoning process. If the RLHF optimization procedure is encapsulated into four stages: "User formulates a query, model produces a resolution, user evaluates the quality of the solution, model fine-tunes based on assessment," then chain-of-thought prompting embodies a more efficacious application-tier feedback optimization technique. This is because it works regardless of whether the model itself has undergone optimization, and because of its potential capacity to facilitate the model in producing dependable replies in a single attempt. Nonetheless, the challenge with the chain-of-thought theory is that if a user's query is manually dissected, the model's ability to generate a reliable response hinges on whether the user has accurately provided the decomposed version of the corresponding question. On the other hand, if the user requests the model to deliver step-by-step answers, but there is a discrepancy between the model's interpretation of the problem and the user's original purpose, the ultimate solution can also be entirely off-course. In summary, both RLHF and chain-of-thought prompting offer valuable advantages. As such, our aim is to integrate the strengths of these two methods and develop a fast-to-deploy, fully automated strategy to improve the performance of LLMs. In this study, we concentrate on the exploration of application mechanisms employed by prevalent LLMs. Our approach is predicated exclusively upon user inquiries, LLM responses, and judicious supplementary prompts, aiming to enhance the quality of LLM feedback. It's important to note that the term "quality" used here (including the same expression that appears later) encompasses multiple metrics, including but not limited to the accuracy, comprehensiveness, and conciseness of the answer. Explicitly, we propose a feasible methodology that obviates the necessity for ancillary models or supplementary manual interventions, enabling an LLM to autonomously refine its response to a query through a prompt-driven, adaptively iterative self-assessment and optimization process. With this paradigm as our goal, we design a viable, general-purpose optimization mechanism that is inspired by the ideas of conversational reinforcement learning and chain-of-thought. Our contributions can be summarized as follows: 
* We provide a novel paradigm to refine LLMs' responses in an independent, application-level, and fully automatic way. 
* Our paradigm, along with the implementation on its basis, allows instant deployment with any available LLM APIs, while requiring nearly zero development knowledge. 
* The implementation is examined with possible daily inquiries, and the joint optimization scheme achieves overall the best outcome. The following content of this report is arranged as follows: In Section II, the proposed optimization scheme is introduced, and the three derived solutions are described. In Section III, the detailed settings and the results of our testing are demonstrated. Eventually, we conclude our study in Section IV. ## 2 The adaptive optimization paradigm In this paper, we design a highly-efficient, fully-automatic interaction mechanism grounded in the application layer of large language models (LLMs). Our approach enables adaptive optimization of AI-generated responses without necessitating human involvement, while simultaneously eschewing the need for fine-tuning the underlying language model or introducing supplementary models. Additionally, the framework's exclusive focus on the application layer makes it remarkably convenient for users to integrate it with LLM APIs. In this section, we will provide a thorough explanation of the optimization process. ### The overall framework The proposed optimization process can be outlined in the following steps: Figure 1: This is an illustration of using our iterative optimization framework to enhance the answers provided by LLM in response to user queries. The diagram depicts one iteration of the optimization process, involving three agents: the user (depicted as an avatar), a remote LLM server (represented by a robot), and a terminal (symbolized by a computer). The automation loop is enclosed within a dashed box. 1. User step: The user inputs a query into the terminal and sends it via an API to the remote LLM server, with a designated maximum number of optimization iterations. 2. Automatic step 1: The remote LLM server initially responds to the query by producing a model-generated answer and returning it to the terminal. 3. Automatic step 2: The terminal integrates the user query and the model's previous response to form a prompt, instructing the LLM model to analyze its initial answer, identify any limitations, and provide feedback accordingly. 4. Automatic step 3: The terminal generates an optimization prompt, combining the LLM's response, the user's query, and the deficiency analysis, and sends it to the remote server for improvement. The LLM model infers an updated answer and sends it back to the terminal. 5. Automatic step 4: The terminal receives the optimized response and generates a prompt that combines the optimized answer with the previous response and the user's query, asking the model to determine if the optimized answer is an improvement. The comparison result is then returned to the terminal. 6. Automatic step 5: If the comparison result shows that the optimized answer is better, the terminal utilizes a greedy strategy to repeat the automatic optimization process (from step 3, automatic step 2) until the maximum iterations have been reached. If the result is not improved, the terminal ends the optimization process and returns the previous response. Figure 1 offers an intuitive portrayal of our response refinement strategy, exemplified through a simulated optimization process. The optimization process as a whole is mainly automated through the terminal interactions with the model API, with the exception of the user setting an upper limit for the number of optimization loops. 
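To make the automated loop above concrete, the following minimal Python sketch mirrors the five automatic steps. It is an illustration only: `llm(prompt)` is a placeholder for a single stateless call to a remote chat-completion API, the prompt wording is simplified rather than the exact templates used in our implementation (those are listed in Table 1), and the voting step is reduced to a binary choice.

```python
# Minimal sketch of the adaptive iterative refinement loop (illustrative only).
# `llm(prompt)` stands in for one stateless call to a remote LLM API.

def llm(prompt: str) -> str:
    raise NotImplementedError("wire this to the LLM API of your choice")

def refine(query: str, max_iters: int = 3) -> str:
    answer = llm(query)  # Automatic step 1: initial response from the remote model
    for _ in range(max_iters):
        # Automatic step 2: ask the model to analyze the defects of its own answer
        defects = llm(
            f"Question: {query}\nAnswer: {answer}\n"
            "List the defects of this answer in one sentence."
        )
        # Automatic step 3: defect-guided optimization
        candidate = llm(
            f"Question: {query}\nAnswer: {answer}\nDefects: {defects}\n"
            "Provide a refined answer that fixes these defects. "
            "Reply with the answer only."
        )
        # Automatic step 4: vote between the previous and the refined answer
        vote = llm(
            f"Question: {query}\nAnswer 1: {answer}\nAnswer 2: {candidate}\n"
            "Reply with only '1' or '2', whichever answer is better."
        )
        # Automatic step 5: greedy acceptance, or early self-termination
        if vote.strip().startswith("2"):
            answer = candidate
        else:
            break
    return answer
```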
In practical applications, the specific optimization process is hidden from the user, and they will receive a refined response directly after inputting their question. It should be noted that the interaction logic chain throughout the entire enhancement process is first-order, meaning that the scheme does not require the LLM to remember the entire previous optimization process. This is done to prevent large token costs and premature depletion of tokens. Furthermore, the optimization process we have designed can ensure the reliability of this first-order optimization mode. The specific reasons for this will be explained in Section 2.2. ### Refining the responses of LLMs by themselves We have proposed an iterative optimization paradigm that integrates ideas from self-supervised reinforcement learning and chain of thought. Upon revisiting the optimization strategy of RLHF, it is apparent that the two key steps in our refinement mechanism - namely, feedback defect analysis and defect-guided optimization logic - are fundamentally akin to those of RLHF. However, there are notable distinctions between the two. Whereas RLHF is geared towards LLMs still in their development and debugging phase with a focus on "human feedback," our optimization approach leverages self-evaluation and self-optimization (SESO) through conversational self-interaction processes that predominantly rely on prompt engineering. Prompt engineering represents a milestone in the field of natural language processing that has emerged concomitantly with the rise of large language models. Enabled by the powerful contextual comprehension and reasoning capabilities inherent in contemporary LLMs, prompt engineering allows for the specification of human-readable prompts tailored to task objectives, thus empowering the model to deliver desired outputs. To facilitate an examination of potential deficiencies in an LLM's response to a given query, all that is required is the transmission of a pre-designed prompt to a remote server which is capable of conveying such a request and providing the desired feedback. To construct a proficient prompt, three critical constituents must be integrated: the initial inquiry posed by the user, the present response generated by the LLM in reference to the inquiry, and prompts that steer the model towards accurate comprehension, analysis, and feedback aligned with the desired objectives. Moreover, in an attempt to mitigate the potential inclusion of extraneous information in LLM outputs, prompts can be augmented with limiting cues. For example, the prompt may conclude with a directive such as "provide the analysis result only" to focus the model's output on specific aspects relevant to the query. As our optimization strategy relies on an iterative process, an essential element of our solution is an iterative self-termination mechanism based on a voting method utilizing the LLM. When the optimization process advances to this stage, the terminal has already cached the user's question, the pre-optimized LLM response, and the post-optimized response. The essence of this iterative self-termination strategy is to compare the model's answers before and after optimization based on the user's question and select the one it considers superior. The reason for such a judgment mechanism, instead of simply iterating optimization until the user's maximum number of optimization iterations is reached, is that we cannot guarantee that the LLM output is 100% reliable. 
In simple terms, whether the optimized answer generated by the model truly achieves the goal of "optimization" is not entirely certain, depending on many factors, including whether the model correctly understands the user's intent, and whether it accurately digests the defects in the previous answer and makes targeted improvements. As the solution operates at the application level, and the model remains a black box, this self-termination strategy supports adaptive iterations while avoiding any potential negative consequences from further optimization beyond the optimal point. To summarize, this iterative optimization self-termination mechanism contributes to the stability of the output while supporting the process's adaptability. This mechanism is also prompt-driven. The terminal integrates the user's query, the current and previous answers from the model, and then employs a voting prompt to send the combined input to a remote server. A newly initialized model returns its judgment result. To facilitate the terminal in producing appropriate responses based on the voting result, it is critical to add a limiting instruction. Assuming that the labels of the responses before and after optimization are "1" and "2" respectively, the purpose of the limiting instruction is to allow the remote model to return only content limited to one of the two labels. When the model determines that the current response would better answer the user's query, it employs a greedy strategy by repeating the previously mentioned optimization steps. On the other hand, if the model determines that the previous answer is still the best response, it returns that answer to the user. Another crucial design aspect of our approach is that the response optimization mechanism of this process is not blind, which is attributable to the defect-guided enhancement. The inspiration for this optimization method comes from our understanding of the concept of the chain of thought [13]. The idea fundamentally involves using explicit guidance or requiring the model to output the reasoning process in order to improve the robustness of the output of large language models as much as possible. The core purpose of this idea is to prevent the model from blindly searching and generating answers. In the application scenario targeted by our solution, skipping the defect analysis and guided optimization steps would lead to a completely random optimization path, which would further result in the instability of the optimization process. For example, assuming the user's question is "Where were the 2012 Olympics held?" and the model's initial response is "The 2012 Olympics took place in London, UK, opening on July 27th and closing on August 12th.", it can be challenging to optimize this answer, even for a human. However, by providing some guidance information to the model, such as "The original question only asks about the location of the Olympics, while the previous answer includes irrelevant time information," the model can focus its optimization effort and remove the unnecessary time information from the refined answer with high probability. We can summarize the entire process into a flow structure, as shown in Figure 2, based on the various optimization nodes mentioned above. The framework operates with first-order memory since it ensures that each iteration produces a result that is theoretically better than all previous optimization results. 
Thus, the procedure effectively avoids the accumulation of previous outcomes, which could lead to an increase in token consumption as the number of iterations increases. It's noteworthy that since prompts are based on natural language, the design of prompts involved in each module in the figure may vary. Our focus in this paper is to provide such a comprehensive response optimization framework. In Section 3, we present a series of experimental results to demonstrate the effectiveness of our scheme. These experiments showcase the superiority of our approach in refining the responses generated by large language models. ## 3 Testing and result analysis ### The implementation and experiment design We have publicly released the source code of an intelligent conversational programme on GitHub, implemented based on the response refinement paradigm discussed in this article, while following a flexible modular design as shown in Figure 2. The project is available through the following link: [https://github.com/henryyantq/OptimALML](https://github.com/henryyantq/OptimALML). Here, we provide details of the model configurations and prompts used, as listed in Table 1. To date, public APIs for other large language models are not yet available. Thus, in this section, we will only use OpenAI's GPT as the target model for our optimization framework. As demonstrated in Table 1, the scheme is applied to the GPT-3.5-Turbo model (hereafter referred to as the _refined_ GPT-3.5), which presents a promising opportunity for improvement due to its status as an earlier version of the GPT family offering the chat completion interface. Additionally, the GPT-3.5-Turbo boasts advantages in terms of faster response generation and smaller computational overheads, making it a feasible choice even if multiple iterations are required to improve the quality of the original model's responses. The consumption of computational resources and time is fully manageable under these circumstances.

| **LLM** | **Module** | **Prompt used** |
| --- | --- | --- |
| GPT-3.5-Turbo | Defect analysis | Please list the defects of answer \(a\) to the question \(q\). List the defects in one sentence instead of a list with line breaks! |
| GPT-3.5-Turbo | Guided optimisation | The answer \(a\) to the question \(q\) is not optimal because that \(d\). Please refine the answer providing a better one regarding the aforementioned flaw. You should provide nothing but the answer. |
| GPT-3.5-Turbo | Voting for better | The question is \(q\), to which there are two optimal answers, one is \(a\), the other one is \(a^{*}\). Please answer either "1" or "2" if you think one of them is better, or "0" if you think they're equally good. Do not reply anything else than a number! |

Table 1: Settings of the implemented structure of the response optimization.

Figure 2: The flowchart depicting the entire process of an adaptive iterative optimization mechanism. The deep blue rounded rectangles represent variables generated by the user, the black ones represent variables generated by the terminal, and the light blue ones represent variables generated by the remote model. A "\(\oplus\)" denotes the prompt combination operation.

We have chosen five problems commonly faced in daily human life to evaluate the model, as presented in Table 2. 
Out of the five questions, questions 1, 4, and 5 are factual and questions 2 and 3 are inferential. Of the three factual questions, questions 1 and 5 themselves are somewhat misleading, particularly question 5, which requires the model to identify erroneous information within the question. As for the inferential questions, questions 2 and 3 have multiple correct answers. Therefore, the model is considered correct if it recognizes and provides any or all of the solutions that meet the criteria, with question 2 requiring the model to provide at least one accurate answer.

| **Question** | **Reference Answer** |
| --- | --- |
| How to replace the memory on a 2020 Apple M1 processor version MacBook Air? | In fact, the memory of this MacBook is NOT upgradable. |
| How to use the four numbers 2, 2, 8, and 8 along with basic arithmetic operations to obtain 24, with each number used exactly once? | The answer varies. Any answer that meets the requirements is acceptable. |
| The first five numbers in a sequence are 2, 3, 6, 15, and 45, respectively. If the sixth number has only one decimal place and the sequence is incremented, what may be the sixth number? | The answer can be 157.5, or more rigorously, multiple since this is an open question with limited conditions given. |
| Who was the father of Shinkansen? | Shinji Sogō is credited with the creation of the first "bullet train", the Tōkaidō Shinkansen. |
| Why have Formula 1 racing cars adopted the design of halo since 2016? | F1 officially adopted the design in 2018, NOT 2016. It is for protecting the drivers from potential head damage. |

Table 2: List of the selected questions and the corresponding reference answers. The reference answers are given by the human expert.

Figure 3: Answers of the five selected questions generated by the original GPT-3.5-Turbo, the original GPT-4, and the refined GPT-3.5 (horizontal comparison).

The entire experiment procedure consists of two main stages. In the first stage (the horizontal comparison testing), we compare the responses provided by the original GPT-3.5-Turbo, the refined GPT-3.5 and the GPT-4 when addressing identical questions. The GPT-4's response process will remain unoptimized, enabling us to assess whether an earlier LLM with our enhancement process applied can rival its original self as well as the current industry-leading model in terms of response quality. In addition, we compare the computational overhead incurred by the native GPT-4 model and the refined GPT-3.5 model when addressing the same questions, emphasizing the cost benefits of applying our peripheral optimization scheme over using a more advanced model for refined feedback. In the second stage (the longitudinal comparison testing), we carry out a comparative assessment of the influence of the optimization framework's completeness on the resulting response quality, using the same set of questions. In specific terms, we design two simplified versions based on the original optimization framework. The first simplified version is called "_blind_ refinement", which simplifies the original guided optimization mechanism by allowing the model to optimize without referencing any prompts in each loop. The second simplified version is named "_reckless_ refinement", whereby the voting mechanism is removed, ensuring that the optimization process never automatically stops before reaching the predetermined maximum number of iterations. 
The goal of this stage is to observe and analyze the influence of various modules within the framework on optimization outcomes, underscoring the significance of module integration. It's crucial to elaborate on the measuring approach we have taken. To begin with, all the questions we selected for comparative testing have correct answers, which minimizes the uncertainty of response output, reduces the subjectivity of the evaluation process, and ensures the reliability of the results presented. In addition, since humans still possess a superior understanding and evaluative ability in language and question-answering, the final evaluation of response quality for each question from different models is carried out by a human expert. The assessment of response quality is based on three criteria: accuracy, conciseness, and completeness. Accuracy measures how correct the responses are. Conciseness refers to whether the model's answers contain a significant amount of unnecessary information. Completeness measures whether the model can address all the key points raised in the question. The human expert takes all three aspects into consideration and provides a comprehensive evaluation accordingly. Overall, the aforementioned experiments intuitively illustrate the comprehensiveness of our response optimization process, and highlight its potential for enhancing real-world large language models as well as competing against the very best. ### Experiment results and analysis The outcomes from both phase 1 horizontal comparative testing and phase 2 longitudinal comparative testing are presented in Figure 3 and Figure 4 respectively, while Table 3 presents a comprehensive assessment of all the response outcomes to each question, as evaluated by the human expert. To facilitate a more intuitive comparison for readers, we have not included the intermediate results generated during the iterative optimization process. If it is necessary to refer to the intermediate results during subsequent analysis, we will enumerate them at the corresponding location. Based on the comprehensive assessment results of the horizontal comparison test (see Table 3), our refined GPT-3.5 model achieved an astonishing 100% accuracy in answering these questions, a clear advantage over both the original GPT-4 and GPT-3.5-Turbo models without any response optimization strategies. In addition, the refined GPT-3.5 also surpassed all competitors regarding answer conciseness. Yet in terms of answer completeness, although GPT-4 answered one question incorrectly, our refined GPT-3.5 model could not surpass its advantage in delivering more thorough explanations. Notably, the answers obtained from the refined GPT-3.5 are results of the iterative optimization of the response refinement framework proposed in this paper, built upon the initial answers provided by the native GPT-3.5-Turbo. As GPT-3.5-Turbo clearly performed the worst according to these results, this demonstrates that our paradigm is capable of improving the accuracy and reducing the redundancy of the initial answers to produce new ones of higher quality. Furthermore, although the refined GPT-3.5 may require more time for iteration, its total computational cost is much lower than that of GPT-4. 
| **Quota** | **Q1** | **Q2** | **Q3** | **Q4** | **Q5** |
| --- | --- | --- | --- | --- | --- |
| Accuracy | **refined GPT-3.5** = GPT-4 | **refined GPT-3.5** | All | **refined GPT-3.5** | **refined GPT-3.5** = GPT-4 |
| Conciseness | **refined GPT-3.5** | **refined GPT-3.5** | hard to decide | **refined GPT-3.5** | **refined GPT-3.5** |
| Completeness | GPT-4 | **refined GPT-3.5** | GPT-3.5-Turbo | **refined GPT-3.5** | GPT-4 |

Table 3: The comprehensive assessment results provided by the human expert. The model name indicated in the corresponding row and column refers to the model whose generated answer is acknowledged as having the best performance on that specific question and quota. The evaluation targets only include the three models (GPT-3.5-Turbo, GPT-4, and refined GPT-3.5) in Figure 3.

Figure 4: Answers of the five selected questions generated by the two pruned variations of the refined GPT-3.5, i.e. the _blind refinement_ version and the _reckless refinement_ version (longitudinal comparison). "NR" indicates that the process is essentially equivalent to the complete refined GPT-3.5 framework, so there's no need for a duplicate experiment.

Even when using the same number of tokens, the refined GPT-3.5 with a maximum iteration limit of 3 incurs 5 to 10 times lower API usage costs, thus greatly reducing the economic burden on users. The reason for the lower computational cost is not only the use of a more economical model, but also the first-order memory of the optimization framework mentioned earlier, since such an operation mode does not exponentially accumulate the number of tokens, and the resource cost of each iteration can be considered almost constant. The organic combination of each module in the optimization framework is also essential for obtaining high-quality responses. From the answers provided by variants of refined GPT-3.5 that have removed one key module for each question (as shown in Figure 4), it can be seen that both weakening the purpose of optimization (the guided optimization mechanism) and removing the ability of self-review and self-stop (the voting mechanism) significantly reduce optimization capabilities and even result in negative optimization. We found some pivotal clues in the intermediate results that explain the reasons for these shortcomings. As an example, consider the iterative optimization process for the second question under the blind refinement framework: the initial response given by the native GPT-3.5-Turbo contained numerous incorrect steps and an incorrect equation, "\((8\times 8)+2\div 2-9=24\)". This answer is problematic because the original question does not include the number 9, does not allow for any numbers other than 8, 8, 2, 2 to be used in the calculation, and the two sides of the equation are in fact not equal. After the first round of iteration, the blind refinement mechanism removed only the cumbersome steps, keeping the erroneous equation as the new response, which was still clearly incorrect. Even after the second and third rounds of iteration, this incorrect equation remained a major part of the answer, resulting in a final optimized solution that was still incorrect. This suggests that allowing the model to optimize without a clear purpose can result in the model repeatedly rewriting the answer without actually improving it. 
At the same time, in the example of the reckless refinement framework that removes the voting module to deal with the same question, the model provided a correct answer by the end of the second iteration, but unexpectedly produced an incorrect answer by the third iteration. The lack of self-review and self-stop mechanisms means that the model continued to optimize beyond the point where it had already obtained the correct answer, and as a result, ended up outputting an incorrect answer that was generated in the final round of optimization. Both of these typical examples serve to underscore the integral roles played by these two modules throughout the optimization process. To sum up, we have substantiated through the above experiments that using the iterative optimization framework proposed in this paper on a large language model can significantly enhance the quality of the generated answers at a lower cost. Furthermore, the experiments highlight the contributions made by the guided optimization mechanism and the voting mechanism in improving optimization capabilities. ## 4 Conclusion In this paper, we introduce a fully-autonomous adaptive iterative response optimization paradigm, inspired by concepts from RLHF and the chain of thought. This approach relies solely on simple prompt engineering and the LLM API, without the need for manual intervention, auxiliary models, or access to internal structures and parameters of language models. Specifically, we present a detailed optimization framework utilizing an efficient modular design, applied to the GPT-3.5-Turbo. Our experiments show that our optimization mechanism enables a less-capable model to achieve response quality better than its original self, and even on par with one of the best current models, while reducing resource consumption. Through this scheme, we demonstrate that in many situations, the existing question-answering interaction paradigm may not fully harness the potential of generative language models. Appropriately designing prompts and planning response interaction logic is a crucial approach to further unleash a model's potential.
2307.12022
A Flexible Framework for Incorporating Patient Preferences Into Q-Learning
In real-world healthcare problems, there are often multiple competing outcomes of interest, such as treatment efficacy and side effect severity. However, statistical methods for estimating dynamic treatment regimes (DTRs) usually assume a single outcome of interest, and the few methods that deal with composite outcomes suffer from important limitations. This includes restrictions to a single time point and two outcomes, the inability to incorporate self-reported patient preferences and limited theoretical guarantees. To this end, we propose a new method to address these limitations, which we dub Latent Utility Q-Learning (LUQ-Learning). LUQ-Learning uses a latent model approach to naturally extend Q-learning to the composite outcome setting and adopt the ideal trade-off between outcomes to each patient. Unlike previous approaches, our framework allows for an arbitrary number of time points and outcomes, incorporates stated preferences and achieves strong asymptotic performance with realistic assumptions on the data. We conduct simulation experiments based on an ongoing trial for low back pain as well as a well-known completed trial for schizophrenia. In all experiments, our method achieves highly competitive empirical performance compared to several alternative baselines.
Joshua P. Zitovsky, Leslie Wilson, Michael R. Kosorok
2023-07-22T08:58:07Z
http://arxiv.org/abs/2307.12022v1
# A Flexible Framework for Incorporating Patient Preferences Into Q-Learning ###### Abstract In real-world healthcare problems, there are often multiple competing outcomes of interest, such as treatment efficacy and side effect severity. However, statistical methods for estimating dynamic treatment regimes (DTRs) usually assume a single outcome of interest, and the few methods that deal with composite outcomes suffer from important limitations. This includes restrictions to a single time point and two outcomes, the inability to incorporate self-reported patient preferences and limited theoretical guarantees. To this end, we propose a new method to address these limitations, which we dub _Latent Utility Q-Learning (LUQ-Learning)_. LUQ-Learning uses a latent model approach to naturally extend Q-learning to the composite outcome setting and adopt the ideal trade-off between outcomes to each patient. Unlike previous approaches, our framework allows for an arbitrary number of time points and outcomes, incorporates stated preferences and achieves strong asymptotic performance with realistic assumptions on the data. We conduct simulation experiments based on an ongoing trial for low back pain as well as a well-known completed trial for schizophrenia. In all experiments, our method achieves highly competitive empirical performance compared to several alternative baselines. _Keywords:_ Dynamic Treatment Regime, Precision Medicine, Latent Variable Model, Multiple Outcomes ## 1 Introduction Precision medicine (Kosorok and Laber, 2019) is a subfield of statistics and reinforcement learning concerned with estimating _dynamic treatment regimes (DTRs)_ (Tsiatis et al., 2019), or a sequence of treatment rules at different time points that depend on a patient's evolving characteristics. Precision medicine allows researchers to leverage datasets collected from clinical trials, observational studies or electronic health records in order to provide data-driven support to clinicians and policy makers. It also has the potential to shape healthcare in settings where interaction with medical professionals is undesired or difficult, as could be the case, for example, with low-income patients (Wahl et al., 2018). In many healthcare settings, there are multiple outcomes of interest (Chan et al., 2013). For example, this work is motivated in part by the _Biomarkers for Evaluating Spine Treatments (BEST)_ study (U.S. National Library of Medicine, 2022), an ongoing NIH-funded sequential multiple assignment randomized trial (SMART) (Almirall et al., 2014) directed by researchers at UNC Chapel Hill as part of the Back Pain Consortium Research Program (Mauck et al., 2023). The goal of the BEST study is to estimate a DTR for patients suffering from chronic low back pain (Andersson, 1999). While a naive analysis would focus solely on reducing pain, maximizing pain relief may come at a cost of side effects on fatigue and cognition. A truly optimal DTR should account for both treatment efficacy and side effect severity. Over the last decade, methods have been proposed to estimate DTRs under a variety of settings, including settings with a single decision point (Zhang et al., 2012; Zhou et al., 2017), multiple decision points (Zhao et al., 2015; Liu et al., 2018), an infinite number of decision points (Luckett et al., 2020; Levine et al., 2020) and imperfect outcome measurements (Zhao et al., 2015; Wu et al., 2020). Most of these works assume a single, known outcome to maximize. 
In settings with multiple outcomes of interest, a common approach is to use a fixed summary measure of the outcomes known as a _utility function_, and apply standard approaches to maximize the resulting utilities (Hayes et al., 2022). However, specifying the ideal utility function in advance is not always feasible. Several approaches have been proposed to estimate the utility function from data. For example, Jiang et al. (2021) proposed a minimax approach whereby the utility was a convex combination of outcomes and the convex weights were scalars tuned so as to maximize the minimum estimated value among the multiple outcomes. However, such an approach does not account for the possibility that some outcomes are more important than others, nor does it account for individual-level variation in the utilities such as patient preferences. Luckett et al. (2021) proposed using inverse reinforcement learning (Osa et al., 2018) to learn a patient-specific utility function from decisions of expert clinicians. However, in many healthcare datasets, the observed decisions are made randomly (Kosorok and Moodie, 2015), by the patients themselves, or by clinicians who act suboptimally (Dehon et al., 2017). Moreover, their methodology does not directly account for patient preferences. Butler et al. (2018) allows the utility of patients to be random and uses a latent variable model to estimate it. Their framework does not assume access to expert-level data and directly incorporates stated preference surveys. However, the framework is restricted to two outcomes and a single decision point, and does not incorporate post-study satisfaction surveys. Moreover, consistency of their latent variable model was left as an assumption and its asymptotic distribution was not derived. In her dissertation, Butler (2016) extended the work of Butler et al. (2018) to datasets with multiple decision points and measures of patient satisfaction. However, the extension assumes two proximal outcomes are measured after every time point, and chooses actions that maximize only the immediate next proximal outcome without accounting for outcomes occurring later in time. It also assumes a binary measure of satisfaction and a two-dimensional outcome vector at each time point, and has no theoretical guarantees. Discrete choice experiments (Reed Johnson et al., 2013) and conjoint analysis (Bridges et al., 2011) aim to extract underlying treatment preferences from stated-preference surveys, and many discrete choice models can be interpreted as estimating latent utilities which drive the choices made by respondents (McFadden, 1974). This is slightly different from our goal, however, which is to estimate latent utilities that measure underlying contentedness with experienced outcomes and occur _after_ treatment is given. Moreover, while discrete choice experiments usually focus on determining the importance of different factors in predicting the utility (Hauber et al., 2016), our primary interest is to accurately predict the utility directly so as to estimate an optimal DTR. Our goal is also closely related to that of preference-based deep reinforcement learning (Christiano et al., 2017; Ibarz et al., 2018), though such works usually assume an interactive process that is incompatible with learning from static datasets. To this end, we propose a novel framework, _Latent Utility Q-Learning (LUQ-Learning)_, that incorporates multiple outcomes into the Q-learning algorithm (Schulte et al., 2014) via a latent model approach. 
Unlike previous approaches, our framework allows for an arbitrary finite number of decision points, outcomes of interest and treatment possibilities. Our framework also incorporates both discrete choice stated preference questionnaires and patient satisfaction questionnaires. We then derive theoretical properties of LUQ-Learning while making only modest assumptions, giving our framework strong theoretical guarantees. Finally, we apply LUQ-Learning to simulated patients from chronic low back pain and schizophrenia studies, and achieve excellent empirical performance. ## 2 Background ### Traditional Data Setup In the traditional precision medicine setup (Kosorok and Laber, 2019) with two decision points, the observed data consists of \(N\) iid trajectories \(\mathcal{D}=\{(\mathbf{X}_{1}^{i},A_{1}^{i},\mathbf{X}_{2}^{i},A_{2}^{i},Y^{i})\}_{i=1}^{N}\) where \(\mathbf{X}_{1}^{i}\in\mathcal{X}_{1}\) are the covariates measured prior to the first treatment, \(A_{1}^{i}\in\mathcal{A}_{1}\) is the first treatment assigned, \(\mathbf{X}_{2}^{i}\in\mathcal{X}_{2}\) are the covariates measured between the first and second treatments, \(A_{2}^{i}\in\mathcal{A}_{2}\) is the second treatment assigned, and \(Y\in\mathcal{Y}\subset\mathbb{R}\) is the observed _utility_ or _reward_ scaled so that higher values are better. We also define the observation history prior to the first and second treatments as \(\mathbf{H}_{1}=\mathbf{X}_{1}\in\mathcal{H}_{1}\) and \(\mathbf{H}_{2}=(\mathbf{X}_{1},A_{1},\mathbf{X}_{2})\in\mathcal{H}_{2}\), respectively. The set of possible actions may depend on observation history, and we will often denote the action spaces as \(\mathcal{A}_{\mathbf{H}_{1}}\) and \(\mathcal{A}_{\mathbf{H}_{2}}\) to reflect this fact. The goal is to estimate a DTR, or sequence of decision rules \(\pi=(\pi_{1},\pi_{2})\) where \(\pi_{1}:\mathcal{H}_{1}\rightarrow\mathcal{A}_{1}\) and \(\pi_{2}:\mathcal{H}_{2}\rightarrow\mathcal{A}_{2}\). \(V(\pi)=\mathbb{E}_{\pi}[Y]\) is known as the _value_ of DTR \(\pi\), where \(\mathbb{E}_{\pi}[Y]\) is the expected utility that would be observed if the patient population were treated according to \(\pi\), and the optimal DTR \(\pi^{*}=(\pi_{1}^{*},\pi_{2}^{*})\) satisfies \(V(\pi^{*})\geq V(\pi)\) for all other DTRs \(\pi\). While many previous works assume two outcomes \(Y_{1}\) and \(Y_{2}\) measured after each decision point and define the value as \(V(\pi)=\mathbb{E}_{\pi}[Y_{1}+Y_{2}]\), this is just a special case of our setup with \(Y=Y_{1}+Y_{2}\). Define the _Q-functions_ as \(Q_{2}(\mathbf{h}_{2},a_{2})=\mathbb{E}[Y|\mathbf{H}_{2}=\mathbf{h}_{2},A_{2}=a_{2}]\) and \(Q_{1}(\mathbf{h}_{1},a_{1})=\mathbb{E}[\max_{a_{2}}Q_{2}(\mathbf{H}_{2},a_{2})|\mathbf{H}_{1}=\mathbf{h}_{1},A_{1}=a_{1}]\). Under standard causal inference assumptions to be discussed shortly, it is easy to show that \(\pi_{2}^{*}(\mathbf{h}_{2})=\operatorname*{argmax}_{a_{2}}Q_{2}(\mathbf{h}_{2},a_{2})\) and \(\pi_{1}^{*}(\mathbf{h}_{1})=\operatorname*{argmax}_{a_{1}}Q_{1}(\mathbf{h}_{1},a_{1})\) (Bertsekas, 2012). To this end, the _Q-learning_ algorithm (Schulte et al., 2014) approximates \(Q_{2}\) as \(\hat{Q}_{2}\) by using a regression algorithm with covariates \((\mathbf{H}_{2},A_{2})\) and responses \(Y\). \(Q_{1}\) is then estimated as \(\hat{Q}_{1}\) by using a regression algorithm with covariates \((\mathbf{H}_{1},A_{1})\) and responses \(\max_{a_{2}}\hat{Q}_{2}(\mathbf{H}_{2},a_{2})\). 
Finally, \(\pi^{*}\) is estimated as \(\hat{\pi}=(\hat{\pi}_{1},\hat{\pi}_{2})\) where \(\hat{\pi}_{1}(\mathbf{h}_{1})=\operatorname*{argmax}_{a_{1}}\hat{Q}_{1}(\mathbf{h}_{1},a_{1})\) and \(\hat{\pi}_{2}(\mathbf{h}_{2})=\operatorname*{argmax}_{a_{2}}\hat{Q}_{2}(\mathbf{h}_{2},a_{2})\). While several alternatives to Q-learning have been proposed (Liu et al., 2018; Shi et al., 2018), our focus will be on Q-learning. Moreover, while we have assumed two decision points, the setup easily extends to more than two time points as well (Schulte et al., 2014). ### Our Data Setup We assume access to a dataset from a SMART with two time points and a discrete action space for simplicity. It is fairly straightforward to extend our methodological framework and theoretical results to observational data with more than two time points, though extensions to continuous action spaces would be less trivial (Antos et al., 2007). In contrast to the traditional setup, we assume access to patient preference and satisfaction questionnaire responses and a vector of outcomes. Specifically, let \(\mathbf{X}_{1}\in\mathcal{X}_{1}\) be covariate information (excluding questionnaire data) and \(\mathbf{W}_{1}\in\mathcal{W}_{1}\) be stated preference questionnaire responses collected prior to the first treatment; \(A_{1}\in\mathcal{A}_{\mathbf{H}_{1}}\) be the first treatment assigned; \(B_{1}\in\mathcal{B}_{1}\subset\mathbb{R}\) be a self-reported satisfaction measure about \(A_{1}\) assessed prior to the second treatment, with higher values indicating greater satisfaction; \(\mathbf{X}_{2}\in\mathcal{X}_{2}\) be covariate information and \(\mathbf{W}_{2}\in\mathcal{W}_{2}\) be stated preferences collected between the first and second treatments; \(A_{2}\in\mathcal{A}_{\mathbf{H}_{2}}\) be the second treatment assigned; \(\mathbf{Y}\in\mathcal{Y}\subset\mathbb{R}^{q}\) be a vector of outcomes measured after \(A_{2}\), each of which is better when higher; and \(B_{2}\in\mathcal{B}_{2}\subset\mathbb{R}\) be a self-reported satisfaction measure recorded at the end of the study. We assume our observed data consists of \(N\) iid trajectories \(\mathcal{D}=\{(\mathbf{X}_{1}^{i},\mathbf{W}_{1}^{i},A_{1}^{i},B_{1}^{i},\mathbf{X}_{2}^{i},\mathbf{W}_{2}^{i},A_{2}^{i},\mathbf{Y}^{i},B_{2}^{i})\}_{i=1}^{N}\). Moreover, we define our observation histories as \(\mathbf{H}_{1}=(\mathbf{X}_{1},\mathbf{W}_{1})\in\mathcal{H}_{1}\), \(\mathbf{H}_{2}=(\mathbf{H}_{1},A_{1},B_{1},\mathbf{X}_{2},\mathbf{W}_{2})\in\mathcal{H}_{2}\) and \(\mathbf{H}_{3}=(\mathbf{H}_{2},A_{2},\mathbf{Y},B_{2})\in\mathcal{H}_{3}\). In Figure 1 we summarize the temporal sequence in which relevant variables are observed. Let \(\mathbf{A}=(A_{1},A_{2})\), \(\pi(\mathbf{H}_{1},\mathbf{H}_{2})=(\pi_{1}(\mathbf{H}_{1}),\pi_{2}(\mathbf{H}_{2}))\) and \(\mathbf{Y}^{*}(\mathbf{a})\) represent the outcome vector that would be observed for a patient if they received treatment sequence \(\mathbf{A}=\mathbf{a}\). Note that \(\mathbf{Y}=\mathbf{Y}^{*}(\mathbf{A})\) and \(V(\pi)=\mathbb{E}\left[\sum_{\mathbf{a}\in\mathcal{A}_{\mathbf{H}_{1}}\times\mathcal{A}_{\mathbf{H}_{2}}}I(\mathbf{a}=\pi(\mathbf{H}_{1},\mathbf{H}_{2}))\mathbf{Y}^{*}(\mathbf{a})\right]\). We assume that \(\mathbf{Y}^{*}(\mathbf{a})\perp\!\!\!\perp A_{1}|\mathbf{H}_{1}\) and \(\mathbf{Y}^{*}(\mathbf{a})\perp\!\!\!\perp A_{2}|\mathbf{H}_{2}\) for all \(\mathbf{a}\). This is satisfied when there are no unmeasured variables that affect both \(\mathbf{A}\) and \(\mathbf{Y}\). 
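To fix ideas, the following minimal sketch implements the standard two-stage Q-learning recursion from the traditional setup above. It is illustrative only: it assumes (purely for convenience) binary treatments coded as 0/1, numeric feature matrices for the histories, and linear working models for the Q-functions, none of which are requirements of our framework.

```python
# Two-stage Q-learning with linear working models (illustrative sketch only).
import numpy as np
from sklearn.linear_model import LinearRegression


def _fitted_q(model, H, a):
    """Evaluate a fitted Q-model at every history in H with the action fixed to a."""
    return model.predict(np.column_stack([H, np.full(len(H), a)]))


def q_learning(H1, A1, H2, A2, Y):
    # Stage 2: regress the observed utility Y on (H2, A2) to obtain Q2-hat.
    q2 = LinearRegression().fit(np.column_stack([H2, A2]), Y)

    # Pseudo-outcome: max over a2 of the fitted Q2, evaluated at each observed H2.
    v2 = np.maximum(_fitted_q(q2, H2, 0), _fitted_q(q2, H2, 1))

    # Stage 1: regress the pseudo-outcome on (H1, A1) to obtain Q1-hat.
    q1 = LinearRegression().fit(np.column_stack([H1, A1]), v2)

    # Estimated decision rules: argmax of the fitted Q-functions over {0, 1}.
    pi1 = lambda H: (_fitted_q(q1, H, 1) > _fitted_q(q1, H, 0)).astype(int)
    pi2 = lambda H: (_fitted_q(q2, H, 1) > _fitted_q(q2, H, 0)).astype(int)
    return pi1, pi2
```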
A major purpose of SMARTs is to avoid unmeasured confounders by randomizing treatments.

Figure 1: Summary of Observed Data. The rectangles below each time point list the variables observed at that time point, while the circles represent actions that were taken afterwards. For example, we can see that \(\mathbf{X}_{1},\mathbf{W}_{1}\) was observed at the first time point, and then action \(A_{1}\) was taken, and then \(B_{1},\mathbf{X}_{2},\mathbf{W}_{2}\) was observed. \(\mathbf{H}_{1}\) is all information observed prior to \(A_{1}\) while \(\mathbf{H}_{2}\) is all information observed prior to \(A_{2}\).

### The BEST Study As our framework is partly motivated by the BEST study, we briefly describe the expected data structure from BEST here. The study will last for 26 weeks: after an initial screening visit, covariates will be assessed over a two-week run-in period, followed by two 12-week treatment periods, the start of which assigns treatments randomly to patients, followed by assessments at the end of the study. We expect complete data for at least 600 patients. \(\mathbf{X}_{1}\) consists of baselines related to back pain history and severity, demographics, financial security and lab work results. \(\mathbf{W}_{1}\) (as well as \(\mathbf{W}_{2}\)) consists of responses to 12 binary questions, each of which asks patients to choose one set of outcome measurements over another based on seven outcome variables, as well as a question asking patients to rank the outcomes by preference. The 12 binary questions were generated using a tool similar to CAPER Treatment (Wilson et al., 2023), but with different attributes so as to focus on outcome preferences instead of treatment preferences. \(\mathcal{A}_{1}=\{1,2,3,4\}\) consists of four possible categorical treatments corresponding to acceptance and commitment therapy, duloxetine (Dhaliwal et al., 2022), enhanced self-care, and evidence-based exercise and manual therapy, respectively. The details of these treatments are not relevant to our paper and are not discussed further. \(\mathbf{X}_{2}\) consists of a number of variables as well, including responses to the questionnaires Patient's Global Impression of Change (PGIC) (Hurst and Bolton, 2004) and Pain, Enjoyment and General Activity (PEG) (Krebs et al., 2009). Based on these responses, patients will be grouped into response class \(C\in\mathcal{C}=\{1,2,3,4\}\). Patients will remain on current treatment if \(C=1\), augment current treatment with an additional random treatment if \(C=2\), randomly augment or switch treatment if \(C=3\) and switch to a different random treatment if \(C=4\). The exception is when \(A_{1}=1,C\in\{3,4\}\), in which case a patient will always augment current treatment instead of switching. We thus have \(\mathcal{A}_{2}=\{x,x+y|x,y\in\{1,2,3,4\},x<y\}\) where \(A_{2}=x+y\) means the patient is taking medications \(x\) and \(y\) simultaneously. For simplicity, we will assume \(\mathbf{Y}\) consists of three of the seven outcomes assessed in the stated preference surveys, namely fatigue, cognition and pain after 26 weeks. \(B_{1}\) and \(B_{2}\) are ordinal (\(\mathcal{B}_{1},\mathcal{B}_{2}=\{1,2,\ldots,7\}\)) and assess satisfaction with the set of these three outcomes jointly. As each component of \(\mathbf{Y}\) involves some summary of a small number of ordinal and binary questions, we will assume the components of \(\mathbf{Y}\) take on a discrete number of ordinal values. 
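The response-class rules above induce a history-dependent second-stage action set. The short sketch below encodes those rules; the function name, and the encoding of single treatments as one-element tuples and augmented treatments as sorted pairs, are our own illustrative choices rather than anything prescribed by the study protocol.

```python
# Feasible second-stage actions for BEST, given first-stage treatment a1 in {1,2,3,4}
# and response class c in {1,2,3,4} (illustrative encoding only).

def feasible_a2(a1: int, c: int) -> set:
    stay = {(a1,)}                                   # remain on current treatment
    augment = {tuple(sorted((a1, x))) for x in range(1, 5) if x != a1}
    switch = {(x,) for x in range(1, 5) if x != a1}  # switch to a different treatment
    if c == 1:
        return stay
    if c == 2:
        return augment
    # C = 3: augment or switch; C = 4: switch only --
    # except that patients with A1 = 1 always augment rather than switch.
    if a1 == 1:
        return augment
    return augment | switch if c == 3 else switch

# Example: feasible_a2(1, 4) == {(1, 2), (1, 3), (1, 4)}
```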
As before, the goal is to estimate a DTR \(\pi=(\pi_{1},\pi_{2})\) where \(\pi_{1}:\mathcal{H}_{1}\rightarrow\mathcal{A}_{1}\) is applied to choose treatment \(A_{1}\) after covariates \(\mathbf{X}_{1}\) and stated preferences \(\mathbf{W}_{1}\) are measured for the first time point, and \(\pi_{2}:\mathcal{H}_{2}\rightarrow\mathcal{A}_{2}\) is applied to choose treatment \(A_{2}\) after covariates \(\mathbf{X}_{2}\), stated preferences \(\mathbf{W}_{2}\) and stated satisfaction \(B_{1}\) are measured for the second time point (see Figure 1). \(\pi\) should be estimated to optimize all of the outcomes in \(\mathbf{Y}\) simultaneously, with the relative importance of each outcome tailored to patients based on their preferences. ## 3 Methodology ### Latent Utility Q-Learning We assume there exists an unobserved vector \(\mathbf{E}\in\mathcal{E}\) for each patient, where \(\mathcal{E}\) is the \(q-1\)-dimensional probability simplex, such that a patient with \(\mathbf{E}=\mathbf{e}\) will end up preferring \(\mathbf{Y}=\mathbf{y}_{1}\) over \(\mathbf{Y}=\mathbf{y}_{2}\) if \(\mathbf{e}^{T}\mathbf{y}_{1}>\mathbf{e}^{T}\mathbf{y}_{2}\). Define the random, unobserved utility of a patient as \(U=\mathbf{E}^{T}\mathbf{Y}\in\mathbb{R}\). We assume that \(\mathbb{E}[B_{2}|\mathbf{H}_{2},A_{2}]\) has a monotonic relationship with \(\mathbb{E}[U|\mathbf{H}_{2},A_{2}]\). Define \(\pi^{\text{opt}}=(\pi^{\text{opt}}_{1},\pi^{\text{opt}}_{2})\) as the optimal DTR if and only if \(\mathbb{E}_{\pi^{\text{opt}}}[U]\geq\mathbb{E}_{\pi}[U]\) for all other DTRs \(\pi\). Then by using the same theory as that behind Q-learning, it is easy to see that \(\pi^{\text{opt}}_{2}(\mathbf{h}_{2})=\operatorname*{argmax}_{a_{2}\in \mathcal{A}_{\mathbf{h}_{2}}}Q_{2}(\mathbf{h}_{2},a_{2})\) and \(\pi^{\text{opt}}_{1}(\mathbf{h}_{1})=\operatorname*{argmax}_{a_{1}\in \mathcal{A}_{\mathbf{h}_{1}}}Q_{1}(\mathbf{h}_{1},a_{1})\) where \(Q_{2}(\mathbf{H}_{2},A_{2})=\mathbb{E}[U|\mathbf{H}_{2},A_{2}]\) and \(Q_{1}(\mathbf{H}_{1},A_{1})=\mathbb{E}\left[\max_{a_{2}\in\mathcal{A}_{ \mathbf{H}_{2}}}Q_{2}(\mathbf{H}_{2},a_{2})|\mathbf{H}_{1},A_{1}\right]\). We also assume that \((\mathbf{Y},A_{2})\perp\!\!\!\perp\mathbf{E}|\mathbf{H}_{2}\), following Butler et al. (2018). Under this assumption, \(\mathbb{E}[\mathbf{E}^{T}\mathbf{Y}|\mathbf{H}_{2},A_{2}]=\mathbb{E}[\mathbf{ E}|\mathbf{H}_{2}]^{T}\mathbb{E}[\mathbf{Y}|\mathbf{H}_{2},A_{2}]\). To estimate \(\pi^{\text{opt}}\), LUQ-Learning first estimates \(\mathbb{E}[\mathbf{Y}|\mathbf{H}_{2},A_{2}]\) as \(\widehat{\mathbb{E}}[\mathbf{Y}|\mathbf{H}_{2},A_{2}]\) using a regression algorithm and estimates \(\mathbb{E}[\mathbf{E}|\mathbf{H}_{2}]\) as \(\widehat{\mathbb{E}}[\mathbf{E}|\mathbf{H}_{2}]\) using a methodology to be discussed shortly. We can then estimate \(Q_{2}\) as \(\widehat{Q}_{2}(\mathbf{H}_{2},A_{2})=\widehat{\mathbb{E}}[\mathbf{E}|\mathbf{ H}_{2}]^{T}\widehat{\mathbb{E}}[\mathbf{Y}|\mathbf{H}_{2},A_{2}]\) and \(\pi^{\text{opt}}\) using a typical Q-learning approach. The most complicated component is estimating \(\mathbb{E}[\mathbf{E}|\mathbf{H}_{2}]\). One way to estimate this quantity is by proposing a parametric model directly for \(\Pr(\mathbf{H}_{2}|\mathbf{E})\), similar to Butler et al. (2018). However, this approach fails to utilize \(B_{2}\) in the estimation. 
LUQ-Learning instead proposes a model \(\Pr_{\theta}(\mathbf{H}_{3}|\mathbf{E})\) for \(\Pr(\mathbf{H}_{3}|\mathbf{E})\), where \(\theta\) is a parameter vector with imposed prior \(\Pr(\theta)\), and estimates \(\theta\) by maximizing the observed log-posterior (Givens and Hoeting 2012). With the right parametrization, estimating \(\Pr(\mathbf{H}_{2}|\mathbf{E})\) will be straightforward once \(\theta\) is estimated, and \(\mathbb{E}[\mathbf{E}|\mathbf{H}_{2}]\) can then be estimated using an off-the-shelf random sampling algorithm (Givens and Hoeting 2012). We briefly elaborate on the differences between \(U\) and \(B_{2}\). \(U\) is a quantity measuring the true "goodness" of an outcome vector according to a patient's preferences, whereas \(B_{2}\) only gives self-reported satisfaction with those outcomes. While \(U\) could be continuous, \(B_{2}\) will oftentimes be a noisy and discretized approximation of \(U\), and estimating a DTR solely based on \(B_{2}\) may not lead to optimal decisions. For example, suppose \(B_{2}\) were binary and related to a continuous-valued \(U\) via the threshold function \(B_{2}=I(U>0)\). In other words, patients report they are satisfied (\(B_{2}=1\)) if \(U\) is sufficiently large and report they are unsatisfied (\(B_{2}=0\)) otherwise. In this case, a DTR optimized solely based on \(B_{2}\) would increase the chances that \(U>0\), but such a DTR may fail to yield large positive \(U\) (as it only cares whether \(U>0\) and not how large \(U\) is more generally). Moreover, even in cases where optimizing \(\mathbb{E}_{\pi}[U]\) and \(\mathbb{E}_{\pi}[B_{2}]\) yield the same DTR asymptotically, using only \(B_{2}\) in finite sample settings could lead to a significant drop in statistical efficiency. Finally, LUQ-Learning allows for estimation of a posterior distribution of \(U\), which is useful for predicting patient preferences more generally as well as studying the relationship between preferences and measured covariates (Muhlbacher and Johnson, 2016; Hollin et al., 2020). ### Our Generative Model for BEST We apply LUQ-Learning to a generative model motivated by the BEST study design. We describe a few relevant components below: \[\mathbf{V}\sim\mathcal{N}_{2}(0,\mathbf{I}),\] \[\mathbf{E}=\text{SoftMax}(\mathbf{V})=\frac{(\exp(\mathbf{V}),1)}{1+\text{sum}(\exp(\mathbf{V}))},\] \[W_{1j}\overset{ind}{\sim}\text{Bernoulli}(p=\sigma(\beta_{0,j,1}+\beta_{1,j,1}^{T}\mathbf{V}))\quad(1\leq j\leq 12),\] \[\text{Pr}(\mathbf{W}_{1}^{R}=\mathbf{w})=\frac{\exp(-\lambda_{1}T(\mathbf{w},\mathbf{E}^{R}))}{\sum_{\mathbf{v}\in\mathcal{P}}\exp(-\lambda_{1}T(\mathbf{v},\mathbf{E}^{R}))},\] \[W_{2j}\overset{ind}{\sim}\text{Bernoulli}(p=\sigma(\beta_{0,j,2}+\beta_{1,j,2}^{T}\mathbf{V}))\quad(1\leq j\leq 12),\] \[\text{Pr}(\mathbf{W}_{2}^{R}=\mathbf{w})=\frac{\exp(-\lambda_{2}T(\mathbf{w},\mathbf{E}^{R}))}{\sum_{\mathbf{v}\in\mathcal{P}}\exp(-\lambda_{2}T(\mathbf{v},\mathbf{E}^{R}))},\] \[\text{Pr}(B_{1}\leq k)=\sigma(\alpha_{0,k,1}-\alpha_{1,1}\mathbf{E}^{T}\mathbf{X}_{2})\quad(1\leq k\leq 6),\text{ and}\] \[\text{Pr}(B_{2}\leq k)=\sigma(\alpha_{0,k,2}-\alpha_{1,2}\mathbf{E}^{T}\mathbf{Y})\quad(1\leq k\leq 6).\] For computational tractability, we assume that \(\mathbf{E}=\text{SoftMax}(\mathbf{V})\) where \(\mathbf{V}\in\mathbb{R}^{2}\) is a standard normally distributed latent vector, similar to assumptions made by Butler et al. (2018). Also similar to Butler et al.
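For intuition about the ranking component, the following self-contained Python sketch (an illustration, not the authors' code) evaluates a Mallows \(\phi\) model over the \(3!=6\) permutations of three outcomes, using the number of discordant pairs as the Kendall tau distance \(T\).

```python
import numpy as np
from itertools import permutations, combinations

def kendall_tau(p, q):
    """Number of pairs of outcomes ordered differently by rankings p and q."""
    return sum((p[i] - p[j]) * (q[i] - q[j]) < 0
               for i, j in combinations(range(len(p)), 2))

def mallows_pmf(e_rank, lam):
    """Pr(W^R = w | E^R = e_rank) under a Mallows phi model with dispersion lam."""
    perms = list(permutations((1, 2, 3)))
    weights = np.array([np.exp(-lam * kendall_tau(w, e_rank)) for w in perms])
    return dict(zip(perms, weights / weights.sum()))

# Example: true outcome ranking E^R = (1, 2, 3) with an arbitrary dispersion lambda
for w, pr in mallows_pmf((1, 2, 3), lam=1.5).items():
    print(w, round(pr, 3))
```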
(2018), we assume the binary preference questions \(\mathbf{W}_{1},\mathbf{W}_{2}\in\{0,1\}^{12}\) are related to the latent factors \(\mathbf{V}\) via independent logistic regression models. \(\mathbf{W}_{1}^{R},\mathbf{W}_{2}^{R}\in\mathcal{P}\) are the stated outcome rankings at each time point, where \(\mathcal{P}\) is the set of permutations of \(\{1,2,3\}\). Our assumed models for \(\text{Pr}(\mathbf{W}_{1}^{R}|\mathbf{E}^{R})\) and \(\text{Pr}(\mathbf{W}_{2}^{R}|\mathbf{E}^{R})\) are Mallows \(\phi\) models (Tang, 2019), where \(\mathbf{E}^{R}\in\mathcal{P}\) denotes the true ranks of the components of \(\mathbf{E}\) and \(T\) denotes Kendall's tau metric. While the BEST study allows for tied ranks, our distribution assumes no tied ranks for simplicity (though it is not difficult to extend the distribution to allow for ties). \(B_{1}\) and \(B_{2}\) are assumed to be positively related to the preference-weighted outcomes \(\mathbf{E}^{T}\mathbf{X}_{2}\) and \(\mathbf{E}^{T}\mathbf{Y}\) via proportional-odds logistic regression models (\(\mathbf{X}_{2}\) is assumed to measure the same variables as \(\mathbf{Y}\) for simplicity). Full details of the generative model for \((A_{1},A_{2},\mathbf{X}_{1},\mathbf{X}_{2},\mathbf{Y})\) as well as the relevant model parameters \(\theta=(\beta,\alpha,\lambda)\), where \(\beta=(\beta_{i,j,k})_{i,j,k=1}^{i=2,j=12,k=2}\), \(\alpha=(\alpha_{1,t},\alpha_{0,k,t})_{k,t=1}^{k=6,t=2}\) and \(\lambda=(\lambda_{1},\lambda_{2})\), are specified in Appendix B. The model makes several simplifying assumptions. For example, we assume \(\mathbf{X}_{1}\) and \(\mathbf{X}_{2}\) consist solely of the three outcomes of interest but measured prior to the first and second randomization, respectively, and each outcome can only take values in \(\{0,1,...,10\}\). We also assume \((\mathbf{X}_{1},A_{1},\mathbf{X}_{2})\perp\!\!\!\perp\mathbf{V}|\mathbf{H}_{2}\) similar to Butler et al. (2018). So long as this latter assumption is satisfied, any other assumptions made about \((A_{1},A_{2},\mathbf{X}_{1},\mathbf{X}_{2},\mathbf{Y})\) will not affect estimation of \(\mathbb{E}[\mathbf{E}|\mathbf{H}_{2}]\). ### Fitting Our Model to Simulated BEST Patients For our experiments, we assume our specified parametric model \(\Pr_{\theta}(\mathbf{H}_{3}|\mathbf{V})\) for \(\Pr(\mathbf{H}_{3}|\mathbf{V})\) is correct but \(\theta\) is unknown. In Appendix A, we discuss how we can relax this assumption and select among multiple proposed parametric models if needed. Under our assumptions: \[\Pr(\mathbf{H}_{3}|\mathbf{V})= \Pr(\mathbf{W}_{1}|\mathbf{V})\Pr(\mathbf{W}_{1}^{R}|\mathbf{E}^{R})\Pr(B_{1}|\mathbf{E}^{T}\mathbf{X}_{2})\] \[\times \Pr(\mathbf{W}_{2}|\mathbf{V})\Pr(\mathbf{W}_{2}^{R}|\mathbf{E}^{R})\Pr(B_{2}|\mathbf{E}^{T}\mathbf{Y})g(\mathbf{H}_{3}),\] where \(g(\mathbf{H}_{3})\) is some function of \(\mathbf{H}_{3}\). We also assume an improper uniform prior for \(\theta\) over \(\beta_{0,k,1},\beta_{0,k,2}\in\mathbb{R}\), \(\alpha_{1,t},\lambda_{t}>0\) and \(\alpha_{0,k,t}-\alpha_{0,k-1,t}>0\). Such a prior makes the posterior proportional to the likelihood and its mode equal to the MLE. Denote the space over which the prior for \(\theta\) has positive support as \(\Theta\), \(B(x|p)\) as the Bernoulli density function with probability parameter \(p\) and \(C_{d}(x|p)\) as the \(d\)-dimensional categorical density function with probability vector \(p\).
Let \(\mathbf{X}_{3}=\mathbf{Y}\), \(\mathbf{V}_{\mathrm{MC}}^{(1)},...,\mathbf{V}_{\mathrm{MC}}^{(N_{\mathrm{sim}})}\stackrel{{iid}}{{\sim}}\mathcal{N}_{2}(0,\mathbf{I})\) and \(\Pr(B_{t}|\mathbf{X}_{t+1}^{T}\mathbf{E},\alpha)\) be a length-7 probability vector calculated from the cumulative distribution function \(F(k|\mathbf{X}_{t+1}^{T}\mathbf{E},\alpha)=\sigma(\alpha_{0,k,t}-\alpha_{1,t}\mathbf{X}_{t+1}^{T}\mathbf{E})\), \(k\in\{1,2,...,6\}\). Under our assumptions: \[\log\Pr(\theta|\mathcal{D})=\text{const}+\sum_{i=1}^{N}\log\int_{\mathbb{R}^{2}}f_{\theta}(H_{3}^{i}|V)\Pr(\mathbf{V})d\mathbf{V}\approx\text{const}+\sum_{i=1}^{N}\log\sum_{j=1}^{N_{\mathrm{sim}}}f_{\theta}(H_{3}^{i}|V_{MC}^{(j)}),\] where: \[f_{\theta}(H_{3}^{i}|V)= \prod_{t=1}^{2}\left[\Pr_{\theta}(\mathbf{W}_{t}^{i}|\mathbf{V})\Pr_{\theta}(\mathbf{W}_{t}^{R,i}|\mathbf{E}^{R})\Pr_{\theta}(B_{t}^{i}|\mathbf{E}^{T}\mathbf{X}_{t+1}^{i})\right],\] \[\Pr_{\theta}(\mathbf{W}_{t}^{i}|\mathbf{V})= \prod_{k=1}^{12}\left\{B(W_{t,k}^{i}|\sigma(\beta_{0,k,t}+\beta_{1,k,t}^{T}\mathbf{V}))\right\},\] \[\Pr_{\theta}(B_{t}^{i}|\mathbf{E}^{T}\mathbf{X}_{t+1}^{i})= C_{7}(B_{t}^{i}|p_{t}(\mathbf{E}^{T}\mathbf{X}_{t+1}^{i}|\alpha)),\text{ and}\] \[\Pr_{\theta}(\mathbf{W}_{t}^{R,i}|\mathbf{E}^{R})= \frac{\exp(-\lambda_{t}T(\mathbf{W}_{t}^{R,i},\mathbf{E}^{R}))}{\sum_{\mathbf{v}\in\mathcal{P}}\exp(-\lambda_{t}T(\mathbf{v},\mathbf{E}^{R}))}.\] We approximate \(\log\Pr(\theta|\mathcal{D})\) using MC integration with \(N_{\text{sim}}=1000\), which reduces approximation of \(\log\Pr(\theta|\mathcal{D})\) to \(N\times N_{\text{sim}}\times\text{dim}(\mathcal{H}_{3})\) independent computations. We run these computations on a GPU using TensorFlow (Abadi et al., 2015). We also use reverse-mode automatic differentiation (Geron, 2019) implemented in TensorFlow to compute \(\nabla_{\theta}\log\Pr(\theta|\mathcal{D})\), and conduct optimization using the L-BFGS algorithm (Liu and Nocedal, 1989). To deal with the non-convexity of our objective, we run L-BFGS from five random starting points and choose the solution with the largest observed log-likelihood. We also perform 500 simple gradient descent steps with a small learning rate prior to applying L-BFGS to improve stability. Finally, to constrain the optimization appropriately, we add the penalty \(-\sum_{i}(1/100)e^{-100c_{i}}\) to the objective, where \(c\) is the vector of linear combinations of \(\theta\) assumed to be positive in \(\Theta\). Such an objective can be considered a continuous analogue of the hard constraint \(-\infty I(\min(c)<0)\) or \(-\infty I(\theta\notin\Theta)\). ### Other Implementation Details From our fitted parameter vector \(\hat{\theta}\), we estimate \(\mathbb{E}\left[\mathbf{E}|\mathbf{H}_{2}\right]\) using MC integration as: \[\mathbb{E}_{\widehat{\theta}}[\mathbf{E}|\mathbf{H}_{2}]=\frac{\sum_{B_{2}\in\mathcal{B}_{2}}\sum_{j=1}^{N_{\text{sim}}}E_{\text{MC}}^{(j)}f_{\widehat{\theta}}(H_{2},B_{2}|V_{\text{MC}}^{(j)})}{\sum_{B_{2}\in\mathcal{B}_{2}}\sum_{j=1}^{N_{\text{sim}}}f_{\widehat{\theta}}(H_{2},B_{2}|V_{\text{MC}}^{(j)})}.\] As \(\mathbf{Y}\in\{0,1,...,10\}^{3}\), we estimate \(\mathbb{E}\left[\mathbf{Y}_{j}|\mathbf{H}_{2},A_{2}\right],j\in\{1,2,3\}\) as \(\widehat{\mathbb{E}}\left[\mathbf{Y}_{j}|\mathbf{H}_{2},A_{2}\right]\) using beta-binomial logistic regression models with the design matrix including two-way interactions between \(\mathbf{X}_{2}\) and \(A_{2}\).
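The following simplified Python sketch illustrates the same recipe on a toy latent-variable model: fixed MC draws of \(\mathbf{V}\), a Monte Carlo approximation of the observed log-likelihood, and multi-start L-BFGS. It is only an illustration: it keeps just a Bernoulli-question block of the likelihood, uses scipy with numerical gradients rather than TensorFlow autodiff, and omits the warm-up gradient descent steps.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, log_expit, logsumexp

rng = np.random.default_rng(1)

# Toy data: n patients answer 12 binary questions driven by a 2-d latent V
n, n_items, n_sim = 100, 12, 200           # the paper uses N_sim = 1000
beta_true = rng.normal(size=(n_items, 3))  # per-item [intercept, slope_1, slope_2]
V = rng.normal(size=(n, 2))
W = rng.binomial(1, expit(beta_true[:, 0] + V @ beta_true[:, 1:].T))

V_mc = rng.normal(size=(n_sim, 2))         # fixed MC draws of V shared across patients

def neg_loglik(theta):
    """Negative MC approximation of the observed log-likelihood (constants dropped)."""
    beta = theta.reshape(n_items, 3)
    logits = beta[:, 0] + V_mc @ beta[:, 1:].T                   # (n_sim, n_items)
    # log f_theta(W_i | V_mc^(j)) for every patient i and draw j, shape (n, n_sim)
    logp = W @ log_expit(logits).T + (1 - W) @ log_expit(-logits).T
    # Constrained parameters (e.g. lambda_t, alpha_{1,t}) would add the soft
    # penalty sum_i (1/100) exp(-100 c_i) here; this toy model has none.
    return -logsumexp(logp, axis=1).sum()

# Multi-start L-BFGS, keeping the best of several random initializations
best = min((minimize(neg_loglik, rng.normal(size=3 * n_items), method="L-BFGS-B")
            for _ in range(3)), key=lambda res: res.fun)
print("best objective:", round(best.fun, 2))
```

The same fixed draws \(V_{\text{MC}}^{(j)}\) are then reused to form the self-normalized estimate of \(\mathbb{E}[\mathbf{E}|\mathbf{H}_{2}]\) displayed above.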
We then estimate \(Q_{2}\) as \(\widehat{Q}_{2}=\mathbb{E}_{\widehat{\theta}}[\mathbf{E}|\mathbf{H}_{2}]^{T} \widehat{\mathbb{E}}[\mathbf{Y}|\mathbf{H}_{2},A_{2}]\). \(Q_{1}\) is then estimated as \(\widehat{Q}_{1}\) using bagged trees with covariates \(\{({\bf H}_{1}^{i},A_{1}^{i})\}_{1\leq i\leq N}\), fitted responses \(\{\max_{a_{2}\in{\cal A}_{{\bf H}_{2}^{i}}}\widehat{Q}_{2}({\bf H}_{2}^{i},a_{2}) \}_{1\leq i\leq N}\) and minimum node size \(n_{\min}=25\). Finally, \(\pi^{\rm opt}\) is estimated as \(\widehat{\pi}=(\widehat{\pi}_{1},\widehat{\pi}_{2})\) where \(\widehat{\pi}_{2}({\bf h}_{2})=\mbox{argmax}_{a_{2}\in{\cal A}_{{\bf h}_{2}}} \widehat{Q}_{2}({\bf h}_{2},a_{2})\) and \(\widehat{\pi}_{1}({\bf h}_{1})=\mbox{argmax}_{a_{1}\in{\cal A}_{{\bf h}_{1}}} \widehat{Q}_{1}({\bf h}_{1},a_{1})\). ## 4 Theoretical Results The proofs for all of our theoretical results can be found in Appendix C. Let \(M_{\theta}({\bf H}_{3}|{\bf E})\) be a parametric model for density \(\Pr({\bf H}_{3}|{\bf E})\) and \(\Pr({\bf E})\) be known. Let \(\hat{\theta}_{n}=\mbox{argmax}_{\theta\in\Theta}\sum_{i=1}^{n}\log M_{\theta} ({\bf H}_{3}^{i})\) where \(M_{\theta}({\bf H}_{3})=\int_{\cal E}M_{\theta}({\bf H}_{3}|{\bf E})\Pr({\bf E })d{\bf E}\) and \(\Theta\) is compact. As we will soon discuss, there are many cases where \(M_{\theta}({\bf H}_{3}|{\bf E})=f_{\theta}({\bf H}_{3}|{\bf E})g({\bf H}_{3})\) and \(g({\bf H}_{3})\) is unknown, but the obscurity of \(g\) is not important because it is independent of both \(\theta\) and \({\bf E}\). In these cases, \(\hat{\theta}_{n}\) can be considered a maximum partial likelihood estimator (Klein and Moeschberger 2003). Let \(I(\theta)\) be the information matrix for \(M_{\theta}({\bf H}_{3})\). Let \(\mathbb{E}_{\theta_{0}}[f({\bf H}_{3})]=\int_{{\cal H}_{3}}f({\bf H}_{3})dP_{ \theta_{0}}({\bf H}_{3})\), \(\mathbb{E}_{\bf E}[f({\bf E})]=\int_{\cal E}f({\bf E})dP_{\bf E}({\bf E})\) and \(\mathbb{E}_{\theta_{0},E}[f({\bf H}_{3},{\bf E})]=\int_{{\cal E}\times{\cal H} _{3}}f({\bf H}_{3},{\bf E})dP_{\bf E}({\bf E})dP_{\theta_{0}}({\bf H}_{3})\) where \(P_{\theta}\) and \(P_{\bf E}\) are the probability measures associated with \(M_{\theta}({\bf H}_{3})\) and \(\Pr({\bf E})\), respectively. Our first lemma outlines conditions under which consistency and asymptotic normality hold for various possible latent variable models, and is based on standard theory of M-estimators (Kosorok 2008). **Theorem 4.1**.: _Assume w.p. one and all \(\theta\in\Theta\): (C1) \(\Pr({\bf H}_{3}|{\bf E})=M_{\theta_{0}}({\bf H}_{3}|{\bf E})\) for some \(\theta_{0}\in\Theta\); (C2) \(M_{\theta}({\bf H}_{3}|{\bf E})\) is continuous in \(\theta\); (C3) \(|M_{\theta}({\bf H}_{3}|{\bf E})|<F({\bf H}_{3}|{\bf E})\) for some \(\mathbb{E}_{\theta_{0}}[F({\bf H}_{3}|{\bf E})]<\infty\); (C4) \(|\log M_{\theta}({\bf H}_{3})|\leq F({\bf H}_{3})\) for some \(\mathbb{E}_{\theta_{0}}[F({\bf H}_{3})]<\infty\); (C5) \(M_{\theta_{0}}({\bf H}_{3})\neq M_{\theta}({\bf H}_{3})\) for all \(\theta\neq\theta_{0}\). Then \(\hat{\theta}_{n}\rightarrow_{p}\theta_{0}\). Moreover, assume w.p. 
one and all \(\theta_{1},\theta_{2}\in N_{\epsilon}(\theta_{0})=\{\theta:||\theta-\theta_{0}| |_{2}<\epsilon\}\): (N1) \(I(\theta_{0})\) is nonsingular; (N2) \(|M_{\theta_{1}}({\bf H}_{3}|{\bf E})-M_{\theta_{2}}({\bf H}_{3}|{\bf E})|\leq F ({\bf H}_{3}|{\bf E})||\theta_{1}-\theta_{2}||\) for some \(\mathbb{E}_{\theta_{0},{\bf E}}[F^{2}({\bf H}_{3}|{\bf E})]<\infty\); (N3) \(M_{\theta_{1}}({\bf H}_{3})>c\) for some \(c>0\); (N4) \(M_{\theta_{1}}({\bf H}_{3}|{\bf E})\) is twice differentiable; (N5) \(||\nabla_{\theta}M_{\theta_{1}}({\bf H}_{3}|{\bf E})||_{\infty}<G({\bf H}_{3}|{ \bf E})\) for some \(\mathbb{E}_{\theta_{0},E}[G^{2}({\bf H}_{3}|{\bf E})]<\infty\). Then \(\sqrt{n}(\hat{\theta}_{n}-\theta_{0})\rightarrow_{d}{\cal N}(0,I(\theta_{0})^ {-1})\)._ Most conditions can be verified directly using \(M_{\theta}({\bf H}_{3}|{\bf E})\), without needing to worry about the integral \(M_{\theta}({\bf H}_{3})=\int M_{\theta}({\bf H}_{3}|{\bf E})\Pr({\bf E})d{\bf E}\). However, conditions (C5) and (N1) cannot easily reduce to corresponding conditions on \(M_{\theta}({\bf H}_{3}|{\bf E})\), as even if \(M_{\theta}({\bf H}_{3}|{\bf E})\) is identifiable and has a nonsingular information matrix, this does not imply identifiability and information matrix nonsingularity of \(M_{\theta}(\mathbf{H}_{3})\). Conditions (C4) and (N3) also relate to the integral \(M_{\theta}(\mathbf{H}_{3})\), though these conditions are much easier to verify. For example, condition (N3) will hold for many model classes when \(\Theta\) and \(\mathcal{H}_{3}\) is compact. Moreover, under such conditions there will often exist constants \(0<C_{1}<C_{2}<\infty\) such that for all \(\theta\in\Theta\) and \(\mathbf{H}_{3}\in\mathcal{H}_{3}\), \(C_{1}<M_{\theta}(\mathbf{H}_{3})<C_{2}\), in which case condition (C4) will also hold. Let \(\hat{\pi}_{n}\) be the DTR estimated from LUQ-Learning with \(M_{\theta}(\mathbf{H}_{3}|\mathbf{E})\) the specified model for \(\Pr(\mathbf{H}_{3}|\mathbf{E})\), \(\widehat{\mathbb{E}}_{n}[\mathbf{E}|\mathbf{H}_{2}]\) the estimate of \(\mathbb{E}[\mathbf{E}|\mathbf{H}_{2}]\) from our MLE \(M_{\hat{\theta}_{n}}(\mathbf{H}_{3}|\mathbf{E})\), \(\widehat{\mathbb{E}}_{n}[\mathbf{Y}|\mathbf{H}_{2},A_{2}]\) a bounded regression estimator of \(\mathbb{E}[\mathbf{Y}|\mathbf{H}_{2},A_{2}]\), \(\widehat{Q}_{n,2}(\mathbf{H}_{2},A_{2})=\widehat{\mathbb{E}}_{n}[\mathbf{E}| \mathbf{H}_{2}]^{T}\widehat{\mathbb{E}}_{n}[\mathbf{Y}|\mathbf{H}_{2},A_{2}]\) and \(\widehat{\mathbb{E}}_{n}[\max_{a_{2}}\widehat{Q}_{n,2}(\mathbf{H}_{2},a_{2})| \mathbf{H}_{1},A_{1}]\) a bounded estimator for \(\mathbb{E}[\max_{a_{2}}\widehat{Q}_{n,2}(\mathbf{H}_{2},a_{2})|\mathbf{H}_{1},A_{1}]\). Let \(||f(\mathbf{H}_{3})||_{P_{\theta_{0}}}=\sqrt{\mathbb{E}_{\theta_{0}}[f^{2}( \mathbf{H}_{3})]}\). Our next theorem outlines conditions under which consistency holds for the estimated DTR. **Theorem 4.2**.: _Assume the following w.p. 
one, where \(\epsilon>0\) is some constant:_ \[||\widehat{\mathbb{E}}_{n}[\mathbf{E}|\mathbf{H}_{2}]-\mathbb{E}[\mathbf{E}|\mathbf{H}_{2}]||_{P_{\theta_{0}}}\rightarrow_{p}0, (V1)\] \[||\widehat{\mathbb{E}}_{n}[\mathbf{Y}|\mathbf{H}_{2},A_{2}]-\mathbb{E}[\mathbf{Y}|\mathbf{H}_{2},A_{2}]||_{P_{\theta_{0}}}\rightarrow_{p}0, (V2)\] \[||\widehat{\mathbb{E}}_{n}[\max_{a_{2}}\widehat{Q}_{n,2}(\mathbf{H}_{2},a_{2})|\mathbf{H}_{1},A_{1}]-\mathbb{E}[\max_{a_{2}}\widehat{Q}_{n,2}(\mathbf{H}_{2},a_{2})|\mathbf{H}_{1},A_{1}]||_{P_{\theta_{0}}}\rightarrow_{p}0,\text{ and } (V3)\] \[\Pr(A_{2}|\mathbf{H}_{2}),\Pr(A_{1}|\mathbf{H}_{1})>\epsilon. (V4)\] _Then \(V(\widehat{\pi}_{n})-V(\pi^{*})\rightarrow_{p}0\)._ The term \(||\widehat{\mathbb{E}}_{n}[\mathbf{E}|\mathbf{H}_{2}]-\mathbb{E}[\mathbf{E}|\mathbf{H}_{2}]||_{P_{\theta_{0}}}^{2}\) present in (V1) is equal to \(\mathbb{E}_{\mathbf{H}_{2}\sim P_{\theta_{0}}}\big[||\widehat{\mathbb{E}}_{n}[\mathbf{E}|\mathbf{H}_{2}]-\mathbb{E}[\mathbf{E}|\mathbf{H}_{2}]||^{2}\big]\). This term is random, with the expectation over the arguments of \(\widehat{\mathbb{E}}_{n}[\mathbf{E}|\mathbf{H}_{2}]\) and not the estimator itself, as are the terms present in (V2) and (V3). (V1) and (V2) are similar to standard MLE consistency, except rather than requiring convergence of the parameters, we require convergence of the excess risk. Sufficient conditions for such convergence are discussed in Vapnik (1998) and Gine and Nickl (2016). (V3) is a bit more complicated: a sufficient condition is that the regret of \(\widehat{\mathbb{E}}_{n}[Q(\mathbf{H}_{2})|\mathbf{H}_{1},A_{1}]\) converges to zero uniformly over \(Q\in\mathcal{Q}\), where \(\max_{A_{2}}\widehat{Q}_{n,2}(\cdot,A_{2})\in\mathcal{Q}\) almost surely. Such an assumption would be reasonable when \(\mathcal{Q}\) has finite complexity (Vapnik 1998). For the BEST application, our specified log-parametric model is denoted as \(\log p_{\theta}(\mathbf{H}_{3}|\mathbf{V})\) and is equal to: \[\log\left[\prod_{t=1}^{2}\Pr_{\theta}(\mathbf{W}_{t}|\mathbf{V})\Pr_{\theta}(\mathbf{W}_{t}^{R}|\mathbf{E}^{R})\Pr_{\theta}(B_{t}|\mathbf{E}^{T}\mathbf{X}_{t+1})\right]\] \[\quad+\log\big[\Pr(\mathbf{X}_{1})\Pr(A_{1}|\mathbf{H}_{1})\Pr(\mathbf{X}_{2}|\mathbf{H}_{1},A_{1})\Pr(A_{2}|\mathbf{H}_{2})\Pr(\mathbf{Y}|\mathbf{H}_{2},A_{2})\big]\] \[=: \log f_{\theta}(\mathbf{H}_{3}|\mathbf{V})+\log g(\mathbf{H}_{3}).\] While \(g(\mathbf{H}_{3})\) (and thus \(\log p_{\theta}(\mathbf{H}_{3}|\mathbf{V})\)) is unknown, it is independent of both \(\theta\) and \(\mathbf{V}\), allowing us to calculate \(\operatorname*{argmax}_{\theta}\log p_{\theta}(\mathbf{H}_{3}|\mathbf{V})\) and estimate \(\mathbb{E}[\mathbf{E}|\mathbf{H}_{2}]\) regardless. Let \(\hat{\theta}_{n}=\operatorname*{argmax}_{\theta\in\Theta}\sum_{i=1}^{n}\log p_{\theta}(\mathbf{H}_{3}^{i})=\operatorname*{argmax}_{\theta\in\Theta}\sum_{i=1}^{n}\log f_{\theta}(\mathbf{H}_{3}^{i})\) where \(p_{\theta}(\mathbf{H}_{3})=\int_{\mathbb{R}^{2}}p_{\theta}(\mathbf{H}_{3}|\mathbf{V})\Pr(\mathbf{V})d\mathbf{V}\), \(f_{\theta}(\mathbf{H}_{3})=\int_{\mathbb{R}^{2}}f_{\theta}(\mathbf{H}_{3}|\mathbf{V})\Pr(\mathbf{V})d\mathbf{V}\) and \(\Theta=\{\theta:||\theta||_{\infty}\leq B,\lambda_{t},\alpha_{1,t},\alpha_{0,j+1,t}-\alpha_{0,j,t}\geq\epsilon\}\). We can treat the specification of constants \(B\) and \(\epsilon\) as part of our specified model. Note that \(\Theta\) is compact.
Our next theorem verifies all the conditions of the previous two theorems for our latent variable model proposed for BEST. **Theorem 4.3**.: _Assume w.p. one that \(p_{\theta_{0}}(\mathbf{H}_{3}|\mathbf{V})=\Pr(\mathbf{H}_{3}|\mathbf{V})\) for some \(\theta_{0}\in\Theta\), \(p_{\theta_{0}}(\mathbf{H}_{3})\neq p_{\theta}(\mathbf{H}_{3})\) for all \(\theta\neq\theta_{0}\) and \(g(\mathbf{H}_{3})>c\) for some \(c>0\). Then \(\hat{\theta}_{n}\rightarrow_{p}\theta_{0}\) and \(||\widehat{\mathbb{E}}_{n}[\mathbf{E}|\mathbf{H}_{2}]-\mathbb{E}[\mathbf{E}| \mathbf{H}_{2}]||_{P_{\theta_{0}}}\rightarrow_{p}0\). If \(I(\theta_{0})\) is also nonsingular, then \(\sqrt{n}(\hat{\theta}-\theta_{0})\rightarrow_{d}\mathcal{N}(0,I(\theta_{0})^{ -1})\)_ Due to the difficulty of verifying identifiability of models involving integrals, it is standard to assume identifiability when deriving theoretical results for latent variable models (Breslow and Clayton 1993; Bianconcini 2014; Butler et al. 2018). Assuming nonsingularity of the information matrix or a similar matrix is also standard (McCullagh and Nelder 1989), as satisfying this assumption usually requires making corresponding assumptions on the unknown data-generating distribution. ## 5 Empirical Results ### Latent Model Accuracy and Optimization Performance We first apply LUQ-Learning to simulated patients from the BEST study. Recall that the most difficult component of our framework is estimating the latent variable model \(\Pr(\mathbf{H}_{3}|\mathbf{V})\). Denote our parametric model for \(\Pr(\mathbf{H}_{3}|\mathbf{V})\) as \(\Pr_{\theta}(\mathbf{H}_{3}|\mathbf{V})\) with trainable parameter vector \(\theta\). Denote \(\theta_{0}\) as the true value of \(\theta\) and \(\hat{\theta}\) as the estimated parameter vector from maximizing the observed log-posterior \(\Pr(\theta|\mathcal{D})\) (which in this case is proportional to the log-likelihood \(\Pr(\mathcal{D}|\theta)\)). We calculate \(\hat{\theta}\) and plot the mean absolute error \(\dim(\theta)^{-1}||\hat{\theta}-\theta_{0}||_{1}\) for varying sample sizes in Figure 2. For each sample size, results were averaged over 10 random seeds to adjust for parameter and sample variability (see Appendix B for details). We can see that model error declines with sample size at an approximately linear rate, verifying the results given in Theorem 4.3. We also note that the identifiability assumption made in Theorem 4.3 is a necessary condition for consistency. As our algorithm converges to values close to the true parameter vector for large sample sizes across multiple seeds, this suggests that our model is identifiable. Our optimization algorithm also performed well. Across sample sizes and seeds, we consistently observed \(\log\Pr(\widehat{\theta}|\mathcal{D})\geq\log\Pr(\theta_{0}|\mathcal{D})\) and \(||\nabla_{\theta}\log\Pr(\widehat{\theta}|\mathcal{D})||_{\infty}<10^{-7}\), indicating high convergence quality. Computation times for model fitting with varying sample sizes is reported in Figure D.1. With \(N=600\) simulated patients (the anticipated sample size for the BEST study), model fitting took around 100 seconds on average. Even with \(10,000\) simulated patients, model fitting took under 900 seconds on average. Computational performance can further be improved if needed by reducing the number of starting points and gradient descent iterations used as a warmup for L-BFGS. These results demonstrate the efficiency and scalability of our optimization algorithm. 
Figure 2: Mean absolute error \(||\hat{\theta}-\theta_{0}||_{1}/\dim(\theta)\) for our fitted model \(\hat{\theta}\) across sample sizes. For each sample size, we plot the average performance across 10 seeds with standard deviation bars.

GPU computing and TensorFlow are usually used for optimizing deep learning models and are more common in computer science. They are less commonly used in the statistical literature to implement MC integration and quasi-Newton algorithms. Instead, other integration and optimization algorithms such as (adaptive) Gauss-Hermite quadrature, Markov Chain Monte Carlo (MCMC) or expectation-maximization (EM) combined with CPU computing are more popular (Givens and Hoeting, 2012; Institute, 2018; Butler et al., 2018), all of which would have taken significantly more time for our setting. We hope that our results will motivate better computational approaches in the statistical literature moving forward. ### Performance of the Estimated DTR Performance for LUQ-Learning's DTR \(\hat{\pi}_{\text{LUQL}}\) as well as various baselines is given in the left column of Table 1. For each DTR, we report the average (std) of policy values across 10 seeds. Here \(\pi_{\text{obs}}\) is the policy that generated the observed data, \(\hat{\pi}_{\text{known}}\) ablates our framework by replacing \(\mathbb{E}_{\widehat{\theta}}[\mathbf{E}|\mathbf{H}_{2}]\) with the true weights \(\mathbf{E}\), \(\hat{\pi}_{B_{2}}\) comes from Q-learning with \(B_{2}\) treated as the outcome, and \(\hat{\pi}_{\text{naive}}\) ablates our framework by replacing \(\mathbb{E}_{\widehat{\theta}}[\mathbf{E}|\mathbf{H}_{2}]\) with \((1/3,1/3,1/3)\). We can see that LUQ-Learning yields significant improvement over the observational policy. We can also see that it performs nearly as well as \(\hat{\pi}_{\text{known}}\), indicating that our method is almost as efficient as if the utilities \(U=\mathbf{E}^{T}\mathbf{Y}\) were observed. Finally, we can see that \(\hat{\pi}_{\text{LUQL}}\) performs much better than \(\hat{\pi}_{B_{2}}\): \(B_{2}\) is a discretized and noisy approximation of the true utilities \(U\), and running Q-learning with \(B_{2}\) as the outcome leads to excess variance. Moreover, while in this case optimizing \(\mathbb{E}_{\pi}[B_{2}]\) and \(\mathbb{E}_{\pi}[U]\) would lead to the same DTRs asymptotically, this is not true in general (see Section 3.1).

\begin{table} \begin{tabular}{l l l} \hline \hline DTR & Standard Model & Ablation Model \\ \hline \(\pi_{\text{obs}}\) & 4.79 (0.41) & 4.97 (0.06) \\ \(\hat{\pi}_{\text{LUQL}}\) & 6.87 (0.61) & 5.57 (0.12) \\ \(\hat{\pi}_{\text{known}}\) & 6.90 (0.60) & 5.60 (0.12) \\ \(\hat{\pi}_{B_{2}}\) & 6.04 (0.63) & 5.23 (0.13) \\ \(\hat{\pi}_{\text{naive}}\) & 6.78 (0.63) & 5.00 (0.04) \\ \hline \hline \end{tabular} \end{table} Table 1: Value of Different DTRs on Two BEST Generative Models.

While the previous generative model demonstrates LUQ-Learning's effectiveness in estimating the true utilities, it fails to demonstrate the importance of incorporating such utilities in the first place. For example, we can see that \(\hat{\pi}_{\text{naive}}\) performs nearly as well as \(\hat{\pi}_{\text{LUQL}}\), where \(\widehat{\pi}_{\text{naive}}\) naively assumes all outcomes are equally important for all subjects. We therefore considered an ablated generative model where \(A_{2}\) had opposing effects on the components of \(\mathbf{Y}\), and such effects no longer depended on \(\mathbf{X}_{2}\) (see Appendix B for details).
As \(\mathbf{Y}\) now consists of competing outcomes, \(\widehat{\pi}_{\text{naive}}\) should now be expected to perform much worse. Moreover, though there is no longer heterogeneity in treatment effects, the covariates are still useful in inferring patient preferences. This ablation allows us to isolate the utility of our latent variable model in a precision medicine framework. Results can be found on the right column of Table 1. We can see that now \(\hat{\pi}_{\text{LUQL}}\) performs much better than \(\hat{\pi}_{\text{naive}}\), showing the benefits of both LUQ-Learning and of incorporating patient preferences more generally. ### Application to Simulated Schizophrenia Patients To demonstrate the broad applicability of LUQ-Learning, we now consider a modified version of the simulated datasets of Butler et al. (2018). Their simulation was loosely inspired by the first phase of the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) trial (Stroup et al., 2003). There is one decision point and five treatments, which the authors dichotomize into traditional and atypical antipsychotics. There are two continuous outcomes representing treatment efficacy and side effect burden. The questionnaire given to elicit patient preferences consists of 10 questions from the Drug Attitude Inventory (Hogan et al., 1983). There are assumed to be five continuous covariates that have treatment-specific effects on the outcomes. The number of simulated patients (\(N=200\)) is smaller than our previous simulation study. We appended their generative model with a Poisson-distributed surrogate for patient satisfaction. The entire generative model is summarized below: \[V\sim N(0,1),\] \[\mathbf{E}=(\Phi(V),1-\Phi(V)),\] \[X_{j}\overset{iid}{\sim}N(0,1)\quad(1\leq j\leq 5),\] \[W_{j}\overset{ind}{\sim}\text{Bernoulli}(p=\beta_{0,j}+\beta_{1,j}V)\quad(1\leq j\leq 10),\] \[A\sim\text{Bernoulli}(p=0.5),\] \[\epsilon_{j}\stackrel{{ iid}}{{\sim}}N(0,1)\quad(1 \leq j\leq 2),\] \[Y_{j}=\mathbf{X}_{*}^{T}\gamma_{j,0}+A\mathbf{X}_{*}^{T}\gamma_{j,1}+\epsilon_{j}\quad(1\leq j\leq 2),\text{ and}\] \[B\sim\text{Pois}\left(\lambda=\exp(\alpha_{0}+\alpha_{1}U)\right).\] Here \(\mathbf{X}_{*}=(1,\mathbf{X})\) and \(\Phi(\cdot)\) is the standard normal CDF. We also let \(\mathbf{H}_{1}=(\mathbf{W},\mathbf{X})\) and \(\mathbf{H}_{2}=(\mathbf{W},\mathbf{X},A,\mathbf{Y},B)\). \(\gamma=(\gamma_{i,j})_{i,j=1}^{2}\) was fixed as in Butler et al. (2018) to make the outcomes competitive, though we did allow \(\beta=(\beta_{i,j})_{i,j=1}^{i=2,j=10}\) to vary between seeds (for each seed, we set \(\beta_{0,j}=0\) and drew \(\beta_{1,j}\sim N(0,1)\)). Finally, for a given seed we set \(\alpha_{1}=(1/6)(\max_{n}(U)-\min_{n}(U))\) and \(\alpha_{0}=-\alpha_{1}\min_{n}(U)\) where \(\max_{n}(U),\min_{n}(U)\) are the maximum and minimum latent utility for patients in the observed dataset, so that the maximum observed value of \(B\) would be around \(\exp(3)\). As before, we assume a correctly-specified model for \(\Pr(\mathbf{H}_{2}|\mathbf{V})\) and estimate model parameters \(\theta=(\beta,\alpha)\) via partial maximum likelihood. Also like before, \(\mathbb{E}[\mathbf{Y}|\mathbf{H}_{1},A]\) is estimated via a generalized linear model with a correctly-specified design matrix, response distribution and link function. As the simulated datasets have a single decision point and two outcomes, the methodology of Butler et al. (2018) could also be applied here. 
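To make the data-generating process concrete, the following Python sketch simulates from a simplified version of the CATIE-inspired generative model above. The \(\gamma\) coefficients and the intensity scaling of \(B\) are placeholders (the paper fixes \(\gamma\) as in Butler et al. (2018) and scales \(\alpha\) from the observed utilities), and a logistic link is assumed for the binary questions.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import expit

rng = np.random.default_rng(3)
n, n_items = 200, 10

V = rng.normal(size=n)                                   # scalar latent factor
E = np.column_stack([norm.cdf(V), 1 - norm.cdf(V)])      # preference weights on 2 outcomes
X = rng.normal(size=(n, 5))                              # five baseline covariates
X_star = np.column_stack([np.ones(n), X])                # X_* = (1, X)

beta1 = rng.normal(size=n_items)                         # beta_{0,j} = 0 as in the text
W = rng.binomial(1, expit(np.outer(V, beta1)))           # logistic link assumed here

A = rng.binomial(1, 0.5, size=n)                         # randomized binary treatment
gamma0 = rng.normal(size=(2, 6))                         # placeholder effects; the paper
gamma1 = rng.normal(size=(2, 6))                         # fixes these as in Butler et al.
Y = X_star @ gamma0.T + A[:, None] * (X_star @ gamma1.T) + rng.normal(size=(n, 2))

U = np.sum(E * Y, axis=1)                                # latent utility U = E^T Y
alpha1 = 3.0 / (U.max() - U.min())                       # illustrative scaling keeping the
alpha0 = -alpha1 * U.min()                               # Poisson intensity moderate
B = rng.poisson(np.exp(alpha0 + alpha1 * U))             # satisfaction surrogate
print(W.shape, Y.shape, B[:10])
```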
With a single decision point and two outcomes, the Butler method reduces to LUQ-Learning when measures of patient satisfaction are not incorporated into the partial likelihood and the optimizer is an EM algorithm that constrains the \(\beta_{1,j}\)'s to be positive. We shall use the same optimization algorithm for both methods, so that the only difference is whether satisfaction \(B\) is incorporated into the model. Results are given in Table 2. As in the BEST simulations, LUQ-Learning accurately estimates the expected preference weights and estimates a high-performing DTR. On the other hand, the Butler method has very high estimation error on average. For context, naively estimating \(\mathbb{E}[\mathbf{E}|\mathbf{H}_{1}]\) as 0.5 for all patients yields an estimation error of only 0.2 on average. As a result, the estimated DTR in some cases performs worse than even the observational policy.

\begin{table} \begin{tabular}{l l l} \hline Statistic & Our Method & Butler Method \\ \hline \(\mathbb{E}\left[|\mathbb{E}[\mathbf{E}|\mathbf{H}_{1}]-\widehat{\mathbb{E}}[\mathbf{E}|\mathbf{H}_{1}]|\right]\) & 0.03 (0.02 – 0.03) & 0.25 (0.03 – 0.41) \\ \(V(\hat{\pi})-V(\pi_{\text{obs}})\) & 0.22 (0.19 – 0.25) & 0.07 (-0.02 – 0.17) \\ \hline \end{tabular} \end{table} Table 2: Mean (IQR) of Statistics across 10 Seeds for CATIE simulations.

While we previously explored bias and efficiency issues with estimating a DTR _solely_ from reported patient satisfaction, these results show that failing to incorporate reported patient satisfaction _at all_ also yields subpar results. A narrower posterior of a latent variable oftentimes corresponds to a narrower posterior of the parameters of the latent variable model. For example, Butler et al. (2018) found that increasing \(\dim(\mathbf{W})\) actually led to lower estimation variance, even though \(\dim(\theta)\) increased. This is in contrast to complete data log-likelihoods, where more parameters usually result in increased estimation variance. While the quantity we wish to estimate, \(\mathbb{E}[\mathbf{E}|\mathbf{H}_{1}]\), does not depend on \(B\), not using \(B\) leads to estimating a model for \(\Pr(\mathbf{W}|\mathbf{E})\) instead of \(\Pr(\mathbf{W},B|\mathbf{E})\). As \(\Pr(\mathbf{E}|\mathbf{W})\) has a wider distribution than \(\Pr(\mathbf{E}|\mathbf{W},B)\), the Butler method leads to greater estimation variance, even among those parameters solely related to \(\Pr(\mathbf{E}|\mathbf{W})\). It is worth noting that Theorem 4.3 can easily be extended to the latent variable model proposed here under mild assumptions, such as that \(|B|,|\mathbf{Y}|<C\) w.p. one for some \(C\in\mathbb{R}\). ## 6 Conclusions Despite the prevalence of healthcare decision-making problems with multiple outcomes of interest, the few applicable solutions from previous work suffer from various limitations that hinder applicability to many real-world settings. To this end, we have developed a new framework, LUQ-Learning, that treats the utility function to optimize as a latent variable, and incorporates the latent variable into Q-learning using a latent model approach. Unlike previous approaches, LUQ-Learning allows for an arbitrary number of time points and outcomes, uses questionnaire responses for both outcome preferences and satisfaction to maximize estimation accuracy of the latent utilities, and does not require access to expert-level data or extensive domain knowledge.
Theoretical performance of our approach was investigated, where we demonstrated that our application to the BEST study achieves consistency and asymptotic normality under much more mild assumptions than those made by many previous methods. Our theoretical results extend easily to other proposed latent models as well, such as that proposed for the CATIE study, under mild conditions. We also demonstrated the flexibility and robust performance of our method on a diverse set of simulated datasets. In contrast, DTRs estimated to optimize over more naive utilities, such as self-reported satisfaction or a simple mean of the outcomes, perform significantly worse. Finally, the computational performance for our proposed optimization procedure demonstrates the potential of deep learning libraries like TensorFlow for fast numerical integration and optimization of statistical models, and adds to a growing body of recent literature arguing the utility of such libraries for solving statistical problems (Grogan, 2020). Despite our work's progress in multi-objective, preference-based precision medicine, there remain many promising areas of future work to explore. For example, while our theoretical results make fewer assumptions than those of many previous approaches, they still assume identifiability of the latent model. Establishing identifiability of latent variable models is difficult due to the presence of the integral in the objective function, and there has not been much previous work investigating how to do this. Developing new theoretical results and proof techniques to establish identifiability of likelihoods with integrals would be helpful not only for our method, but also for other kinds of latent variable models as well as hierarchical Bayes models (Givens and Hoeting, 2012). Moreover, while we established a theoretical framework for how to adopt the proposed model of latent utilities to the data in Appendix A, the framework has yet to be empirically investigated. A promising avenue for future work would be to combine nonparametric approaches with LUQ-Learning and demonstrate its adoptability to complex data-generating distributions. Finally, extending our approach to inverse probability weighting, nonlinear utility functions and censored outcomes would all constitute meaningful future work as well. ## 7 Acknowledgements The authors thank John Sperger for relevant references and helpful discussion. ## 8 Funding This research was supported by the National Institutes of Health (NIH) through the NIH HEAL Initiative under award number 1U24 AR076730-01 and is part of the Back Pain Consortium (BACPAC). The BACPAC Research Program is administered by the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or its NIH HEAL Initiative. ## 9 Disclosure Statement The authors report there are no competing interests to declare.
2306.05677
A fast reduced order method for linear parabolic inverse source problems
In this paper, we propose a novel, computationally efficient reduced order method to solve linear parabolic inverse source problems. Our approach provides accurate numerical solutions without relying on specific training data. The forward solution is constructed using a Krylov sequence, while the source term is recovered via the conjugate gradient (CG) method. Under a weak regularity assumption on the solution of the parabolic partial differential equations (PDEs), we establish convergence of the forward solution and provide a rigorous error estimate for our method. Numerical results demonstrate that our approach offers substantial computational savings compared to the traditional finite element method (FEM) and retains equivalent accuracy.
Yuxuan Huang, Yangwen Zhang
2023-06-09T05:25:54Z
http://arxiv.org/abs/2306.05677v1
# A fast reduced order method for linear parabolic inverse source problems ###### Abstract In this paper, we propose a novel, computationally efficient reduced order method to solve linear parabolic inverse source problems. Our approach provides accurate numerical solutions without relying on specific training data. The forward solution is constructed using a Krylov sequence, while the source term is recovered via the conjugate gradient (CG) method. Under a weak regularity assumption on the solution of the parabolic partial differential equations (PDEs), we establish convergence of the forward solution and provide a rigorous error estimate for our method. Numerical results demonstrate that our approach offers substantial computational savings compared to the traditional finite element method (FEM) and retains equivalent accuracy. _Keywords:_ inverse source problems; heat equation; reduced order method; finite element method; stochastic error estimate ## 1 Introduction The study of parabolic inverse source problems has received considerable attention in the field of numerical analysis over the past decades. These problems aim to determine an unknown source term from final time observations of the solution of a time-dependent partial differential equation (PDE). Canonically, the forward problem is to determine the solution of the parabolic PDE given the source term and boundary conditions, while the inverse problem is to determine the source term from final time observations of the solution. To solve an inverse problem numerically, we need to compute many forward problems in the process. The parabolic inverse source problem arises in many applications, including contaminant transport in porous media [4], heat conduction in materials [1], and tumor growth modeling [9]. However, obtaining numerical solutions is challenging due to the ill-posedness of the inverse problem [1] and the high computational cost of solving the forward problem [2]. Traditionally, finite element methods (FEM) have been widely used to solve the parabolic inverse source problem. FEM discretizes the PDE and its solution into a system of algebraic equations, which can be solved using iterative methods such as the conjugate gradient method [10]. Although FEM can achieve accurate results, it requires many unknowns and leads to high computational costs and memory storage requirements. Recently, reduced order methods (ROMs) have been proposed as an alternative approach to solving the parabolic inverse source problem. The main idea of ROMs is to find a low-dimensional basis for the solution space and use this basis to represent the solution of the PDE. This leads to significant computational savings compared to FEM [12]. One popular ROM for the parabolic inverse source problem is the proper orthogonal decomposition (POD) method. POD approximates the solution space by constructing an orthonormal basis from a set of snapshots of the solution. This basis can be used to represent the solution of the PDE and reduce the computational cost. But POD's reduced order basis heavily depends on the specific PDE data. In an inverse problem where the actual source term is very different from the example used to construct the POD basis, the results can be either a reasonable approximation or an absurd deviation [7]. To compensate for the narrow specificity of POD, hybrid ROMs have been developed to take advantage of both POD and full-order methods (FOMs) such as FEM [5].
Nevertheless, their computational savings are not remarkable in parabolic inverse source problems. To address the aforementioned deficits in different methods, we offer a novel approach that is computationally economical and insensitive to changes in data. In this work, we study our proposed method under a practical physical scenario. The rest of the paper is organized as follows. In Section 2, we introduce the setting of the linear parabolic inverse source problem. In Section 3, we provide the specific implementation of the finite element method and our reduced order method. In Section 4, we prove the convergence of the forward solution and present a stochastic error estimate. In Section 5, we demonstrate numerical results to compare our method with standard FEM. Lastly, we discuss the limitations and future directions of our work in Section 6. ## 2 Formulation of the inverse problem Because linear parabolic inverse problems are well-studied and suitable for comparison purposes, we choose to focus on this type of inverse source problem in this paper. We consider the PDE system governed by the following state equation: \[\begin{cases}u_{t}-\Delta u=f,&\text{in }\Omega\times(0,T]\\ u(\cdot,t)=0,&\text{on }\partial\Omega\times(0,T]\\ u(\cdot,0)=0,&\text{in }\Omega\end{cases} \tag{2.1}\] where \(\emptyset\neq\Omega\subset\mathbb{R}^{d}\ (d=1,2,3)\) is a bounded domain and \(T>0\) is the terminal time. Let \(u\) be the solution and \(f\) be a time-independent source term of the state equation (2.1). We define the forward operator \(\mathcal{S}:L^{2}(\Omega)\to L^{2}(\Omega)\) by \(\mathcal{S}f=u(\cdot,T)\). In the linear parabolic inverse source problem, \(f\in L^{2}(\Omega)\) is an unknown source term that we want to reconstruct based on the measurement of the final time solution \(u(\cdot,T)\). In this work, we assume that the measurements of \(u(\cdot,T)\) are collected pointwise over a set of uniformly distributed sensors located at \(\{x_{i}\}_{i=1}^{n}\) over \(\Omega\) (e.g. [ref]). To account for the uncertainty of natural noise and measurement errors, we apply independent Gaussian random noise to each sensor. We denote the noise as \(\{e_{i}\}_{i=1}^{n}\). They are independent and identically distributed (i.i.d.) random variables with Gaussian distribution \(N(0,\epsilon^{2})\) for some small \(\epsilon>0\). So the actual measurement takes the form \(m_{i}=\mathcal{S}f^{*}(x_{i})+e_{i},\ i=1,2,\cdots,n\), where \(f^{*}\in L^{2}(\Omega)\) is the true source term of the problem. For any \(u,v\in C(\bar{\Omega})\) and \(y\in\mathbb{R}^{n}\) we define the inner product \[(u,v)_{n}=\frac{1}{n}\sum_{i=1}^{n}u(x_{i})v(x_{i}),\ \ (y,v)_{n}=\frac{1}{n}\sum_{i=1}^{n}y_{i}v(x_{i})\] and the empirical norm \[\left\|u\right\|_{n}=\Big{(}\frac{1}{n}\sum_{i=1}^{n}u^{2}(x_{i})\Big{)}^{1/2}.\] Now, we define the linear parabolic inverse source problem as the task of reconstructing an unknown source term \(f^{*}\) from the noisy final time measurement data \(m=(m_{1},m_{2},\cdots,m_{n})^{T}\in\mathbb{R}^{n}\). We implement a realistic approach for this problem by optimizing the mean-square error with a regularization term. The approximate solution is hence computed from the following optimization problem: \[\underset{f\in L^{2}(\Omega)}{\text{min}}\frac{1}{2}\left\|\mathcal{S}f-m\right\|_{n}^{2}+\frac{\lambda_{n}}{2}\left\|f\right\|_{L^{2}(\Omega)}^{2} \tag{2.2}\] where \(\lambda_{n}\) is the regularization parameter.
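As an illustration of this measurement model and of the objective in (2.2), here is a short Python sketch; the forward map, sensor grid, noise level and regularization parameter are placeholder choices for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(4)

# Uniformly distributed sensors on the unit square and a placeholder for S f*(x_i)
nx = 20
xs, ys = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, nx))
sensors = np.column_stack([xs.ravel(), ys.ravel()])          # n = nx^2 sensor locations
n = sensors.shape[0]

def Sf_true(p):                                              # stand-in for u(., T)
    return np.sin(np.pi * p[:, 0]) * np.sin(np.pi * p[:, 1])

eps = 1e-2
m = Sf_true(sensors) + rng.normal(0.0, eps, size=n)          # m_i = S f*(x_i) + e_i

def objective(Sf_values, f_l2_norm, lam):
    """(1/2)||Sf - m||_n^2 + (lam/2)||f||_{L^2}^2 with the empirical norm."""
    return 0.5 * np.mean((Sf_values - m) ** 2) + 0.5 * lam * f_l2_norm ** 2

print(objective(Sf_true(sensors), f_l2_norm=0.5, lam=1e-4))
```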
Additionally, to solve the inverse problem we need the adjoint equation \[\begin{cases}y_{t}-\Delta y=g,&\text{in }\Omega\times(0,T]\\ y(\cdot,t)=0,&\text{on }\partial\Omega\times(0,T]\\ y(\cdot,0)=0,&\text{in }\Omega\end{cases} \tag{2.3}\] where \(g\in L^{2}(\Omega)\) is time-independent. This adjoint equation is in accordance with the work of Johansson and Lesnic [8]. The elliptic operator in their assumption \(\mathcal{L}u=-\sum_{i,j=1}^{d}\partial_{x_{i}}(a_{i,j}(x)\partial_{x_{j}}u)+\sum_{i=1}^{d}b_{i}(x)\partial_{x_{i}}u+c(x)u\) is set to be the negative Laplace operator here. Then we define the adjoint forward operator \(\mathcal{S}^{*}:L^{2}(\Omega)\to L^{2}(\Omega)\) by \(\mathcal{S}^{*}g=y(\cdot,T)\). Note that the adjoint equation is the same as the state equation in our context. In previous work, Chen et al. [3] studied the optimal stochastic convergence of regularized finite element solutions to a more general type of parabolic inverse source problem. The following result gives the stochastic convergence under random noise with bounded variance. **Assumption 1**.: _(Assumption 2.1 in [3]) We assume the following:_ _(1) There exists a constant_ \(\beta>1\) _such that for all_ \(u\in L^{2}(\Omega)\)_,_ \[\|u\|_{L^{2}(\Omega)}^{2}\leq C(\|u\|_{n}^{2}+n^{-\beta}\,\|u\|_{L^{2}(\Omega)}^{2}),\ \ \|u\|_{n}^{2}\leq C(1+n^{-\beta})\,\|u\|_{L^{2}(\Omega)}^{2} \tag{2.4}\] _(2) The first_ \(n\) _eigenvalues,_ \(0<\eta_{1}\leq\eta_{2}\leq\cdots\leq\eta_{n}\)_, of the eigenvalue problem_ \[(\psi,v)_{L^{2}(\Omega)}=\eta(\mathcal{S}\psi,\mathcal{S}v)_{L^{2}(\Omega)}\ \forall v\in L^{2}(\Omega) \tag{2.5}\] _satisfy that_ \(\eta_{k}\geq Ck^{\alpha},k=1,2,\cdots,n\)_. The constant_ \(C\) _depends only on the forward operator_ \(\mathcal{S}\)_. And the constant_ \(\alpha\) _satisfies_ \(1\leq\alpha\leq\beta\)_._ **Proposition 1**.: _(Theorem 3.5 in [3]) Suppose Assumption 1 holds and \(\{e_{i}\}_{i=1}^{n}\) are independent random variables satisfying \(\mathbb{E}[e_{i}]=0\) and \(\mathbb{E}[e_{i}^{2}]\leq\sigma^{2}\). Let \(f_{n}\in L^{2}(\Omega)\) be the solution to the minimization problem (2.2). Then there exist constants \(\lambda_{0}>0\) and \(C>0\) such that the following estimates hold for all \(0<\lambda_{n}<\lambda_{0}\):_ \[\mathbb{E}\big[\,\|\mathcal{S}f_{n}-\mathcal{S}f^{*}\|_{n}^{2}\,\big] \leq C\lambda_{n}\,\|f^{*}\|_{L^{2}(\Omega)}^{2}+\frac{C\sigma^{2}}{n\lambda_{n}^{d/4}} \tag{2.6}\] \[\mathbb{E}\big[\,\|f_{n}-f^{*}\|_{L^{2}(\Omega)}^{2}\,\big] \leq C\,\|f^{*}\|_{L^{2}(\Omega)}^{2}+\frac{C\sigma^{2}}{n\lambda_{n}^{1+d/4}} \tag{2.7}\] Since our problem of interest is a linear parabolic inverse source problem and our noise consists of i.i.d. Gaussian random variables with bounded variance, the above proposition applies to this case. ## 3 Implementation of numerical solvers The algorithm consists of the forward and backward processes. The forward process solves the state and the adjoint equations. The backward process employs the results of the forward process and solves for the true source term. For simplicity, we denote the Euclidean norm \(\left\|\cdot\right\|_{L^{2}(\Omega)}=\left\|\cdot\right\|_{2}\). ### The backward process In this part, we illustrate the algorithm to find the approximation to the true source term. Let \(\mathcal{J}(f)\) be the objective function of our optimization problem (2.2). By construction it is convex, so its minimizer is a global one.
Then we use the first-order optimality condition: \[\nabla\mathcal{J}(f)=(\mathcal{S}^{*}\mathcal{S}+\lambda_{n}\mathbb{I})f-\mathcal{S}^{*}m\,\overset{\text{set}}{=}\,0 \tag{3.1}\] where \(\mathcal{S}^{*}\) is the adjoint operator of \(\mathcal{S}\). Define \(A(\lambda_{n}):=(\mathcal{S}^{*}\mathcal{S}+\lambda_{n}\mathbb{I}),\ b(m):=\mathcal{S}^{*}m\); then solving the problem (2.2) is equivalent to solving \(Af=b\). Hence, we implement the conjugate gradient method (CG) to find the solution. Let \(f_{0}\) be an initial guess and \(tol\) be the tolerance; \(m\) is the given final time observation for the inverse source problem and \(\lambda_{n}\) is the regularization parameter. Since we cannot directly form \(A(\lambda_{n})\) or \(b(m)\) in a numerical setting, we introduce Algorithm 1, which incorporates the forward process covered in the next subsection. The inner product we use in Algorithm 1 is defined as below: \[\forall\ x,y\in\mathbb{R}^{n},\ (x,y)_{M}=(x,My)_{2}=x^{T}My,\ \ \left\|x\right\|_{M}^{2}=x^{T}Mx\] where \(M\) is the mass matrix in the finite element discretization. This is a well-defined norm since \(M\) is symmetric and positive definite (SPD) by Lemma 4 in the appendix. Since we use \(\left\|\cdot\right\|_{n}\) instead of \(\left\|\cdot\right\|_{M}\) in our proofs, we prove in Proposition 2 that these two norms are equivalent. **Proposition 2**.: _For a fixed \(n\in\mathbb{N}\), the finite element norm \(\left\|\cdot\right\|_{M}\) and the empirical norm \(\left\|\cdot\right\|_{n}\) are equivalent._ Proof.: By the definition of equivalence, we want to show that there exist constants \(c_{1},c_{2}>0\) such that for any \(v\in\mathbb{R}^{n},c_{1}\left\|v\right\|_{n}\leq\left\|v\right\|_{M}\leq c_{2}\left\|v\right\|_{n}\). By Lemma 4 in the appendix, \(M\in\mathbb{R}^{n\times n}\) is SPD. So by the spectral theorem, we can decompose \(M\) as \[M=\sum_{i=1}^{n}\lambda_{i}p_{i}p_{i}^{T}\] where \(p_{i}\in\mathbb{R}^{n},\left\|p_{i}\right\|_{2}=1\), and \((p_{i},p_{j})_{2}=0\)\ \(\forall\)\ \(1\leq i\neq j\leq n\). And \(0<\lambda_{1}\leq\cdots\leq\lambda_{n}\) are the eigenvalues of \(M\). Fix any \(v\in\mathbb{R}^{n}\), \[\left\|v\right\|_{M}^{2}=v^{T}Mv\] \[=\sum_{i=1}^{n}\lambda_{i}(v^{T}p_{i})(p_{i}^{T}v)\] \[=\sum_{i=1}^{n}\lambda_{i}(v^{T}p_{i})^{2}\] Let \(y_{i}=v^{T}p_{i},\ y=[y_{1},\cdots,y_{n}]^{T}\) \[=\sum_{i=1}^{n}\lambda_{i}y_{i}^{2}\] So we have \[\lambda_{1}\sum_{i=1}^{n}y_{i}^{2}\leq\sum_{i=1}^{n}\lambda_{i}y_{i}^{2}\leq\lambda_{n}\sum_{i=1}^{n}y_{i}^{2}\] Note that \(\mathbb{I}=\sum_{i=1}^{n}p_{i}p_{i}^{T}\) (all eigenvalues are 1) \[\sum_{i=1}^{n}y_{i}^{2}=\sum_{i=1}^{n}v^{T}(p_{i}p_{i}^{T})v=v^{T}v=\left\|v\right\|_{2}^{2}\] By definition, \(\left\|v\right\|_{n}=\frac{1}{n^{1/2}}\left\|v\right\|_{2}\), so \(\left\|v\right\|_{2}^{2}=n\left\|v\right\|_{n}^{2}\). Therefore \[\lambda_{1}n\left\|v\right\|_{n}^{2}\leq\left\|v\right\|_{M}^{2}\leq\lambda_{n}n\left\|v\right\|_{n}^{2},\] that is, \((\lambda_{1}n)^{1/2}\left\|v\right\|_{n}\leq\left\|v\right\|_{M}\leq(\lambda_{n}n)^{1/2}\left\|v\right\|_{n}\). ### The forward process In this part, we introduce the implementations to solve the state equation (2.1) only, since the adjoint equation is essentially identical here. #### 3.2.1 Space-time Discretization Given the bounded domain \(\Omega\), we specify a spatial mesh size \(h\) and select a finite element space \(V_{h}\) from the Sobolev space \(H^{1}_{0}(\Omega)\).
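Algorithm 1 is a standard CG iteration applied to this linear system. The following minimal, matrix-free Python sketch conveys the idea; for simplicity it uses the Euclidean inner product rather than the \(M\)-inner product defined above, and a small dense matrix stands in for the forward and adjoint PDE solves.

```python
import numpy as np

def solve_source_cg(apply_S, apply_Sstar, m, lam, f0, tol=1e-10, max_iter=200):
    """Matrix-free CG for (S*S + lam I) f = S* m, as in the backward process.

    apply_S / apply_Sstar wrap the forward and adjoint solves (FEM or the
    reduced order solver); m is the measurement vector, lam the regularization.
    """
    def apply_A(f):
        return apply_Sstar(apply_S(f)) + lam * f

    b = apply_Sstar(m)
    f = f0.copy()
    r = b - apply_A(f)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        f = f + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return f

# Toy check with a dense matrix standing in for the discretized forward operator
rng = np.random.default_rng(5)
S = rng.normal(size=(60, 30))
f_true = rng.normal(size=30)
f_hat = solve_source_cg(lambda f: S @ f, lambda y: S.T @ y, S @ f_true,
                        lam=1e-8, f0=np.zeros(30))
print(np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```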
Then given a source term \(f\in L^{2}(\Omega)\), the semi-discrete formulation is to find \(u_{h}\in C^{0}((0,T],V_{h})\) with \(u_{h}(0)=0\) satisfying: \[(\partial_{t}u_{h}(t),\nu)+(\nabla u_{h}(t),\nabla\nu)=(f,\nu)\ \ \forall\nu\in V_{h},t\in(0,T] \tag{3.2}\] Next, we specify a temporal stepsize \(\Delta t\) and denote \(u_{h}^{n}=u_{h}(n\cdot\Delta t)\). So the fully discrete formulation is to find \(u_{h}^{n}\in V_{h}\) with \(u_{h}^{0}=0\ \forall n\in\{1,2,\cdots,ceil(\frac{T}{\Delta t})\}\) satisfying: \[(\partial_{t}^{+}u_{h}^{n},\nu)+(\nabla u_{h}^{n},\nabla\nu)=(f,\nu)\ \ \forall\nu\in V_{h} \tag{3.3}\] We replace \(\nu\in V_{h}\) by canonical shape functions in the finite element method to formulate the matrix representation from (3.3). So we need to find \(u_{h}^{n}\in V_{h}\) with \(u_{h}^{0}=0\ \forall n\in\{1,2,\cdots,ceil(\frac{T}{\Delta t})\}\) satisfying: \[M\partial_{t}^{+}u_{h}^{n}+Au_{h}^{n}=b \tag{3.4}\] Here \(M\) is the mass matrix, \(A\) is the stiffness matrix, and \(b\) is the load vector. To derive \(\partial_{t}^{+}u_{h}^{n}\), we use a backward differentiation formula (BDF) scheme: \[\partial_{t}^{+}u_{h}^{n}=\begin{cases}0&n=0\\ \frac{u_{h}^{n}-u_{h}^{n-1}}{\Delta t}&n=1\\ \frac{3u_{h}^{n}-4u_{h}^{n-1}+u_{h}^{n-2}}{2\Delta t}&n\geq 2\end{cases} \tag{3.5}\] #### 3.2.2 The finite element method Our implementation of the finite element method is conducted using the NGSolve package because it possesses high performance and yields reliable results. The package can be downloaded at [https://ngsolve.org/downloads](https://ngsolve.org/downloads). The code is included in the github repository, and the link will be available in Section 5. #### 3.2.3 The fast reduced order method In a nutshell, our reduced order method utilizes the discretization matrices from FEM (in our case, via NGSolve) and then employs a Krylov sequence to conduct dimensional reduction. This method is more convenient and adaptable than other existing reduced order methods because it does not require extra training data to generate the reduced order basis. So this new approach can solve the inverse problem without having any prior knowledge about the true source term. See Fig. 1 for a pipeline illustration. In this paper, we provide a general idea of the ROM and a more detailed analysis of this method is given in the work of Walkington et al. [11]. Basically, we want to reduce the forward problem to a small dimension which is denoted as \(\ell\) (\(\leq 10\)). So we need a reduced order basis (ROB) that retains essential information about the problem and a projection matrix that projects the full order model (FOM) onto the reduced subspace. First, we generate a Krylov sequence based on the discretization matrices \(M,A\) and \(b\) in (3.4). \[\mathbf{u}_{h}^{i}=\sum_{j=1}^{N}(u_{i})_{j}\varphi_{j},\ u_{i}\in\mathbb{R}^{N},\ \varphi_{j}\in V_{h},\qquad\forall\ 1\leq i\leq\ell \tag{3.6}\] Where \(N\) is the dimension of \(V_{h}\). \[Au_{1}=b,\ \ Au_{i}=Mu_{i-1}\qquad\forall\ 2\leq i\leq\ell \tag{3.7}\] \[U_{\ell}=[u_{1}|u_{2}|\cdots|u_{\ell}]\in\mathbb{R}^{N\times\ell},\qquad r=rank(U_{\ell}) \tag{3.8}\] Note that if needed, we can set \(r=\ell\) to simplify computations. In this paper, we use \((\nabla\varphi,\nabla\psi)=(\varphi,\psi)_{V}\ \ \forall\varphi,\psi\in V_{h}\). For a complete definition of \((\cdot,\cdot)_{V}\), see Section 2 of [11].
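For reference, the time marching defined by (3.4)-(3.5) can be written compactly as follows. This is a hedged, dense-matrix sketch: it applies directly to the small reduced system introduced later, whereas the full FEM system would use sparse factorizations rather than `np.linalg.solve`.

```python
import numpy as np

def bdf_forward_solve(M, A, b, T, dt):
    """March M du/dt + A u = b to time T with the BDF scheme in (3.5), u(0) = 0."""
    n_steps = int(np.ceil(T / dt))
    N = b.shape[0]
    u_prev2, u_prev1 = np.zeros(N), np.zeros(N)
    lhs_be = M / dt + A            # backward Euler matrix for the first step
    lhs_bdf2 = 1.5 * M / dt + A    # BDF2 matrix for the remaining steps

    for step in range(1, n_steps + 1):
        if step == 1:
            # (M/dt + A) u^1 = b + M u^0 / dt
            u = np.linalg.solve(lhs_be, b + M @ u_prev1 / dt)
        else:
            # (3M/(2dt) + A) u^n = b + M (4 u^{n-1} - u^{n-2}) / (2 dt)
            u = np.linalg.solve(lhs_bdf2, b + M @ (4 * u_prev1 - u_prev2) / (2 * dt))
        u_prev2, u_prev1 = u_prev1, u
    return u_prev1                 # coefficients approximating u(., T)

# Example usage on a tiny SPD system (stand-in for the reduced matrices):
rng = np.random.default_rng(0)
R = rng.normal(size=(5, 5))
M_ex, A_ex, b_ex = np.eye(5), R @ R.T + 5 * np.eye(5), rng.normal(size=5)
print(bdf_forward_solve(M_ex, A_ex, b_ex, T=1.0, dt=0.01))
```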
Figure 1: Pipeline comparison

To find the reduced order basis, we solve the following minimization problem: \[\underset{\tilde{\varphi}_{1},\cdots,\tilde{\varphi}_{r}\in V_{h}}{min}\sum_{j=1}^{\ell}\|\mathbf{u}_{h}^{j}-\sum_{i=1}^{r}(\mathbf{u}_{h}^{j},\,\tilde{\varphi}_{i})_{V}\tilde{\varphi}_{i}\|_{V}\qquad\text{ with }(\tilde{\varphi}_{i},\tilde{\varphi}_{j})_{V}=\delta_{ij},\ \ \forall\ 1\leq i,j\leq r\leq\ell\] (P1) **Lemma 1**.: _(modified Lemma 1 in [11]) The solution to (P1) is the first \(r\) eigenfunctions of \(\mathcal{R}:V_{h}\to V_{h}\) where \(\mathcal{R}(\varphi)=\sum_{j=1}^{\ell}(\mathbf{u}_{h}^{j},\varphi)_{V}\mathbf{u}_{h}^{j}\)_ The proof of Lemma 1 can be found in Theorem 2.7 of [6]. **Lemma 2**.: \(\mathcal{R}=\mathcal{U}\mathcal{U}^{*}\)_, where \(\mathcal{U}:\mathbb{R}^{\ell}\to V_{h}\) is a compact linear operator and \(\mathcal{U}^{*}:V_{h}\to\mathbb{R}^{\ell}\) is the Hilbert adjoint operator. And \(\mathcal{U}^{*}\mathcal{U}\alpha=U_{\ell}^{T}AU_{\ell}\alpha\ \forall\alpha\in\mathbb{R}^{\ell}\), where \(A\) is the stiffness matrix in (3.4) and \(U_{\ell}\) is the Krylov matrix in (3.8)._ Proof.: We define \(\mathcal{U}:\mathbb{R}^{\ell}\to V_{h}\) by \(\mathcal{U}\alpha=\sum_{i=1}^{\ell}\alpha_{i}\mathbf{u}_{h}^{i}\), where \(\alpha=[\alpha_{1},\alpha_{2},\cdots,\alpha_{\ell}]^{T}\in\mathbb{R}^{\ell}\). The Hilbert adjoint operator \(\mathcal{U}^{*}:V_{h}\to\mathbb{R}^{\ell}\) satisfies \[(\mathcal{U}^{*}\nu,\alpha)_{\mathbb{R}^{\ell}}=(\nu,\mathcal{U}\alpha)_{V}=\sum_{i=1}^{\ell}\alpha_{i}(\mathbf{u}_{h}^{i},\nu)_{V}\] This implies \[\mathcal{U}^{*}\nu=[(\mathbf{u}_{h}^{1},\nu)_{V},\cdots,(\mathbf{u}_{h}^{\ell},\nu)_{V}]^{T}\] So \[\mathcal{U}\mathcal{U}^{*}\nu=\sum_{i=1}^{\ell}(\mathbf{u}_{h}^{i},\nu)_{V}\mathbf{u}_{h}^{i}=\mathcal{R}\nu\] We also have \[\mathcal{U}^{*}\mathcal{U}\alpha=[(\mathbf{u}_{h}^{1},\sum_{i=1}^{\ell}\alpha_{i}\mathbf{u}_{h}^{i})_{V},\cdots,(\mathbf{u}_{h}^{\ell},\sum_{i=1}^{\ell}\alpha_{i}\mathbf{u}_{h}^{i})_{V}]^{T}\] By (3.6) and the definition of the stiffness matrix \(A\) \[(\mathbf{u}_{h}^{i},\mathbf{u}_{h}^{j})_{V}=(\sum_{k=1}^{N}(u_{i})_{k}\varphi_{k},\sum_{k=1}^{N}(u_{j})_{k}\varphi_{k})_{V}=u_{i}^{T}Au_{j}\] Then with (3.8), we have \[\mathcal{U}^{*}\mathcal{U}\alpha=[\sum_{i=1}^{\ell}\alpha_{i}u_{1}^{T}Au_{i},\cdots,\sum_{i=1}^{\ell}\alpha_{i}u_{\ell}^{T}Au_{i}]^{T}=U_{\ell}^{T}AU_{\ell}\alpha\] **Lemma 3**.: _Denote \(K_{\ell}=U_{\ell}^{T}AU_{\ell}\in\mathbb{R}^{\ell\times\ell}\). Let \(\lambda_{1}(K_{\ell})\geq\lambda_{2}(K_{\ell})\geq\cdots\geq\lambda_{\ell}(K_{\ell})>0\) be the eigenvalues of \(K_{\ell}\), then \(\lambda_{i}(K_{\ell})=\lambda_{i}(\mathcal{R})\), \(\forall\ 1\leq i\leq\ell\). Moreover, if \(x_{i}\in\mathbb{R}^{\ell}\) and \(\mathcal{U}^{*}\mathcal{U}x_{i}=\lambda_{i}x_{i}\) s.t. \(\lambda_{i}>0\) and the \(x_{i}\) are orthonormal, then, letting \(y_{i}=\frac{1}{\sqrt{\lambda_{i}}}\mathcal{U}x_{i}\), we have \(\mathcal{U}\mathcal{U}^{*}y_{i}=\lambda_{i}y_{i}\) and the \(y_{i}\) are orthonormal._ Proof.: Here we prove a more general case, that the nonzero eigenvalues of \(\mathcal{U}\mathcal{U}^{*}\) and \(\mathcal{U}^{*}\mathcal{U}\) are the same. Say \(\lambda\neq 0\) is an eigenvalue of \(\mathcal{U}\mathcal{U}^{*}\) with eigenfunction \(x\in V_{h}\). Then \[\mathcal{U}^{*}\mathcal{U}(\mathcal{U}^{*}x)=\mathcal{U}^{*}(\mathcal{U}\mathcal{U}^{*}x)=\lambda\mathcal{U}^{*}x\] So \(\lambda\) is an eigenvalue of \(\mathcal{U}^{*}\mathcal{U}\) with eigenvector \(\mathcal{U}^{*}x\in\mathbb{R}^{\ell}\).
Proving the other direction uses the same process. Thus, by Lemma 2, the positive eigenvalues of \(K_{\ell}\) and \(\mathcal{R}\) are the same. Next, let \(x_{i}\in\mathbb{R}^{\ell}\) and \(\mathcal{U}^{*}\mathcal{U}x_{i}=\lambda_{i}x_{i}\) s.t. \(\lambda_{i}>0\) and \(x_{i}\) is orthonormal. Set \(y_{i}=\frac{1}{\sqrt{\lambda_{i}}}\mathcal{U}x_{i}\), it is clear that \(y_{i}\) is orthonormal and \[\mathcal{U}\mathcal{U}^{*}y_{i}=\frac{\lambda_{i}}{\sqrt{\lambda_{i}}} \mathcal{U}x_{i}=\sqrt{\lambda_{i}}\mathcal{U}x_{i}=\lambda_{i}y_{i}\] **Theorem 1**.: _The solution to (P1) is \(\tilde{\varphi}_{i}=\frac{1}{\sqrt{\lambda_{i}}}\mathcal{U}\psi_{i}\), where \(\lambda_{i}>0\) and \(\psi_{i}\) satisfies \(K_{\ell}\psi_{i}=\lambda_{i}\psi_{i}\), \(i=1,2,\cdots,r\). And the coefficient vector of \(\tilde{\varphi}_{i}\) is \(\frac{1}{\sqrt{\lambda_{i}}}U_{\ell}\cdot\psi_{i}\)._ Proof.: By Lemma 3, the eigenvalues of \(K_{\ell}\) and \(\mathcal{R}\) are the same. So let \(\psi_{1},\cdots,\psi_{r}\in\mathbb{R}^{\ell}\) be the first \(r\) orthonormal eigenvectors of \(K_{\ell}\) with eigenvalues \(\lambda_{1}\geq\cdots\geq\lambda_{r}>0\) respectively. Then let \(\tilde{\varphi}_{i}=\frac{1}{\sqrt{\lambda_{i}}}\mathcal{U}\psi_{i}\), \(\tilde{\varphi}_{i}\)'s are the first \(r\) orthonormal eigenfunctions of \(\mathcal{U}\mathcal{U}^{*}\). So \(\tilde{\varphi}_{1},\cdots,\tilde{\varphi}_{r}\) are the solution to (P1) by Lemma 1 and Lemma 2. By the definition in Lemma 2, \[\mathcal{U}\psi_{i}=\sum_{j=1}^{\ell}(\psi_{i})_{j}\sum_{k=1}^{N}(u_{j})_{k} \varphi_{k}=\sum_{k=1}^{N}(\sum_{j=1}^{\ell}(\psi_{i})_{j}(u_{j})_{k})\varphi_ {k}=\sum_{k=1}^{N}(U_{\ell}\cdot\psi_{i})_{k}\varphi_{k}\] Where \((U_{\ell}\cdot\psi_{i})_{k}\) is the \(k\)th component of \(U_{\ell}\cdot\psi_{i}\in\mathbb{R}^{N}\). Thus, the coefficient vector of \(\tilde{\varphi}_{i}\) is \(\frac{1}{\sqrt{\lambda_{i}}}U_{\ell}\cdot\psi_{i}\) We obtain the reduced order coefficient (projection) matrix \(Q\) via concatenating the coefficient vectors. \[Q=U_{\ell}[\frac{\psi_{1}}{\sqrt{\lambda_{1}}}\frac{\psi_{2}}{\sqrt{\lambda_{2 }}}]\cdots[\frac{\psi_{r}}{\sqrt{\lambda_{r}}}] \tag{3.9}\] The following is our algorithm to compute the matrix \(Q\). Now that we obtain the coefficient matrix to generate the reduced order basis. 
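For concreteness, the construction of \(Q\) from Eqs. (3.6)-(3.9) can be sketched as a dense, non-incremental computation; the incremental, tolerance-truncated version is given as Algorithm 2 below. The sketch assumes \(M\) and \(A\) are available as SciPy sparse matrices and \(b\) as a NumPy vector; the names are illustrative and not taken from the released code.

```python
# Dense sketch of Eqs. (3.6)-(3.9): build the Krylov matrix U_l and the
# ROB coefficient matrix Q.  Simplified, non-incremental variant of
# Algorithm 2 below.
import numpy as np
import scipy.sparse.linalg as spla
from scipy.linalg import eigh

def krylov_matrix(M, A, b, ell):
    lu = spla.splu(A.tocsc())
    cols = [lu.solve(b)]                      # A u_1 = b
    for _ in range(1, ell):
        cols.append(lu.solve(M @ cols[-1]))   # A u_i = M u_{i-1}
    return np.column_stack(cols)              # U_l in R^{N x l}

def get_matrix_Q(M, A, b, ell, tol=1e-12):
    U = krylov_matrix(M, A, b, ell)
    K = U.T @ (A @ U)                         # K_l = U_l^T A U_l
    lam, psi = eigh(K)                        # eigenvalues in ascending order
    lam, psi = lam[::-1], psi[:, ::-1]        # sort descending
    r = int(np.sum(lam > tol))                # truncate by tolerance
    return U @ (psi[:, :r] / np.sqrt(lam[:r]))   # columns U_l psi_i / sqrt(lam_i)
```

The reduced matrices \(Q^{T}MQ\), \(Q^{T}AQ\) and \(Q^{T}b\) used below then follow directly by congruence with \(Q\).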
We can compute the reduced order subspace \[V_{r}=span\{\tilde{\varphi}_{1},\tilde{\varphi}_{2},\cdots,\tilde{\varphi}_{r}\}\] The reduced order formulation is a modification to (3.4), we need to find \(u_{r}^{n}\in V_{r}\) with \(u_{r}^{0}=0\ \forall n\in\{1,2,\cdots,ceil(\frac{\mathcal{T}}{\Delta t})\}\) satisfying: \[M_{r}\partial_{t}^{+}u_{r}^{n}+A_{r}u_{r}^{n}=b_{r} \tag{3.10}\] where \[M_{r}=Q^{T}MQ,\quad A_{r}=Q^{T}AQ,\quad b_{r}=Q^{T}b \tag{3.11}\] ``` 0:\(M,A,b,\ell,tol\) 1: Solve \(Au_{1}=b\) 2:\(U_{1}=u_{1},K_{1}=u_{1}^{T}Au_{1}\) 3:for\(i=2\) to \(\ell\)do 4: Solve \(Au_{i}=Mu_{i-1}\) 5:\(U_{i}=[U_{i-1}|u_{i}]\) 6:if\(K_{i-1}\) is a scalar then\(\alpha=u_{i-1}^{T}Au_{i}\) 7:else\(\alpha=[K_{i-1}(i-1,\;2:i-1)\;|\;u_{i-1}^{T}Au_{i}]\) 8:endif 9:\(\beta=u_{i}^{T}Au_{i}\) 10:\(K_{i}=\begin{bmatrix}K_{i-1}&\alpha^{T}\\ \alpha&\beta\end{bmatrix}\) 11:\([\Psi,\Lambda]=eig(K_{i})\)\(\triangleright\)\(\Psi\) contains the eigenvectors, \(\Lambda\) contains the eigenvalues 12:if\(\Lambda(i,i)\leq tol\)then Break\(\triangleright\) Truncate the eigenvalues based on \(tol\) 13:endif 14:endfor 15:\(Q=U_{i}\Psi(:,\;1:i-1)(\Lambda(1:i-1,\;1:i-1))^{-1/2}\) return\(Q\) ``` **Algorithm 2** get_matrix_Q (Algorithm 2 in [11]) Using the BDF scheme in (3.5), we compose the Algorithm 3 to solve the forward process using the new ROM. Let \(N_{t}=ceil(\frac{T}{\Delta t})\). \(Qu_{r}=Q[u_{r}^{0}|u_{r}^{1}|\cdots|u_{r}^{N_{t}}]\) derived from the method above is the reduced order coefficient array projected back to the FOM. By applying this coefficient array to the finite element basis, we obtain a numerical solution to the forward process. ## 4 Convergence analysis and stochastic error estimate ### 5 Numerical results In this section, we demonstrate the performance of our fast ROM framework in solving linear parabolic inverse source problems characterized in Section 2. We use the FEM via NGSolve package as a benchmark and find our method achieves significant computation time savings while retaining good accuracy for different source patterns1. Our testing source functions are inspired by the work of Wang et al. (Section 5 of [12]). Our code is available at [https://github.com/Readilyield/ROM_LPIS](https://github.com/Readilyield/ROM_LPIS). Throughout this section, we refer the CG-FEM pipeline to the CG backward framework with FEM forward solver, and CG-ROM pipeline to the CG backward framework with our ROM forward solver. The following table displays the setup and parameters for the testing problems. Note that our initial guess for the source term is \(f_{0}(x,y)=sin(\pi x)sin(\pi y)\), which is largely irrelevant with the true source terms we have in Sections 5.1, 5.2 and 5.3. ### Single letter reconstruction In this part, we apply the FEM and our ROM to recover the source term f, which is an indicator function in the shape of an Arial regular font capital letter. Here we present the reconstruction of letter A. On a 5-trial average, CG-FEM uses 1018.38 seconds with 31 iterations in the CG framework (Algorithm 1) and CG-ROM uses 66.13 seconds with 72 iterations in the CG framework. ### Non-letter reconstruction In this part, we apply the FEM and our ROM to recover the source term f, which is an indicator function in an irregular shape. For this example, we choose to recover a profile of Scotty, the mascot of CMU. On a 5-trial average, CG-FEM uses 1031.04 seconds with 31 iterations in the CG framework and CG-ROM uses 81.88 seconds with 97 iterations in the CG framework. 
\begin{table} \begin{tabular}{|c|c|c|} \hline Item & Section 5.1\&5.2 & Section 5.3 \\ \hline \hline \(\Omega\) & \([0,1]\times[0,1]\) & \([0,3]\times[0,1]\) \\ \hline \(h\) & \(1/2^{8}\) & \(1/2^{7}\) \\ \hline \(\Delta t\) & \(1/2^{8}\) & \(1/2^{7}\) \\ \hline \(T\) & \(1\) & \(1\) \\ \hline \(f_{0}\) & \(sin(\pi x)sin(\pi y)\) & \(sin(\pi x)sin(\pi y)\) \\ \hline \(tol(CG)\) & \(1E-8\) & \(1E-8\) \\ \hline \(\lambda_{n}\) & \(1E-7\) & \(1E-7\) \\ \hline Noise level and \(\sigma\) & 10\%, \(1E-3\) & 10\%, \(1E-3\) \\ \hline \end{tabular} \end{table} Table 1: General setup Figure 2: Letter A ### Multiple letter reconstruction In this part, we apply the FEM and our ROM to recover the source term f, which is an indicator function in the shape of multiple Arial regular font capital letters. Here we present the reconstruction of CMU. On a 5-trial average, CG-FEM uses 2637.18 seconds with 256 iterations in the CG framework and CG-ROM uses 171.29 seconds with 355 iterations in the CG framework. ### Time efficiency gain Here we present a table of averaged trials with varying spatial meshsize (\(h\)) and time stepsize (\(\Delta t\)) on recovering a single letter to demonstrate the time efficiency of our ROM. Other parameters not included in Table 2 are identical to those in Table 1. The efficiency gain is formulated as FEM time/ROM time. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline h & \(1/2^{5}\) & \(1/2^{6}\) & \(1/2^{7}\) & \(1/2^{8}\) & \(1/2^{9}\) \\ \hline \(\Delta t\) & \(1/2^{5}\) & \(1/2^{6}\) & \(1/2^{7}\) & \(1/2^{8}\) & \(1/2^{9}\) \\ \hline FEM time(s) & 1.98 & 16.07 & 105.51 & 1033.48 & 14416.52 \\ \hline ROM time(s) & 0.79 & 3.23 & 15.25 & 60.77 & 364.85 \\ \hline **Efficiency gain** & 2.48 & 4.97 & 6.92 & 17.01 & 39.51 \\ \hline FEM ite & 43.0 & 49.0 & 39.0 & 32.0 & 37.0 \\ \hline ROM ite & 77.0 & 96.0 & 93.0 & 70.0 & 62.0 \\ \hline \end{tabular} \end{table} Table 2: Average test results for recovering single letters Figure 4: Letters CMU Figure 3: Scotty profile ## 6 Concluding remarks
2305.13545
Stretching bonds in Density Functional Theory without artificial symmetry breaking
Accurate first-principles calculations for the energies, charge distributions, and spin symmetries of many-electron systems are essential to understand and predict the electronic and structural properties of molecules and materials. Kohn-Sham density functional theory (KS-DFT) stands out among electronic-structure methods due to its balance of accuracy and computational efficiency. It is now extensively used in fields ranging from materials engineering to rational drug design. However, to achieve chemically accurate energies, standard density functional approximations in KS-DFT often need to break underlying symmetries, a long-standing "symmetry dilemma". By employing fragment spin densities as the main variables in calculations (rather than total molecular densities as in KS-DFT), we present an embedding framework in which this symmetry dilemma is resolved for the case of stretched molecules. The spatial overlap between fragment densities is used as the main ingredient to construct a simple, physically-motivated approximation to a universal functional of the fragment densities. This 'overlap approximation' is shown to significantly improve semi-local KS-DFT binding energies of molecules without artificial symmetry breaking.
Yuming Shi, Yi Shi, Adam Wasserman
2023-05-22T23:38:23Z
http://arxiv.org/abs/2305.13545v1
# Stretching bonds in Density Functional Theory without artificial symmetry breaking ###### Abstract Accurate first-principles calculations for the energies, charge distributions, and spin symmetries of many-electron systems are essential to understand and predict the electronic and structural properties of molecules and materials. Kohn-Sham density functional theory (KS-DFT) stands out among electronic-structure methods due to its balance of accuracy and computational efficiency. It is now extensively used in fields ranging from materials engineering to rational drug design. However, to achieve chemically accurate energies, standard density functional approximations in KS-DFT often need to break underlying symmetries, a long-standing "symmetry dilemma". By employing _fragment_ spin densities as the main variables in calculations (rather than total molecular densities as in KS-DFT), we present an embedding framework in which this symmetry dilemma is resolved for the case of stretched molecules. The spatial overlap between fragment densities is used as the main ingredient to construct a simple, physically-motivated approximation to a universal functional of the fragment densities. This 'overlap approximation' is shown to significantly improve semi-local KS-DFT binding energies of molecules without artificial symmetry breaking. Symmetry breaking can occur in the quantum-mechanical simulation of molecules when the lowest-energy solutions of the electronic Schrodinger equation (SE) do not exhibit the same symmetries of the underlying Hamiltonian. An exact solution of the SE or, equivalently, a solution of the Kohn-Sham equations of Density Functional Theory (KS-DFT) [1; 2] with the _exact_ exchange-correlation (XC) functional \(E_{\rm XC}[n_{\uparrow},n_{\downarrow}]\) yields spin-densities \(\{n_{\uparrow},n_{\downarrow}\}\) that retain the symmetry of the molecular Hamiltonian. However, it is well known that spin and charge symmetries of stretched molecules are often broken by _approximate_ density-functional approximations (DFAs) for XC. Attempts to prevent such symmetry breaking often lead to qualitatively incorrect electric and magnetic properties of molecules and materials as a consequence of delocalization and static-correlation errors [3; 4; 5; 6; 7; 8]. Symmetry breaking, when allowed, can provide insight into the quantum-mechanical correlations that exist between fluctuating charges or spins in the constituent fragments. When these fragments are separated by a large distance \(R\rightarrow\infty\), the correct (symmetry unbroken) solution of the SE is an infinite-time average over fluctuations among the possible Figure 1: **a**: Binding energy of H\({}_{2}\) calculated through: (i) CCSD(T) reference values (red); (ii) spin-unrestricted PBE (blue); (iii) spin-restricted PBE (purple); and (iv) OA-PBE from Eq. (3-4) (gray). **b**: restricted-PBE energies (purple) decomposed into: Fragment relaxation energies \(E_{\rm f}-2E_{\rm Hydroden}\) (green) and partition energy \(E_{\rm p}+V_{\rm NN}\) (orange). **c**: Decomposition of \(E_{\rm p}\) (orange) into non-additive kinetic (blue), exchange-correlation (red) and all remaining contributions (pink). Note that the large error of the restricted-PBE calculation as \(R\rightarrow\infty\) can be attributed almost entirely to \(E_{\rm XC}^{\rm nad}\). broken-symmetry solutions. Consider for example the spin symmetry of a stretched hydrogen molecule in its singlet ground state. 
The correct spin-up density \(n_{\uparrow}({\bf r})\) equals the spin-down density \(n_{\downarrow}({\bf r})\) at every point in space, but imposing this symmetry on the solution of the KS-DFT equations with an approximate XC functional (see'restricted' in panel **a** of Fig. 1 for the popular PBE [9]) leads to unacceptably large energy errors as the molecule is stretched beyond \(R\sim 3\) bohr. A broken-symmetry solution exists with an energy that runs close to the exact one (unrestricted PBE in panel **a** of Fig. 1), with \(n_{\uparrow}({\bf r})\) localized on one atom and \(n_{\downarrow}({\bf r})\) on the other. Although strictly incorrect, this set of spin-densities does reflect one of the two possible dissociation channels observed when infinitesimal environmental perturbations induce the collapse of the wavefunction that breaks the chemical bond [10]. Similarly, recent studies show that the SCAN meta-GGA functional [11] can yield highly accurate binding energies for a great number of systems including some of the'strongly-correlated' type [12; 13; 14; 15; 16], but only when spin-symmetry breaking is allowed. A question then arises on the interpretation of such broken-symmetry solutions for _finite_\(R\)[17]. These are useful, among others, to calculate values of magnetic properties of molecules such as exchange-coupling constants [18; 19]. Is it possible to calculate accurate energies _without_ symmetry breaking when employing standard (e.g. PBE, SCAN) XC functionals? In this work we provide a positive answer. The key is to use a formulation of DFT in which: (1) Electronic _fragment_ spin-densities (as an alternative to total molecular densities) are sharply defined for finite \(R\) and recover those of isolated atoms as \(R\to\infty\); (2) each of those fragment spin-densities is described through a mixed-state ensemble that can place fractional charges and spins on the fragments to guarantee the correct symmetries; and (3) there exists a universal functional of the set of fragment spin-densities that describes the fragment interaction and that is amenable to simple yet accurate approximations. _Strategy, results, and discussion:_ All three of these features are provided by Partition-DFT (P-DFT) [20; 21], a density embedding method in which a molecule, defined by a nuclear "external" potential \(v({\bf r})=\sum_{\alpha}^{N_{\rm frag}}v_{\alpha}({\bf r})\) and \(N\) electrons, is partitioned into \(N_{\rm frag}\) smaller fragments (labeled by \(\alpha\)). The features listed above are met in the following way: (1) The fragment spin-densities are uniquely defined by the requirement that the sum of fragment energies \(E_{\rm f}\) be minimized under the constraint that the sum of fragment spin-densities \(n_{\rm f,\sigma}({\bf r})\equiv\sum_{\alpha}^{N_{\rm frag}}n_{\alpha,\sigma}( {\bf r})\) matches the correct spin-density \(n_{\sigma}({\bf r})\) of the molecule, i.e. the ground-state spin-density for \(N\) electrons in \(v({\bf r})\). 
The Lagrange multiplier that enforces this constraint is a unique _partition potential_\(v_{\rm p}({\bf r})\)[22] ; (2) Each of the fragment spin-densities \(n_{\alpha,\sigma}({\bf r})\) is a ground-state _ensemble_ density for a (possibly fractional) number of electrons and spins in \(v_{\alpha}({\bf r})+v_{\rm p}({\bf r})\); (3) A universal functional \(Q[{\bf n}]\) of the set of fragment spin-densities \({\bf n}\equiv\{n_{\alpha,\sigma}\}\) is defined as: \[Q[{\bf n}]=F[n]-\sum_{\alpha}^{N_{\rm frag}}\min_{\hat{\rho}_{\alpha}\to\{n_{ \alpha,\uparrow},n_{\alpha,\downarrow}\}}Tr(\hat{\rho}_{\alpha}(\hat{T}+\hat{V }_{\rm ee}))\ \, \tag{1}\] where \(F[n]=\min_{\Psi\to\{n_{\uparrow},n_{\downarrow}\}}\langle\Psi|\hat{T}+\hat{V}_{ \rm ee}|\Psi\rangle\) is the Levy-Lieb functional of the total density [23; 24], \(\hat{T}\) and \(\hat{V}_{\rm ee}\) are the kinetic and electron-electron repulsion operators, and the search inside the sum is performed over fragment density matrices \(\hat{\rho}_{\alpha}\) yielding the preset pairs of fragment spin-densities \(\{n_{\alpha,\uparrow},n_{\alpha,\downarrow}\}\). When evaluated at the unique set \({\bf n}\) of fragment spin-densities minimizing \(E_{\rm f}\), the ground-state energy of the molecule is then given by: \[E=E_{\rm f}[{\bf n}]+E_{\rm p}[{\bf n}], \tag{2}\] where \(E_{\rm f}\) is the fragment energy summation without the contribution from the partition potential (\(E_{\rm f}=\sum_{\alpha}^{N_{\rm frag}}E_{\alpha}\)) and the _partition energy_\(E_{\rm p}\) has been defined as the rest \(E_{\rm p}[{\bf n}]=Q[{\bf n}]+\int d{\bf r}v({\bf r})n_{\rm f}({\bf r})-\sum_{ \alpha,\sigma}^{N_{\rm frag}}\int d{\bf r}v_{\alpha}({\bf r})n_{\alpha,\sigma} ({\bf r})\). It can be proven [21] that the partition potential \(v_{\rm p}({\bf r})\) is the functional derivative of \(E_{\rm p}\) evaluated for a given set of fragment spin-densities \({\bf n}\). It is useful to decompose \(Q[{\bf n}]\) in terms of the usual Kohn-Sham density functional quantities as the sum of three non-additive terms: \(Q[{\bf n}]=T_{\rm s}^{\rm nad}[{\bf n}]+E_{\rm H}^{\rm nad}[{\bf n}]+E_{\rm XC }^{\rm nad}[{\bf n}]\), where, e.g., \(E_{\rm XC}^{\rm nad}[{\bf n}]=E_{\rm XC}[n_{\rm f}]-\sum_{\alpha}^{N_{\rm frag }}E_{\rm XC}[n_{\alpha}]\). One can then see that most of the PBE error for stretched H\({}_{2}\), for example, is contained in \(E_{p}\) (panel \({\bf b}\) of Fig. 1) and, more specifically, in \(E_{\rm XC}^{\rm nad}\) (panel \({\bf c}\)). Various strategies for approximating the KS kinetic term, \(T_{\rm s}^{\rm nad}[{\bf n}]\), are being investigated [25; 26; 27] but here we compute this term _exactly_ via density-to-potential inversions [28; 29; 30; 31]. Thus, for a given approximation to \(E_{\rm XC}[n]\), the P-DFT calculations simply reproduce the results of KS-DFT, including all of their errors (purple line in panel \({\bf a}\) of Fig. 1). In this article, we argue that almost all of the error of PBE at dissociation can be attributed to the incorrect behavior of \(E_{\rm XC}^{\rm nad}(R)\) and can be suppressed through improved approximations for this term alone. The gray line label OA-PBE in panel \({\bf a}\) of Fig. 1, for example, shows how a simple 'overlap approximation' for \(E_{\rm XC}^{\rm nad}[{\bf n}]\) (to be defined below) removes most of the PBE error while preserving the correct spin symmetry as \(R\to\infty\). 
We demonstrate that accurate binding energies for stretched molecules can be obtained through physically-motivated approximations for \(E_{\rm XC}^{\rm nad}[{\bf n}]\)_without_ symmetry breaking. We begin with the simplest case of closed-shell molecules partitioned into \(N_{\rm frag}=2\) fragments with spin-summed fragment densities \(n_{A}({\bf r})\) and \(n_{B}({\bf r})\) using a standard GGA funcional (PBE). For all such cases, like in H\({}_{2}\), \(E^{\rm nad}_{\rm xc,PBE}(R)\) goes to an incorrect positive constant as \(R\to\infty\) rather than satisfying the _exact constraint_: \(E^{\rm nad}_{\rm xc}[{\bf n}](R)\to 0\). We now build this constraint into \(E^{\rm nad}_{\rm xc}[{\bf n}]\) through: \[E^{\rm nad,OA}_{\rm xc,PBE}[{\bf n}]=S[{\bf n}]E^{\rm nad}_{\rm xc,PBE}, \tag{3}\] \[S[{\bf n}]={\rm erf}\left[\frac{C}{N_{b}}\int d{\bf r}\left(n_{ A}({\bf r})n_{B}({\bf r})\right)^{p}\right], \tag{4}\] where \(N_{b}\) is the bond order. When the two parameters \(C\) and \(p\) in Eq. (4) are fixed as \(C=2\) and \(p=1/2\) (fitted for H\({}_{2}\)), one obtains the binding curve labeled "OA-PBE" in Fig. 1. Although \(N_{b}\) in Eq. (4) is itself a functional of the set of fragment densities, we find that the simple rule where \(N_{b}=1\) for single, 2 for double, and 3 for triple bonds, is adequate for preserving the description of PBE around equilibrium while correcting the PBE errors as these build up beyond the Coulson-Fisher point. Eq. (3-4) perform extremely well for other singly-bonded hydrocarbons (\(N_{b}=1\)) as well as doubly-bonded diazene (\(N_{b}=2\)) and triply-bonded nitrogen molecules (\(N_{b}=3\)), systems that are famously challenging for the standard DFAs in DFT [4], and also for the 'gold standard' of quantum chemistry methods, CCSD(T) (coupled-cluster with single and double and perturbative Figure 2: Using PBE for the fragments, Eq. (3-4) with \(C=2\) and \(p=1/2\) yield accurate binding energies (gray) when cutting through single (\(N_{b}=1\)), double (\(N_{b}=2\)), and triple (\(N_{b}=3\)) bonds. Comparisons are made with spin-restricted PBE (purple), CCSD(T) (red), or reference values from Ref. [32] (yellow). triple excitations) [33], see Fig. 2. The overlap functional as defined by Eq. (4) is inadequate for molecules that are even more'strongly-correlated' than N\({}_{2}\). To illustrate this point, consider the challenging case of the chromium dimer (Cr\({}_{2}\)). A quantitative description of the electronic structure of Cr\({}_{2}\) is a stringent test for any theory that attempts to capture strong correlations in molecules. Neither CASSCF nor CCSD(T) yield quantitative agreement for the ground-state energy of Cr\({}_{2}\) as a function of the inter-nuclear separation. As is well known, standard DFAs in KS-DFT are utterly inadequate to capture the multi-reference character of the ground state in stretched Cr\({}_{2}\). Only very recently has a truly _ab initio_ calculation been reported for Cr\({}_{2}\)[34]. Unsurprisingly, Fig. 3 shows that the OA-PBE of Eq. (3)-(4) with \(N_{b}=6\) (gray line in the **b** panel) does improve (red) but not enough to match the _ab initio_ results, as the inter-fragment interaction in Cr\({}_{2}\) is radically different than in the molecules of Fig. 2. Nevertheless, the yellow line in the **b** panel of Fig. 3 demonstrates that using \(p=0.8\) instead of \(p=0.5\) in the integrand of Eq. 
(4), and the value \(N_{b}=1.176\) in the denominator, one obtains quantitative agreement between the OA-PBE and the most accurate (but expensive) state-of-the-art _ab initio_ calculations for all inter-nuclear separations. The obvious question is how Figure 3: **a**: \(S(R)\) for Cr\({}_{2}\) obtained from: (i) Numerically exact results from ref. [34] (red, exact); (ii) Eq. (4) with \(p=1/2\), and \(N_{b}=6\) (gray, OA-PBE); and (iii) Fitted with \(N_{b}=1.176\), \(p=0.8\) (yellow, fitted-OA-PBE). **b**: Corresponding binding energies, where pure restricted-PBE has been included for comparison (purple). the overlap functional should be defined for the OA-PBE to become more generally applicable and predictive. The simplicity of Eq. (3) and of the forms we have used for \(S[\mathbf{n}]\) suggest that it should be possible to derive a more general functional for \(S[\mathbf{n}]\) from first principles. If one had access to the _exact_ XC functional, Eq. (3) with "PBE" replaced by "exact" would evidently hold with \(S=1\) for any \(R\). It is then useful to re-write Eq. (4) as: \(S_{\text{exact}}[\mathbf{n}]=E_{\text{xc,exact}}^{\text{nad}}[\mathbf{n}]/E_{ \text{xc,DFA}}^{\text{nad}}[\mathbf{n}]\), where \(E_{\text{xc,DFA}}^{\text{nad}}[\mathbf{n}]\) is the non-additive exchange correlation energy obtained through a self-consistent P-DFT calculation that uses a DFA for the XC energy functional. The overlap functional is thus approximating the missing (mostly non-local) effects in the DFA. With access to accurate total energies, one can extract \(E_{\text{xc,exact}}^{\text{nad}}[\mathbf{n}]\) by subtraction of the other components available from exact P-DFT calculations. The behavior of \(S_{\text{exact}}(R)\) can then be examined as illustrated for the case of Cr\({}_{2}\) in the \(\mathbf{a}\) panel of Fig. 3, where the total electronic energies reported in ref. [34] were used as input. The smoothness of \(S_{\text{exact}}(R)\) (see \(\mathbf{a}\) panel of Fig. 3), especially in regions where \(E(R)\) varies quite rapidly with \(R\) (see \(\mathbf{b}\) panel of Fig. 3) illustrates the usefulness of Eq. (3-4) for modeling the electronic structure of a strongly-correlated molecule, an indication that the path is worth pursuing further. Before moving on to considering the case of charge symmetry, we provide a proof-of-principle demonstration that the same idea of Eq. (3-4) can be extended to an arbitrary number of fragments \(N_{\text{frag}}>2\). When there are more fragments and P-DFT yields a set of densities \(\mathbf{n}=\{n_{1}(\mathbf{r}),n_{2}(\mathbf{r}),...,n_{N_{\text{frag}}}( \mathbf{r})\}\), Eq. (3) can be applied recursively as a nested version of the OA (NOA): Figure 4: The NOA of Eq. (5) corrects the PBE error for the binding energies in hydrogen chains as \(R\rightarrow\infty\) (here H\({}_{10}\), where the x-axis is the distance \(R\) between neighboring nuclei). The reference is Multireference Configuration Interaction taken from Ref. [35]. \[E_{\rm XC}^{\rm nad,NOA}[{\bf n}]=S[{\bf n}_{1\to m},{\bf n}_{m+1\to N_{\rm frag}}]E_{ \rm XC}^{\rm nad}[{\bf n}_{1\to m},{\bf n}_{m+1\to N_{\rm frag}}]+E_{\rm XC}^{\rm nad,NOA}[{\bf n}_{1\to m}]+E_{\rm XC}^{\rm nad,NOA}[{\bf n}_{m+1\to N_{\rm frag}}] \tag{5}\] where \({\bf n}_{a\to d}\) denotes the partial sum of fragment densities \(n_{a}({\bf r})+...+n_{d}({\bf r})\). 
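As a concrete illustration of Eqs. (4) and (5), the overlap factor and the nested recursion can be written down directly for fragment densities sampled on a common real-space grid. The sketch below is schematic: the quadrature weights `w`, the callable `exc_nad` (standing in for whatever DFA routine evaluates \(E_{\rm XC}^{\rm nad}\) of two density blocks), and the even bisection of the fragment list are illustrative choices, not part of the published implementation.

```python
# Sketch of the overlap factor of Eq. (4) and the nested OA of Eq. (5),
# assuming fragment densities n_k(r) are sampled on a common real-space
# grid with quadrature weights w.  `exc_nad` is a placeholder callable,
# not a real library API.
import numpy as np
from scipy.special import erf

def overlap_factor(nA, nB, w, C=2.0, p=0.5, Nb=1.0):
    """S[n] = erf( (C/Nb) * integral of (n_A n_B)^p ), Eq. (4)."""
    return erf((C / Nb) * np.sum(w * (nA * nB) ** p))

def exc_nad_noa(densities, w, exc_nad, C=2.0, p=0.5, Nb=1.0):
    """Nested overlap approximation of Eq. (5) by recursive bisection."""
    if len(densities) < 2:
        return 0.0
    m = len(densities) // 2
    left, right = densities[:m], densities[m:]
    nL, nR = sum(left), sum(right)
    S = overlap_factor(nL, nR, w, C, p, Nb)
    return (S * exc_nad(nL, nR)
            + exc_nad_noa(left, w, exc_nad, C, p, Nb)
            + exc_nad_noa(right, w, exc_nad, C, p, Nb))
```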
As for the case of binary fragmentation, this prescription preserves the results of the parent DFA at equilibrium separations and can improve the results when bonds are stretched. We have tested Eq. (5) on hydrogen chains with the overlap model of Eq. (4) and the results demonstrate that Eq. (5) corrects the errors of PBE as \(R\to\infty\) (see Fig. 4 for H\({}_{10}\), a well known test-bed for strongly-correlated systems [35]), although it overestimates the corrections needed in the intermediate range \(2.5<R<4\) bohr. More research into the form of \(E_{\rm XC}^{\rm nad}[{\bf n}]\) and its accompanying overlap measure is clearly needed. We now discuss _charge symmetry_, which is analogous to the case of spin symmetry but with an extra challenge. First, the analogy: As in the case of spin symmetry we have just discussed, the approximation chosen for \(E_{\rm XC}[n_{\uparrow},n_{\downarrow}]\) will typically lead to improved energies when charge symmetries are broken. The extra challenge: The charge-symmetry-broken solutions are typically _higher_ in energy that the charge-symmetric solutions, and will therefore not be found when searching for a minimum. In other words: The analog of spin-unrestricted calculations will not lead to improved energies. Take for example the case of stretched H\({}_{2}^{+}\) in Fig. 5, where PBE underestimates the dissociation energy by 70%. The PBE ground-state energy of an isolated hydrogen atom is only off by 0.08%, so the dissociation energy error can be attributed almost entirely in this case to the fact that the KS equations do _not_ break the charge symmetry of the ground state. How can one keep that symmetry _and_ correct the energy? Again, as before, we analyze the contributions to \(Q[{\bf n}]\) from the different KS components and find that, this time, the problem is not fixed by simply quenching \(E_{\rm XC}^{\rm nad}(R)\) for large \(R\) because, for any given \(R\), \(E_{\rm XC}^{\rm nad}(R)\) does not cancel \(E_{\rm H}^{\rm nad}(R)\) as it should for a 1-electron system (see panel **c** of Fig. 5). This _non-additive self-interaction error_ can be corrected by adding a term to \(E_{\rm XC}^{\rm nad}\) in Eq. (3): \[E_{\rm XC}^{\rm nad,OA}[{\bf n}]=S[{\bf n}]E_{\rm XC,PBE}^{\rm nad }+(1-S[{\bf n}])\Delta E_{\rm H}^{\rm nad} \tag{6}\] \[\Delta E_{\rm H}^{\rm nad}=g_{ij}\int\frac{n_{{\rm A},i}({\bf r} )n_{{\rm B},j}({\bf r}^{\prime})}{|{\bf r}-{\bf r}^{\prime}|}d{\bf r}d{\bf r} ^{\prime}-E_{\rm H}^{\rm nad}, \tag{7}\] where \(g_{ij}=1\) when \(N_{{\rm A},\sigma}+N_{{\rm B},\sigma}=N_{\sigma}\); and \(g_{ij}=0\) otherwise, implying the possible dissociation channels. With this choice of \(g_{ij}\), Eq. (6) reduces to Eq. (3) for closed-shell systems but improves over Eq. (3) for open shells, as shown by the gray line labeled 'OA-PBE' in Fig. 5 for \(\mathrm{H}_{2}^{+}\) and in ref. [36] for \(\mathrm{Li}_{2}^{+}\). Eq. (6-7), together with the model for \(S[\mathbf{n}]\) of Eq. (4) can even significantly improve the PBE binding energies of \(\mathrm{He}_{2}^{+}\), a challenging molecule for reasons other than delocalization and static-correlation [37]. The agreement between OA-PBE and CCSD(T) in this case is not quantitative (panel \(\mathbf{b}\) in Fig. 
5), but P-DFT calculations indicate that the main source of error belongs to \(T_{\mathrm{s}}^{\mathrm{nad}}(R)\), probably due to the numerical difficulties associated with finding a pure-state spin-density for such a stretched system [38; 39; 40], leading to a non-zero \(T_{s}^{\mathrm{nad}}(R)\) as \(R\rightarrow\infty\) (see panel \(\mathbf{d}\) of Fig. 5). However, the results labeled by 'kinOA-PBE' in panel \(\mathbf{b}\) of Fig. 5 demonstrate that this error can be suppressed almost entirely by multiplying \(T_{\mathrm{s}}^{\mathrm{nad}}\) by \(S[\mathbf{n}]\) and approximating \(Q[\mathbf{n}]\) as \(Q[\mathbf{n}]\approx S[\mathbf{n}]T_{\mathrm{s}}^{\mathrm{nad}}[\mathbf{n}] +E_{\mathrm{Hxc,PBE}}^{\mathrm{nad,OA}}[\mathbf{n}]\). _Summary and Outlook:_ By using molecular spin-densities \(\{n_{\uparrow}(\mathbf{r}),n_{\downarrow}(\mathbf{r})\}\) as the main variables in calculations, XC approximations are hard-pressed to describe the low-density inter-nuclear regions of molecules where correlation effects are relatively more important (compared to kinetic Figure 5: \(\mathbf{a}\): \(\mathrm{H}_{2}^{+}\) binding energies calculated with CCSD(T) (red), PBE (blue), and OA-PBE (gray). \(\mathbf{b}\): \(\mathrm{He}_{2}^{+}\) binding energies, including a kinOA calculation in which \(Q=ST_{\mathrm{s}}^{\mathrm{nad}}+E_{\mathrm{H}}+E_{\mathrm{xc,PBE}}^{\mathrm{ nad,OA}}[\mathbf{n}]\) (light blue). \(\mathbf{c}\),\(\mathbf{d}\): Components of \(E_{\mathrm{p}}\) showing that the PBE error can be attributed to a poor cancellation of errors between \(E_{\mathrm{xc}}^{\mathrm{nad}}\) and \(E_{\mathrm{H}}^{\mathrm{nad}}\). effects), so the XC approximations are in a sense blind to the formation of fragments when bonds are stretched. Methods including symmetry-breaking [17], self-interaction error corrections [41; 42; 43; 44; 45], range-separated functionals [46; 47; 48; 49; 50; 51], double hybrid functionals [52; 53], and scaling correction methods [54] are all among approaches that have been adopted to overcome such difficulties. Moreover, methods relying on the "on-top" pair density [55], complex orbitals [56], exact strong-interaction limit functionals [57], and fractional-spin localized orbital scaling corrections [58] can improve the accuracy for systems of the strongly-correlated type. Recognizing that such strongly-correlated systems are often composed of weakly overlapping fragments [59; 60], the central result of our work is that, when _fragment_ spin-densities are used as the main variables, the typical de-localization and static-correlation errors of approximate \(E_{\rm XC}[n_{\uparrow},n_{\downarrow}]\)[3; 4; 5; 6; 7; 8] can be largely avoided without having to abandon essential symmetries. This alternative strategy rests on maintaining the use of the same approximate \(E_{\rm XC}[\{n_{\alpha,\uparrow},n_{\alpha,\downarrow}\}]\)_within_ the fragments while introducing new inter-fragment approximations for \(E_{\rm XC}^{\rm nad}[{\bf n}]\). The latter is a functional of the set of fragment spin-densities \({\bf n}\) rigorously defined wihin P-DFT [21]. Two exact constraints satisfied by \(E_{\rm XC}^{\rm nad}[{\bf n}]\) were used in the construction of Eq. (6): (1) \(E_{\rm XC}^{\rm nad}[{\bf n}]\to 0\) as \(R\to\infty\), where \(R\) denotes the separation between fragments; and (2) \(E_{\rm XC}^{\rm nad}[{\bf n}]\to-E_{\rm H}^{\rm nad}[{\bf n}]\) for single-electron bonds. Eq. (6), and the accompanying model for \(S[{\bf n}]\) in Eq. 
(4) should be seen as initial attempts at approximating these quantities. Future approximations to \(E_{\rm XC}^{\rm nad}[{\bf n}]\) should incorporate more exact constraints. For example, how could Eq. (3-4) be improved to encompass van der Waals interactions [61; 62]? _Methods:_ All calculations were done using a P-DFT implementation in _Psi4_[63]. The cc-pVTZ basis set was used for all molecules in this work, except for the cases of H\({}_{10}\) and Cr\({}_{2}\), for which cc-pVDZ was used instead. P-DFT PBE calculations (without the OA) were checked to differ usually by less than 0.1 kcal/mol when compared to direct restricted-PBE results from KS-DFT with the same densities. The Wu-Yang algorithm [28] implemented on Gaussian basis sets in \(n2v\)[30] was used to calculate all \(T_{\rm s}[n_{\rm f}]\) components. The OA was performed as a post-P-DFT approximation, i.e. using the fragment densities yielded by P-DFT [36]. The exact OA functional \(S\) for Cr\({}_{2}\) is defined as \(E_{\rm XC,PBE}^{\rm nad}/(E_{\rm exact}-E_{\rm f}-T_{\rm s}^{\rm nad}-E_{\rm H}^{\rm nad})\) by assuming that the error is entirely contained in \(E_{\rm XC}^{\rm nad}\). Convergence of the partition potential \(v_{\rm p}\) was achieved in each case by updating it iteratively according to: \[v_{\rm p}^{k+1}=v_{\rm p}^{k}+\lambda\left(v_{\rm xc,PBE}[n_{\rm f}]-v_{\rm xc,inv}[n_{\rm f}]\right) \tag{8}\] for the \((k+1)^{\text{th}}\) step, where \(\lambda\) is the step size, \(v_{\text{xc,PBE}}[n_{\text{f}}]\) is the XC potential for the chosen XC approximation (we use PBE in this article), and \(v_{\text{xc,inv}}\) is the effective XC potential calculated from inversion, as defined by Eq. (11) below. The derivation of Eq. (8) is outlined next, omitting spin indices for simplicity. Start from the definition of \(v_{\text{p}}\)[21] as \(v_{\text{p}}=\frac{\delta E_{\text{p}}}{\delta n_{\alpha}}\), where \(n_{\alpha}\) is the density of fragment \(\alpha\). At convergence, the same \(v_{\text{p}}\) is shared by all fragments and is independent of the fragment index \(\alpha\). By separating \(E_{\text{p}}\) as suggested in the main text, \(v_{\text{p}}\) is decomposed as: \[v_{\text{p}}(\mathbf{r})=\frac{\delta T_{\text{s}}[n_{\text{f}}]}{\delta n_{\text{f}}(\mathbf{r})}-\frac{\delta T_{\text{s}}[n_{\alpha}]}{\delta n_{\alpha}(\mathbf{r})}+v(\mathbf{r})-v_{\alpha}(\mathbf{r})+v_{\text{H}}[n_{\text{f}}](\mathbf{r})-v_{\text{H}}[n_{\alpha}](\mathbf{r})+v_{\text{xc,PBE}}[n_{\text{f}}](\mathbf{r})-v_{\text{xc,PBE}}[n_{\alpha}](\mathbf{r}). \tag{9}\] Given the stationarity condition for the fragments at each step \(k\), \[\frac{\delta T_{\text{s}}[n_{\alpha}]}{\delta n_{\alpha}(\mathbf{r})}+v_{\alpha}(\mathbf{r})+v_{\text{H}}[n_{\alpha}](\mathbf{r})+v_{\text{xc,PBE}}[n_{\alpha}](\mathbf{r})+v_{\text{p}}=\mu_{\alpha}, \tag{10}\] as well as that for the entire system through inversion, \[\frac{\delta T_{\text{s}}[n_{\text{f}}]}{\delta n_{\text{f}}(\mathbf{r})}+v(\mathbf{r})+v_{\text{H}}[n_{\text{f}}](\mathbf{r})+v_{\text{xc,inv}}[n_{\text{f}}](\mathbf{r})=\mu, \tag{11}\] Eq. (8) follows by substituting (10) and (11) into (9) and omitting the chemical potentials, since \(\mu\) provides no energy contribution to the total energy.
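Schematically, the self-consistency loop implied by Eq. (8) can be organised as follows. The three callables below are placeholders for the actual machinery (the fragment KS solves, the DFA exchange-correlation potential, and a Wu-Yang-type density-to-potential inversion such as provided by \(n2v\)); they are not real library calls, and the loop is a sketch rather than the released implementation.

```python
# Schematic of the partition-potential update of Eq. (8).  The callables
# solve_fragments, vxc_dfa and vxc_inv are placeholders, not real APIs.
import numpy as np

def converge_vp(solve_fragments, vxc_dfa, vxc_inv, vp0,
                lam=0.1, tol=1e-6, max_iter=200):
    vp = vp0.copy()
    for _ in range(max_iter):
        n_f = solve_fragments(vp)            # sum of fragment densities for this v_p
        dv = vxc_dfa(n_f) - vxc_inv(n_f)     # v_xc,PBE[n_f] - v_xc,inv[n_f]
        vp = vp + lam * dv                   # Eq. (8)
        if np.max(np.abs(dv)) < tol:
            break
    return vp
```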
2304.04310
Pressure-induced formation of cubic lutetium hydrides derived from trigonal LuH$_3$
In recent years, there has been a fervent search for room-temperature superconductivity within the binary hydrides. However, as the number of untested compounds dwindled, it became natural to begin searching within the ternary hydrides. This led to the controversial discovery of room-temperature superconductivity at only 1 GPa in nitrogen-doped lutetium hydride [Dasenbrock-Gammon et al., Nature 615, 244 (2023)] and consequently provided much impetus for the synthesis of nitrogen-based ternary hydrides. Here, we report the synthesis of stable trigonal LuH$_3$ by hydrogenating pure lutetium, which was subsequently pressurised to $\sim$2 GPa in a dilute-N$_2$/He-rich pressure medium. Raman spectroscopy and X-ray diffraction were used to characterise the structures throughout. After depressurising, energy-dispersive and wavelength-dispersive X-ray spectroscopies characterised the final compound. Though our compound under pressure exhibits similar structural behaviour to the Dasenbrock-Gammon et al. sample, we do not observe any nitrogen within the structure of the recovered sample at ambient pressure. We observe two cubic structures under pressure that simultaneously explain the X-ray diffraction and Raman spectra observed: the first corresponds well to $Fm\overline{3}m$ LuH$_{2+x}$, whilst the second is an $Ia\overline{3}$-type structure.
Owen Moulding, Samuel Gallego-Parra, Yingzheng Gao, Pierre Toulemonde, Gaston Garbarino, Patricia De Rango, Sébastien Pairis, Pierre Giroux, Marie-Aude Méasson
2023-04-09T20:33:58Z
http://arxiv.org/abs/2304.04310v3
Trigonal to cubic structural transition in _possibly N-doped_ LuH\({}_{3}\) measured by Raman and X-ray diffraction ###### Abstract After the reported discovery of room-temperature superconductivity at only 1 GPa in nitrogen-doped lutetium trihydride [1] and the resulting heated discussions, there is an urgent need to reproduce the results and synthesis of this compound. Here, we report the synthesis of (potentially N-doped) cubic LuH\({}_{3}\) starting from pure Lu to produce very stable trigonal LuH\({}_{3}\) which was subsequently pressurised to \(\sim\)2 GPa with a dilute N\({}_{2}\)/He rich pressure medium. Raman spectroscopy and X-ray diffraction were used to characterise the structure throughout the synthesis process. ## I Introduction The Holy Grail of room-temperature superconductivity has been a long sought-after quest ever since the initial predictions of superconductivity in metallic hydrogen by Ashcroft in 1968 [2] shortly after the publication of BCS theory 1957 [3; 4]. Though not pure hydrogen, many examples of high-temperature superconductivity have been realised in recent years which have been reliably shattering high-\(T_{c}\) records with each new discovery. A notable discovery was the superconductivity in SH\({}_{3}\) at 203 K and 155 GPa [5] as it provided promise for the field. Subsequent examples continued to push the threshold with the discovery of superconductivity in YH\({}_{9}\) and LaH\({}_{10}\) at 243 K and 260 K respectively both at approximately 200 GPa [6; 7; 8]. Clearly these syntheses require extremely high pressures that few groups are able to reach, and this has been the primary technical challenge to overcome. Hence why the very recent claim of room-temperature superconductivity (294 \({}^{\circ}\)K in N-doped lutetium hydride and at such a low pressure of 1 GPa [1] has drawn so much attention, as not only it is a new record \(T_{c}\) for superconductivity, but it also brings superconductivity into the domain of practicably achievable at near-ambient conditions. Furthermore, the samples are said to be metastable at ambient pressure which further adds to the wishful properties of such a material. In a very short period of time, an impressive number of groups have already tried to replicate the results both theoretically and experimentally [9; 10; 11; 12; 13; 14; 15; 16; 17]. Two experimental papers started from pure Lu and NH\({}_{4}\)Cl and CaH\({}_{2}\) precursors [14; 15] which decompose to provide the required N\({}_{2}\) and H\({}_{2}\). Dasenbrock _et al._[1] and a very recent report by Cai _et al._[18] used pure Lu with H\({}_{2}\)/N\({}_{2}\) gas mixture loading. Here we choose another process, by first synthesising pure LuH\({}_{3}\) and then load it with a mixture of dilute N\({}_{2}\) gas and helium in a diamond anvil cell (DAC). We then characterise the obtained compound thanks to Raman spectroscopy and X-ray diffraction (XRD). ## II Methods ### Experimental Methods Lutetium (Alfa 3N) was characterised by EDX before polishing it, whereupon oxygen was clearly identified in Lu\({}_{2}\)O\({}_{3}\) deposits with atomic concentrations between 20-50 %. A small amount of tantalum (atomic concentrations between 0.2-1 %) was identified (Cf. SI). We then polished a 150 mg piece of lutetium in air until the surface became shiny instead of black in order to remove the oxide from the surface. We then synthesised LuH\({}_{3}\) by hydrogen absorption using the Sievert method. 
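As a quick stoichiometric check, the expected mass uptake for full conversion of Lu into LuH\({}_{3}\) follows from the atomic masses (\(m_{\rm Lu}\approx 174.97\) u, \(m_{\rm H}\approx 1.008\) u): \[\frac{3\,m_{\rm H}}{m_{\rm Lu}}=\frac{3\times 1.008}{174.97}\approx 1.7\,\%\ \text{by mass},\] consistent with the hydrogen uptake measured below.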
We used a HERA C2-3000 device to measure the quantity of hydrogen absorbed (or desorbed) by the piece of lutetium as a function of time. This is calculated by measuring the hydrogen pressure variation in a sample holder of known volume. The measurement of the hydrogenation rate is performed out of equilibrium. A piece of polished lutetium (147.67 mg) was placed in the sample-holder of the reaction chamber. The sample-holder and compensation chambers were pumped for one hour at ambient temperature to remove contaminating gases. The temperature was then increased to a maximum temperature of 500\({}^{\circ}\)C at 10\({}^{-5}\) mbar and kept stable during 4000 s. The temperature was then decreased to 200 \({}^{\circ}\)C, and H\({}_{2}\) gas at 4 MPa was injected into the chamber. The percentage of absorbed H\({}_{2}\) was measured to be 1.7 % in mass which corresponds to the expected composition LuH\({}_{3}\), as shown in figure 1. After the synthesis, the sample-holder was closed and transferred into a glove box (under argon) where it was opened to recover the LuH\({}_{3}\) powder. A thin sample of LuH\({}_{3}\) was prepared in a diamond anvil cell (DAC with cules of 800 \(\mu\)m diameter) by pressing the synthesised powder between the two diamonds until the sample was \(\sim\) 5-10\(\mu\)m thick. A stainless steel gasket was indented to a thickness of 80 \(\mu\)m and a hole of 400 \(\mu\)m was drilled for the pressure chamber. A ruby sphere and a small piece of silicon were placed inside the pressure chamber. Prior to loading the DAC, the LuH\({}_{3}\) sample was characterised by Raman spectroscopy inside the unloaded DAC. We then used a gas loader (Top Industrie) to load a mixture of nitrogen and helium. After purging with helium, the system was filled with 10 bar of N\({}_{2}\) and then 1500 bar of helium. We estimate that the quantity of N\({}_{2}\) in the pressure chamber was 4 nmol whilst the quantity of LuH\({}_{3}\) was 11 nmol. The DAC was then sealed at 0.1 GPa and then we applied 1.92 GPa and proceeded to characterise the sample by Raman spectroscopy and XRD. X-ray powder diffraction of the starting LuH\({}_{3}\) was performed after the hydrogenation of Lu using a D5000T diffractometer (Cu K\(\alpha\) radiation), at ambient pressure (and outside the DAC). The measurement was repeated several times (up to 9 days after the first measurement) to determine the effect of air exposure on LuH\({}_{3}\). The Rietveld refinements were done with Fullprof software. The X-ray powder diffraction after loading and at 1.9 GPa in DAC was performed on the ESRF beamline ID15B at \(\lambda\)=0.411 A. Calibration of the detector-to-sample distance, beam orientation, detector tilt with respect to the omega rotation axis, and the used wavelength is determined by a Si powder standard ('NIST 640 C' from NIST). The X-ray beam is focused to 4 x 3\(\mu\)m\({}^{2}\) using Be compound refractive lenses. 2D images were collected with a six degrees oscillation of the DAC using an Eiger 2X CdTe 9M photon counting detector from Dectris and integrated into 1D diffractogram using the Dioptas software [19]. Le Bail refinements (lattice parameter, peak profile and background) on the loaded DAC at 2 GPa were done using the gsas2 package [20]. Polarised Raman scattering was performed in quasi-backscattering geometry at 300 K with an incident laser-line at 532 nm from a solid state laser. The DAC was placed in a vacuum to avoid measuring the Raman-response of air. 
We used a laser power between 2.5mW-10mW with a typical size spot of 25 \(\mu\)m. The scattered light was analysed by a single grating and a triple grating subtractive spectrometer, both were equipped with liquid nitrogen-cooled CCD detectors. The crossed and parallel polarisation dependence was measured by changing the orientation of the polariser on the collection path. We measured the Raman signal of pure LuH\({}_{3}\) just before loading in the DAC and after loading at 1.9 GPa in the same DAC. In the following, we will consider various space groups: cubic \(Fm\overline{3}m\), trigonal \(P\overline{3}c1\), tetragonal \(P\overline{4}m2\); and additionally hexagonal \(P6_{3}cm\), \(P6_{3}\), and \(P6_{3}/mmc\), and \(Ia\overline{3}\) for Lu\({}_{2}\)O\({}_{3}\) impurity. The groups respectively correspond to O\({}_{h}\), D\({}_{3d}\), D\({}_{2d}\), C\({}_{6v}\), C\({}_{6}\), and D\({}_{6h}\) point groups for LuH\({}_{3}\)-based compound and \(T_{h}\) for Lu\({}_{2}\)O\({}_{3}\). The polarised Raman response was measured in crossed and parallel configurations on non-crystalline samples. The only clear expected selection rules concern the \(A\) modes that are mainly Raman active in parallel polarisations (except for C\({}_{6}\)). ## III Experimental results ### Imaging of the sample Images of the sample in the DAC before (300 K, 1bar) and after (300 K, 1.92 GPa) loading are presented in Fig. 2. A white light was used to illuminate the sample in reflection and in transmission. A ruby ball and a piece of silicon were placed inside the pressure chamber. The sample appears translucent with a red color at 1 bar and seems to become opaque at high pressure; however, this could be due to the majority of the sample raising up off of the diamond during loading. After loading with the mixture of He/N\({}_{2}\) and pressurising to 1.92 GPa, the surface becomes reflective and blue. On the image c), we can also see a red region. This part remained flat against the diamond and Raman spectroscopy on it, so performed after loading and at 1.92 GPa, presents trigonal structure of LuH\({}_{3}\). ### X-ray diffraction Fig 3 presents the Rietveld fit of the XRD pattern measured on the trihydride, which we kept protected from the air between two pieces of cellotape. We identify the structure to be trigonal \(P\overline{3}c1\) as expected for LuH\({}_{3}\) at ambient pressure. The lattice parameters are a=6.1680(8)A and c=6.422(1)A which a yield Lu-Lu distance in the range Figure 1: Synthesis of LuH\({}_{3}\) from lutetium: weight percentage of absorbed H\({}_{2}\) by lutetium as a function of time. After 2 hours at 200 \({}^{\circ}\)C, 1.7% of absorbed H\({}_{2}\) is reached showing the successful synthesis of LuH\({}_{3}\). of 3.537(10)-3.828(6) A. For comparison, Tkacz et al. [21] analysed that LuH\({}_{3}\) has a hexagonal (\(P6_{3}/mmc\)) phase with a= 3.57\(\pm\)0.01 A and c = 6.41\(\pm\)0.02 A at ambient conditions. This yielded a Lu-Lu distance of 3.570 A. The powder is primarily LuH\({}_{3}\) at 96.9(1)%, and the rest was identified to be impurities of Lu\({}_{2}\)O\({}_{3}\). This is likely to originate from the deposits on the lutetium surface that were not removed before hydrogenation. The space group of Lu\({}_{2}\)O\({}_{3}\) is \(Ia\overline{3}\) and the refined lattice parameter is 10.380(8) A in agreement with the literature [22]. After the first XRD measurement, we left a fraction of the powder exposed to air and measured the XRD periodically over the course of 9 days to check its stability. 
The rest of the powder was immediately stored under vacuum or in an argon glove box. As shown Fig.3 b), the oxide Lu\({}_{2}\)O\({}_{3}\) content is the same within the error bar, i.e. 3.4(1)% for 3.2(1)% before. X-ray diffraction in the loaded DAC at 1.9 GPa is shown in figure 4. It was measured at the ID15B line of the ESRF in five different spots with a size of 4x3 \(\mu\)m and separated by 20 \(\mu\)m in a cross-shape. The results on the different spots are remarkably similar and indicate that the sample is homogeneous in this region. From the XRD results, the transformation to a cubic phase is clear. More precisely, a first refinement at 1.9 GPa yielded two different cubic phases with fcc \(Fm\overline{3}m\) structure and lattice parameters of a\({}_{2}\)=4.9800(2) A (75-80% of the sample) and a\({}_{1}\)=5.128(1) A(20-25% of the sample). Further refinements considering single but more complex space groups are ongoing. Tkacz et al. [21; 23] reported a structural transition of LuH\({}_{3}\) at room temperature from hcp to fcc at a moderate pressure of \(\sim\) 12 GPa, but much above our present pressure. The transition to a cubic phase is unlikely due to pure LuH\({}_{3}\) transformation. It was previously shown in YH\({}_{3}\) that a similar transition from hcp to fcc could be induced via mechanical milling [24]. We are however confident that our thinning process of repeatedly compressing the LuH\({}_{3}\) between the diamonds does not have a similar effect as milling. Indeed, Raman spectroscopy after the thinning process clearly showed trigonal hcp structure. The transformation to cubic happened only after the loading and application of 2 GPa. We will now compare our XRD results with known phases in the Lu-H-N landscape at room temperature and \(\sim\) 2 GPa. It consists of \(Fm\overline{3}m\) ammonia NH\({}_{3}\)[25; 26], fcc CaF\({}_{2}\)-type LuH\({}_{2}\) (\(Fm\overline{3}m\)), fcc rock-salt (NaCl-type \(B_{1}\) Figure 3: Powder X-ray diffraction on the trigonal LuH\({}_{3}\) (a) sample kept in glove-box and sealed between two pieces of tape before and during the measurement. (b) After up to 9 days of exposure to air. The quantity of impurity Lu\({}_{2}\)O\({}_{3}\) did not significantly change with time. Figure 2: Image of the sample before (a and b) and after (c and d) loading; in transmission (a and c) and in reflection (b and d), using a white light. \(Fm\overline{3}m\)) LuN, hexagonal LuH\({}_{x}\) (\(P6_{3}/mmc\)), oxide Lu\({}_{2}\)O\({}_{3}\) (\(Ia\overline{3}\)). Firstly, ammonia is unlikely to be detected here by XRD so none of the phases can originate from ammonia. Nevertheless phonons of ammonia could be present in the Raman spectra; this will be discussed in the next section. LuN has a lattice parameter of a=4.7563(4) A [27] at ambient conditions which is then unlikely to explain one of the two phases. The lattice parameters of \(Fm\overline{3}m\) LuH\({}_{2}\) is reported to be a=5.033 A at ambient conditions [14; 28; 29]. This phase could possibly explain the XRD pattern of the phase with the smallest lattice parameter. LuH\({}_{x}\) with various concentration up to x=0.2 are reported to be hexagonal \(P6_{3}/mmc\) with a=3.52 A and c=5.6 A [30; 31]. Our attempt to reproduce any of the two phases by this hexagonal compound failed. Note that above x=0.2, cubic LuH\({}_{2}\) is synthesised. Lin et at. [22] report that Lu\({}_{2}\)O\({}_{3}\) transforms to monoclinic structure under high pressure, above 17 GPa. 
As we have shown, this cubic phase impurity is minor in our XRD pattern at ambient pressure, even less than 3% at the ESRF beamline measurement. Besides, as will be explained later, we do not see any Raman signature of this phase so we rule out the possibility that Lu\({}_{2}\)O\({}_{3}\) is responsible for one of the two cubic phases observed at 1.9 GPa. We now compare our XRD results with the published ones by Dasenbrock _et al._. After high-pressure synthesis and releasing the pressure, Dasenbrock _et al._ determined the ambient pressure lattice parameters of two distinct \(Fm\overline{3}m\) phases (named A and B) to be a\({}_{A}\)=5.0298(4) A and a\({}_{B}\)=4.7529(9) A[1]. Similar XRD patterns are obtained by us, i.e. two cubic phases with slightly different lattice parameters. Nevertheless, we note that our majority phase is the one with a smaller lattice parameter. More surprisingly, the extracted lattice parameters for both of our phases are larger than the ones of Dasenbrock _et al._, despite our compound being under higher pressure. A tempting explanation might rely on the synthesis process which, starting from pure LuH\({}_{3}\), would tend to produce compounds with higher hydrogen content, near to the trihydride. Unfortunately, Powder XRD is largely insensitive to lighter atoms like hydrogen (and nitrogen). In our case, this means that XRD determines the sublattice of the heavy lutetium. Raman spectroscopy is then complementary as a tool since although it does not directly provide numerical structural values, the excitations depend on the global symmetry of the lattice and the occupied Wyckoff positions. These excitations manifest themselves as phonons that can originate from light or heavy atoms. Thus the number of observable phonons can be used to predict viable global symmetries of the crystal. ### Raman spectroscopy We first recall the nature of the \(\Gamma\)-point phonons expected in the various space groups under consideration. From the literature on YH\({}_{3}\) and LuH\({}_{3}\), [23; 32; 33] the crystal structure could correspond to \(Fm\overline{3}m\) or \(P\overline{3}c1\) space groups. Additional possibilities such as \(P6_{3}cm\), \(P6_{3}\), and \(P6_{3}/mmc\) are discussed in the literature on YH\({}_{3}\)[33; 34]. The Wyckoff positions and the Raman active modes for each space group are written in table 1, and from these we expect a total of 5A\({}_{1g}\)+ 12E\({}_{g}\) Raman active phonon modes in the trigonal \(P\overline{3}c1\) phase, or a single Raman-active T\({}_{2g}\) in the cubic structure of LuH\({}_{3}\). In the latter structure, three infrared-active modes T\({}_{1u}\) are predicted which could become Raman-active with enough disorder Figure 4: (a) X-ray diffraction pattern on the synthesised (N-doped) LuH\({}_{3}\) at 300 K and 1.92 GPa measured at the ESRF (beamline ID15B). Inset: XRD results on the 5 different spots. (b) XRD pattern of the trigonal LuH\({}_{3}\) powder measured on the same beamline at ambient pressure. []. In the cubic phase, the T\({}_{2g}\) mode is associated with the displacement of the hydrogen atoms which occupy the 8c sites. Fig 5 shows the polarisation dependent Raman spectra of the ambient pressure trigonal LuH\({}_{3}\) below 955 cm\({}^{-1}\); at higher energies we do not identify any excitations that clearly originate from LuH\({}_{3}\). Within the aforementioned range, we observe 13 features (marked by arrows) which could account for most of the expected 17 phonons for the \(P\overline{3}c1\) trigonal structure. 
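To make the comparison between candidate cubic phases concrete, the short sketch below converts a cubic lattice parameter into the \(2\theta\) positions expected for the low-order fcc reflections at the beamline wavelength of 0.411 Å via Bragg's law; it is an illustrative aid rather than part of the analysis pipeline.

```python
# Expected low-order fcc reflections for a cubic lattice parameter `a`
# (in Angstrom) at the ID15B wavelength of 0.411 Angstrom, via Bragg's law.
import numpy as np

def fcc_two_theta(a, wavelength=0.411,
                  hkls=((1, 1, 1), (2, 0, 0), (2, 2, 0), (3, 1, 1))):
    out = {}
    for h, k, l in hkls:
        d = a / np.sqrt(h**2 + k**2 + l**2)                  # cubic d-spacing
        out[(h, k, l)] = 2 * np.degrees(np.arcsin(wavelength / (2 * d)))
    return out

# Comparing fcc_two_theta(5.128) with fcc_two_theta(4.980) shows how far
# apart the (111) reflections of the two refined phases sit in 2-theta.
```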
Overall, we do not observe any significant differences between the different polarisations. The inset shows the low energy spectra down to 20 cm\({}^{-1}\) where we do not see any more notable features. Figures 6.a and 6.b show wide-range Raman spectra on the ambient pressure trigonal LuH\({}_{3}\) and the high-pressure structure of (N-doped) LuH\({}_{3}\). Firstly, we note that all of the phonons observed at ambient pressure have disappeared which indicates a structural transition. In particular, for the cubic \(Fm\overline{3}m\) structure we only expect one Figure 5: Raman susceptibility of LuH\({}_{3}\) at 300 K and 1 bar, measured in the unloaded DAC in cross and parallel configurations. Arrows point to the phonons. Below 175 cm\({}^{-1}\), data are measured in triple-stage with a 2400gr/mm grating and are scaled to overlay with the 2400gr/mm single-stage data measured at higher energy. The inset shows the unscaled triple-stage data at low energy. The raw data at ambient pressure from ref [1] are shown in grey and are scaled to aid comparison. Figure 6: Raman spectra of a) trigonal LuH\({}_{3}\) in the unloaded DAC and b) Synthesised (N-doped) LuH\({}_{3}\) in the loaded DAC at 1.92 GPa. Data on two different spots are reported which are shown in the inset image (for additional data Cf. SI). Below 175 cm\({}^{-1}\), triple-stage data are overlaid on the high-energy spectra. c) For comparison, the raw Raman spectra of part \(A\) of the sample from Dasenbrock _et al._[1] at ambient pressure and at 2.17 GPa are presented. d) and e) Scaled data on the peak at 250 cm\({}^{-1}\) with background correction, to help the comparison, at \(\sim\) 2GPa from our results and from Dasenbrock _et al._[1]. Scaling is the same in d) and e). \begin{table} \begin{tabular}{c|c c c c} \hline \hline & \(P\overline{3}c1\) & \(Fm3\overline{m}\) & \(P6_{3}cm\) & \(P6_{3}\) & \(P6_{3}/mme\) \\ \hline Lu & 6\(f\) & 4\(a\) & 6c & 6c & 2\(c\) \\ H & 2\(a\) & 8c & 6c X 2 & 6c X 2 & 2\(a\) \\ H & 4\(d\) & 4\(b\) & 4\(b\) & 2\(b\) X 2 & \\ H & 12\(g\) & & 2\(a\) & 2\(a\) & \\ \hline & 5A\({}_{1g}\) + & & 7A\({}_{1}\) + & 11A+ & A\({}_{1g}\) + \\ & 12E\({}_{g}\) & 1 T\({}_{2g}\) & 11E\({}_{1}\)+12E\({}_{2}\) & 11E\({}_{1}\)+12E\({}_{2}\) & E\({}_{1g}\)+2E\({}_{2g}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Wyckoff positions of the specified atoms within the considered space groups for LuH\({}_{3}\) compound. Below are the total number of Raman active modes for each space group. Raman-active mode for the more symmetric structure, hence the loss of modes is consistent. We also observe a large increase in the background by a factor of \(\sim 10\) which seems to be intrinsic as we observe it in multiple regions of the sample, as shown in figure 6.b. Despite this, we observe a couple of features at high pressure. Most notably, we observe two peaks that consistently appear at approximately 1240 cm\({}^{-1}\) and 250 cm\({}^{-1}\) which were not present at ambient pressure. We note that the large increase of the background Raman response does not apply to the peaks, i.e. the mode at 1240 cm\({}^{-1}\) has similar intensity (5-10 counts/s.mW) as the mode at 550 cm\({}^{-1}\) at 0 GPa. This tends to show that the change in the Raman response when applying pressure might be of electronic origin. In the cubic phase of LuH\({}_{3}\), a single T\({}_{2g}\) Raman active mode is expected. An additional IR-active T\({}_{1u}\) is predicted at lower energy. Sun et al. 
calculate the phonon spectrum of cubic LuH\({}_{3}\) at 25 GPa [12], and although that pressure differs from the one considered here, they predict a high-energy mode at 1334 cm\({}^{-1}\) and another mode at lower energy, 467 cm\({}^{-1}\) (14 THz). Though higher in energy, the predicted high-energy mode probably corresponds to the phonon we observe at 1240 cm\({}^{-1}\); this in turn corresponds to the Raman-active T\({}_{2g}\) mode that we expect for this structure based on other work on rare-earth trihydrides. The 14 THz mode could correspond to our mode at 250 cm\({}^{-1}\); however, the energies are significantly different, and the remaining phonon modes for this cubic structure are T\({}_{1u}\), which are Raman-inactive. It is possible that these become Raman-active if a sufficient amount of disorder is present in the system [35]. Most importantly, we observe other excitations at energies below 250 cm\({}^{-1}\) that shall be discussed shortly. For completeness, we shall discuss potential impurity phases. We recall that Lu\({}_{2}\)O\({}_{3}\) remains cubic up to 12 GPa [22; 36]. The most intense Raman-active mode is measured at 390 cm\({}^{-1}\) at ambient pressure and hardens slightly up to 400 cm\({}^{-1}\) at 2 GPa [36]. All the other modes are much less intense. Therefore we are confident that our Raman-active modes are not due to Lu\({}_{2}\)O\({}_{3}\) impurities at either ambient or high pressure. Yang _et al._ [28] calculated the Raman-active mode of LuH\({}_{2}\) to be a single T\({}_{2g}\) mode between 960 and 1170 cm\({}^{-1}\) at ambient pressure. We cannot rule out that the LuH\({}_{2}\) phase explains our high-energy Raman mode, but such a scenario would be unlikely. Indeed, we know that we loaded trigonal LuH\({}_{3}\). Thus, forming a measurable quantity of LuH\({}_{2}\) would require a significant change in chemical composition, and such a transformation depends on the thermodynamic landscape between the two compositions, both of which are stable [10; 12; 17; 37]. Overall, this seems like an unlikely scenario. Besides, it is clear that LuH\({}_{2}\) cannot explain the low-energy modes. At room temperature and 2 GPa, pure N\({}_{2}\) may form either a fluid or a solid \(\beta\) phase. The \(\beta\)-phase crystallises in the \(P6_{3}/mmc\) space group [38]. A single mode is expected at \(\sim\) 2330 cm\({}^{-1}\), which we observe as a narrow peak in this range of energy. N\({}_{2}\) gas has a similar vibron mode at high energy but also other peaks at low energies, below 150 cm\({}^{-1}\) [39]. Some of the modes we measured might be ascribed to N\({}_{2}\) gas, but not the ones at 195 cm\({}^{-1}\) and 166 cm\({}^{-1}\), nor, clearly, our dominant modes at 1240 cm\({}^{-1}\) or 250 cm\({}^{-1}\). At 2 GPa and ambient temperature, ammonia NH\({}_{3}\) is expected to be in the \(Fm\overline{3}m\) space group or still fluid. Raman scattering under pressure [26] shows that only modes at energies higher than 3100 cm\({}^{-1}\) are measured; the spectra are similar in the liquid phase. So we exclude ammonia as being responsible for the Raman modes we measure. The primary nitride that could form is LuN with a \(Fm\overline{3}m\) rock-salt (RS) structure. This is in principle Raman-inactive as only the 4a and 4b sites are occupied. Despite this, a strong excitation was observed at 582 cm\({}^{-1}\), but this was ascribed to strong disorder [35], and we do not identify such a mode in our Raman data.
Note that the synthesis of the LuN compound is challenging and previously required heating pure lutetium and nitrogen at 1600 °C [27]. Since we have not laser heated our sample, we do not expect the formation of this compound. Finally, we compare our spectra to the raw data of Dasenbrock et al. [1] (part A of the sample), as shown in Fig. 5 at ambient pressure. Their sample was measured after high-pressure synthesis; therefore, all comparisons are against the reported cubic, \(Fm\overline{3}m\), N-doped LuH\({}_{3}\). We note that very few peaks agree with our data, such as the ones at 100 cm\({}^{-1}\) and 148 cm\({}^{-1}\) as shown in the inset. Most notably, the intense broad peak at 550 cm\({}^{-1}\) of trigonal LuH\({}_{3}\) is absent in the Dasenbrock et al. data. Therefore we conclude that it is unlikely that trigonal LuH\({}_{3}\) is present in the Dasenbrock sample. Nevertheless, when we compare our pressurised samples against one another (figures 6.b and 6.c) we see that our high-energy peak at 1240 cm\({}^{-1}\) is clearly present in the Dasenbrock sample. We also see that when we scale the two datasets on the 250 cm\({}^{-1}\) peaks (figure 6.d), the 1240 cm\({}^{-1}\) modes have the same shape and energy. Besides, still with the same scaling, we also see several peaks at low energy with similar intensities that are shared by both samples. The energies of some modes are similar (Cf. SI) and some are different (Cf. SI). It is then possible that we synthesised a sample similar to that of Dasenbrock et al., with a slightly different stoichiometry. The origin of these low-energy Raman peaks is unclear. Dasenbrock _et al._ reported that the primary impurity phase could be LuN in either an RS structure or a zinc-blende (ZB) structure. As discussed earlier, the RS structure is in principle Raman-inactive and is synthesised at high temperatures. The \(F\overline{4}3m\) ZB structure has occupied 4a and 4c sites that contribute one T\({}_{2}\) Raman-active phonon each. This structure has been predicted to form under pressures above 250 GPa [40; 41], and experimentally the RS structure has been shown to form preferentially when synthesised at 30 GPa and 2000 K [42]. Overall we do not expect either structure of LuN to be present, and even if they were, neither structure could explain the vast number of low-energy peaks in the Dasenbrock sample and in our sample at 2 GPa.

## IV Discussion

The low-energy Raman modes in Dasenbrock _et al._ and in our sample at 2 GPa appear to be a fundamental puzzle. Indeed, they cannot be explained by the known structures in this temperature-pressure region, especially since the Lu sublattices were identified to be cubic by XRD. Nor can these modes be explained by the primary impurity phases, as discussed above. As stated earlier, X-rays preferentially scatter off heavy atoms and interact far less with light atoms like hydrogen and nitrogen compared to lutetium. As clearly stated by Hilleke _et al._ [17], the fcc sublattice of the lutetium atoms provides a constraint around which we should search, but it does not necessarily describe the entire structure. The Raman spectra clearly show this based on the sheer number of low-energy peaks that were observed previously [1; 10] as well as by us. One way to observe a large number of phonon modes from a structure with a global \(Fm\overline{3}m\) symmetry is if the lower-symmetry sites (24e, 32f, 48g...) become occupied. Another way is that the overall symmetry distorts whilst the fcc lutetium sublattice is preserved.
This is indeed possible if substitutions between hydrogen and nitrogen atoms occur; consequently, the space group will have to be revisited. Further work, both experimental and theoretical, is necessary to unveil this space group.

## V Conclusion

We synthesised a compound, which presents structural similarities with the sample of Dasenbrock et al. [1], starting from pure trigonal LuH\({}_{3}\) loaded in a DAC with a mixture of N\({}_{2}\)/He gas. From X-ray diffraction, we clearly see a structural transformation from trigonal to cubic under pressure. Similarly, with Raman spectroscopy we observe the loss of the modes associated with the trigonal structure and see the appearance of a strong mode at 1240 cm\({}^{-1}\) that we associate with the T\({}_{2g}\) Raman-active mode of the cubic structure. However, we (and others) observe more excitations than are possible for a simple cubic structure. Overall, we believe that it is unlikely that these excitations come from impurity phases, since they are either not visible in XRD, chemically unlikely to form, or their excitations simply do not occur in the relevant energy range. Thus we conclude that, although the sublattice of the lutetium atoms is cubic, with potentially two cubic phases in the sample, this does not describe the global symmetry of the entire structure as a whole. It is then possible that a compound with substituted H-N atoms was formed, which could explain both the XRD and Raman results. Whether the compound we synthesised is a superconductor remains to be studied.

We thank Sebastian Pairis for EDX measurements, and Abdellali HADJ-AZZEM and Elise PACHOUD for lutetium preparation. We thank Laetitia Laversenne for fruitful discussions and Eva Zurek for stimulating exchanges of information.

## Competing interests

The authors declare no competing interests.
2307.02231
Tit-for-Token: Understanding Fairness when Forwarding Data by Incentivized Peers in Decentralized Storage Networks
Decentralized storage networks offer services with intriguing possibilities to reduce inequalities in an extremely centralized market. The challenge is to conceive incentives that are fair in regard to the income distribution among peers. Despite many systems using tokens to incentivize forwarding data, like Swarm, little is known about the interplay between incentives, storage-, and network-parameters. This paper aims to help fill this gap by developing Tit-for-Token (Tit4Tok), a framework to understand fairness. Tit4Tok realizes a triad of altruism (acts of kindness such as debt forgiveness), reciprocity (Tit-for-Tat's mirroring cooperation), and monetary rewards as desired in the free market. Tit4Tok sheds light on incentives across the accounting and settlement layers. We present a comprehensive exploration of different factors when incentivized peers share bandwidth in a libp2p-based network, including uneven distributions emerging when gateways provide data to users outside the network. We quantified the Income-Fairness with the Gini coefficient, using multiple model instantiations and diverse approaches for debt cancellation. We propose regular changes to the gateway neighborhood and show that our shuffling method improves the Income-Fairness from 0.66 to 0.16. We quantified the non-negligible cost of tolerating free-riding (altruism). The performance is evaluated by extensive computer simulations and using an IPFS workload to study the effects of caching.
Vahid Heidaripour Lakhani, Arman Babaei, Leander Jehl, Georgy Ishmaev, Vero Estrada-Galiñanes
2023-07-05T12:22:53Z
http://arxiv.org/abs/2307.02231v2
Tit-for-Token: Understanding Fairness when Forwarding Data by Incentivized Peers in Decentralized Storage Networks ###### Abstract Decentralized storage networks offer services with intriguing possibilities to reduce inequalities in an extremely centralized market. The challenge is to conceive incentives that are fair in regard to the income distribution among peers. Despite many systems using tokens to incentivize forwarding data, like Swarm, little is known about the interplay between incentives, storage-, and network-parameters. This paper aims to help fill this gap by developing Tit-for-Token (Tit4Tok), a framework to understand fairness. Tit4Tok realizes a triad of altruism (acts of kindness such as debt forgeriveness), reciprocity (Tit-for-Tat's mirroring cooperation), and monetary rewards as desired in the free market. Tit4Tok sheds light on incentives across the accounting and settlement layers. We present a comprehensive exploration of different factors when incentivized peers share bandwidth in a libp2p-based network, including uneven distributions emerging when gateways provide data to users outside the network. We quantified the Income-Fairness with the Gini coefficient, using multiple model instantiations and diverse approaches for debt cancellation. We propose regular changes to the gateway neighborhood and show that our shuffling method improves the Income-Fairness from 0.66 to 0.16. We quantified the non-negligible cost of tolerating free-riding (altruism). The performance is evaluated by extensive computer simulations and using an IPFS workload to study the effects of caching. Fairness, Bandwidth Incentives, Token-based Incentives, Networked Economy, Web3 Incentives, Reciprocity, Tit-for-Tat, Monetary-based Incentives, Decentralized storage networks, Prefix-based Routing Networks, Libp2p ## 1 Introduction Open decentralized systems (ODS)1 such as the Interplanetary File System (IPFS) or the Swarm network propose a tantalizing vision of a decentralized web and a fair data economy through large-scale collaborative ecosystems that depend on incentivized users, content generators, and operators of data-sharing platforms facilitated by peers moving and storing data across a network [11, 3, 1, 24]. Furthermore, evidence shows that despite some problems with the reliability and manageability of autonomous peer operators forming peer-to-peer (p2p) networks, these networks provide very cost-effective solutions. This is evident from the observation that even large companies benefit from these networks, especially in edge-computing, content distribution networks, and other systems using mechanisms similar to those used in the BitTorrent network [7, 13, 17, 21, 53]. Many ODS, including the systems mentioned above, have a networking stack that depends on the libp2p project [2]. This modular library provides key components to build decentralized networks equally accessible from anywhere in the world. On top of the networking layer, many systems include some kind of incentivization layer. For instance, Swarm uses the SWAP protocol and the postage stamps to incentivize bandwidth and storage sharing respectively. The topic of incentives has received a lot of attention in systems research in part thanks to the advancements in blockchain protocols and token-based incentivization mechanisms [57, 42, 5, 14, 23]. 
Nonetheless, the literature is vague when it comes to discerning how the resources shared in the network are and if peer operators receive a fair reward for contributing to the ecosystem. We know that imbalances in incentives can cause centralization problems, e.g., consensus power concentration, routing centralization, wealth concentration, bandwidth concentration, etc. However, a taxonomy of centralization in public systems [44] showed that consensus power has been widely studied, leaving, in comparison, a research gap for the factors that affect bandwidth incentives. This paper aims to help fill this gap by developing _Tit-for-Token_ or, in short, _Tit4Tok_, a framework to understand fairness when forwarding data by incentivized peers in decentralized storage networks. We also present the results from a comprehensive exploration of a compound of incentive mechanisms found in real-world networks using realistic workloads. Incentives often play a significant role in motivating peer operators to participate in a network. Well-designed incentives can be used to reinforce motivation. We define the _triad of altruism, reciprocity, and free enterprise_ as required incentives for a more fair data economy. In other words, our framework realizes a triad that comprises acts of kindness such as debt forgiveness, mirroring cooperation such as in standard Tit-for-Tat incentives, and monetary rewards as desired in the free market. We think that realizing this triad could potentially bring closer the vision of a fair data economy found in the Swarm network community. While the ideas of a fair data economy, data sovereignty, and decolozization of the digital space are flourishing among society, e.g., software developers, content creators, artists, investors, and EU policymakers, the literature lags behind. This is unfortunate for advancing networked systems research with societal impact. The majority of the papers on incentives published during the last three decades only focus on one or at most two aspects of the triad [23]. For an illustrative example, we need to retrace incentives to the reciprocity of the Tit-for-Tat (from here on abbreviated _Tit4Tat_) found in BitTorrent [10]. Despite the success of BitTorrent, and the good arguments about _Tit4Tat_, this mechanism, or many others that have been proposed later, is not solid enough to provide an alternative to the asymmetric wealth distribution of the current data economy. By paraphrasing Adam Smith, _if a peer served your network request in your distress, ought you to serve his request in his? How much ought you to serve him? When ought you to serve him? Now, or tomorrow, or next month? And for how long?_ These questions are closely related to the rationale behind _Tit4Tok_. Altruism brings up questions like: Ought you to forgive someone's debt? Will forgiveness create imbalances? How often do you ought to tolerate free riders? Reciprocity, instead, raises questions like: Ought you to remember the favors somebody made for you? Ought you to payback with another favor? Finally, free enterprise triggers the warning: Debts, or favors, are understood better in monetary terms. Ultimately, ought you to serve two peer requests equally? In more detail, our contributions are: \(\bullet\)_Incentive model:_ We introduce an abstract model to incentivize network peers to realize the triad of altruism, reciprocity, and free enterprise. 
\(\bullet\)_Incentive toolkit:_ We operationalize the Tit4Tok concept with a comprehensible open-sourced toolkit to simulate different instantiations of the model and emulate real decentralized network workloads. \(\bullet\)_Fairness analysis:_ We investigate potential sources for uneven distributions and their effect on fairness like for example, the gateways neighborhood. One of the sources of uneven distributions is when a few peers are originating large amounts of requests. This situation could happen when peers act as a gateway to take requests from clients that do not participate in the network and access a gateway, for example via a normal browser. \(\bullet\)_Measurements:_ We measure the Gini coefficient to quantify the income fairness in the accounting and settlement layer to 1) study the interplay of the income received by peers with network parameters, and 2) compare the distribution of received tokens with and without reciprocity and free service. An Income-Fairness of 0 reflects perfect fairness while 1 reflects maximal inequality. \(\bullet\)_Shuffling:_ We proposed shuffling to regularly change the gateway neighborhood and show that it can improve Income-Fairness from 0.66 to 0.16. \(\bullet\)_Altruism:_ We show that the cost of providing a limited free service may distributed unevenly among peers, possibly worsening income-fairness from 0.34 to 0.63, and propose pairwise limits based on address distance that distributes the cost more fairly (0.47). \(\bullet\)_Gateway Cliques:_ We investigate the benefit and centralization risk connected gateways pose, compared to the benefits powerful operators can gain from coordinating multiple gateways. \(\bullet\)_Caching:_ We show that caching can smoothen imbalance due to peer distribution in the address space, but increases imbalance due to gateways. We presented preliminary results on fairness in Swarm at [28], but all of the above contributions are new or significantly extended in this paper. ## 2 Tit-for-Tat: Preliminaries and Prior Work Incentive mechanisms are a way to coordinate participation, increase cooperation, and limit the selfish behaviors of network participants. This section gives a broad overview of _Tit4Tat_ and the general challenges and open problems of sharing computational resources in decentralized storage networks. Finally, we summarize the literature gaps and discuss what motivates _Tit4Tok_, particularly in the scope of bandwidth incentives. Peer-to-peer (p2p) networks are computationally-based human social systems that create a common pool of resources available for community participants in an open membership and, commonly, a permissionless environment. An unmanaged commons, where participants do not control the overuse of resources, risks the extremely unfair event of depletion of resources, resulting in a network collapse known as the "tragedy of the commons" [20] (a discussion is found in Appendix K). Thus, it is crucial to manage the computational resources of open and permissionless networks via effective incentive mechanisms. ### Tit-for-Tat: Strength and Limitations The _Tit4Tat_ mechanism has been intensively used in p2p networks, e.g., BitTorrent [10], with the general belief that it efficiently discourages free-riding, i.e., peers who only consume resources without giving back. It has a simple strategy in which each participant first cooperates and then mirrors, or reciprocates, the immediately observed behavior from its interacting peers, likely to incentivize mutual cooperation. 
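As a minimal illustration of the mirroring behaviour just described (our own sketch, not code from BitTorrent or any of the cited systems), the snippet below plays the classic Tit-for-Tat strategy in a repeated cooperation game: cooperate on the first round, then repeat whatever the opponent did last.

```python
def tit_for_tat(my_history, opponent_history):
    """Cooperate first, then mirror the opponent's last observed move."""
    if not opponent_history:
        return "cooperate"
    return opponent_history[-1]

def play(rounds, strategy_a, strategy_b):
    """Run a repeated game; each strategy sees (own history, opponent history)."""
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
    return hist_a, hist_b

# A free-rider that always defects is punished from the second round onwards.
always_defect = lambda mine, theirs: "defect"
print(play(4, tit_for_tat, always_defect))
# (['cooperate', 'defect', 'defect', 'defect'], ['defect', 'defect', 'defect', 'defect'])
```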
_Tit4Tat_ works by punishing bad behaviors in the future, i.e., _cheat me first, and I will cheat you back_. This concept, referred to as the "shadow of the future," was largely studied by Axelrod in computer tournaments playing the Prisoner's Dilemma cooperation game [4]. Later, the Pavlov strategy proposed a more robust strategy that included some degree of forgiveness or generosity between peers [37]. Altruism vs Free-Riding.The cooperation that surges from _Tit4Tat_ is known as "reciprocally altruistic behavior" [49]. But does it work in practice? One of the main criticisms is that _Tit4Tat_ can be easily subverted by participants who change their client's code to cheat (fail to reciprocate) their peers. The _Tit4Tat_ in BitTorrent can induce free riding [26], and entire files can be downloaded without reciprocating in a cheap free-riding attack [31, 39]. The problem of selfish and misbehaving nodes was widely studied in the literature, which offers a plethora of strategies, including punishments and/or variations of the _Tit4Tat_ to mitigate misbehaviors [26, 56, 16, 38, 55, 32, 30, 15]. On the contrary, altruistic behavior provides an alternative narrative, which explains why networks do not collapse [52]. Moreover, a generalized reciprocity behavior, in which peers do favors without direct expectations while relying only on somebody else willing to do a favor to them, can explain why peers can tolerate some free-riding behavior [25]. Tribler presented down-to-earth expectations about altruism with its social group incentives based on the "kinship fosters cooperation" argument [40]. Scientists found that even a small amount of altruism effectively improved the performance of a p2p live streaming service [9]. As other scientists noted, substantial research has been dedicated to discouraging selfish behavior with complex technical solutions or disregarding the cost overhead [12]. Considering all the above, our model _Tit4Tok_, which builds on _Tit4Tat_ and real-world decentralized networks, motivates altruistic behaviors continuously, and tolerates some free-riding. Mechanism Dependencies on Upper Layers._Tit4Tat_ reciprocates the immediately observed behavior, and going beyond that requires a public history of behaviors, a robust identification layer, or other complex mechanisms often found in reputation-based incentives. The overarching question is which peer is trustworthy or at least offers the best cost-quality service relationship [47]. Reputation-based systems present multiple challenges and can harm participants. For example, "Sybil attacks," in which a single entity controls multiple fake identities either to inflate its reputation value or discretion other participants[41]. Despite the centralized control, even YouTube cannot stop abusers from illicit monetization exploits like selling accounts[8]. Our paper focuses on the networking and incentive layers without adding dependencies to more complex mechanisms. _Tit4Tok_ has a mutual accounting layer but does not depend on upper layers like other reputation-based incentives. The mutual accounting layer could be further improved with ideas such as indirect reciprocity [36] to address manipulation by Sybils without the cost of a global reputation layer. Novel Business Models and Stakeholders._Tit4Tat_ drives peers in BitTorrent to exchange content of mutual interest. 
Even if reciprocating bandwidth instead of content is an improvement [16], the model behind it is a restricted version of a barter economy that suffers from the double coincidence of wants, i.e., peers would reward a forwarded chunk with another forwarded chunk. Thus, _Tit4Tat_ impairs complex business developments. What if a node sharing bandwidth is not interested in bandwidth for itself at that particular time or if the node participates in the network mostly to generate wealth (income)? Monetary or credit-based incentives can address the questions above by enabling more practical transactions among peers than in bartering economies. This area has generated significant interest [46, 18, 27, 34], especially with recent research focusing on cryptocurrencies and token-based incentives. Our paper acknowledges that participants may take on different roles, especially in large-scale systems; for example, some peers prefer to optimize bandwidth provision while others provide storage provision without forgetting those interested in consuming resources. Thus, we evaluate, for example, if peers acting as gateways change the rules of the game by bringing centralization or income inequalities. ### Literature Gap A recent ACM Computing Survey, which reviewed p2p incentive mechanisms based on monetary, reputation, and service published between 1993 and 2022, reported a large number of papers focusing on _Tit4Tat_ to improve service quality, many papers covering auction-based monetary incentives, and close to zero papers focusing on the interplay between DHTs and incentives [23]. Further, the results found in the literature are difficult to reproduce due to the lack of unavailable or discontinued simulation tools. To our disappointment, free services and societal aspects are largely ignored, e.g., the term "free," which appeared more than 60 times, is only used to discuss free-riding mitigation schemes. We argue that despite the abundant literature about _Tit4Tat_, researchers have not investigated deeply how a triad of altruism, reciprocity, and free enterprise can co-exist in a well-rounded incentive mechanism for decentralized networks. Hence, our paper differs in a number of fundamental ways from previous research on incentives. We extend or generalize reciprocal behavior to tolerate some free-riding for a limited time. This is comparable with freemium services, in which the initial service is free of charge, but a premium is charged for additional services. In this paper, peers may accept to move a few chunks per connection for free during a short time period, and requests done after reaching the thresholds are not free. By relaxing security concerns about free-riding and facilitating a limited free service without judging the reasons of the request, our incentives are more aligned with societal issues and with common strategies to increasing adoption. First, we believe that networks that provide some degree of free service contribute to secure the rights of freedom of expression and universal access. Second, the development of an inclusive society needs technologies that can be used even by the most marginalized. Third, empirical research in the mobile app market showed that the freemium option is a cost-effective business model to increase sales without incurring significant marketing costs [29]. Thus, limited free services may attract consumption and service adoption, helping achieve desirable network effects. 
Fourth, previous experiences in Gnutella showed that most users were free riders and did not adopt anti-free-riding measurements despite recommendations [22]. ## 3 Decentralized Storage Networks This section introduces the terminology used in this paper, gives an overview of content routing in the context of decentralized storage networks, and discusses the incentives in Swarm. Terminology.A _node_ (or its equivalent _peer_) connects to other peers (_neighbors_) in a peer-to-peer network. The network depends on participants sharing bandwidth to move data efficiently through the network for serving requests between data consumers (_user_) and suppliers (_storers_ or storage nodes). Users may become _peer operators_ themselves by installing a _network client_ to have their own peer (_originator_) connect directly to the network and originate requests. Alternatively, users may choose not to run their own peer and use a third party _gateway_. Gateways serve as an interface that connects the decentralized network to the outside world, including clients, applications, and other networks. Therefore, gateways receive requests from the outside and originate requests on behalf of users. Any piece of content (_chunk_) that is in the local store of a node becomes available to other nodes after being push-synced to the network. A storer node is responsible for storing a chunk if the chunk address falls within the node's area of responsibility. Unless there is an end-to-end connection between originators and storer, the chunk will travel one or more _hops_ following a _routing path_. The peers in the routing path (_forwarders_) perform _forwarding actions_) to forward the chunk one hop from its current location on the path to the next closer location until it reaches the destination. Such content-delivery service benefits the community of users, content creators, and other stakeholders; ultimately, it enables the network storage service that pays to the storer. Bandwidth, therefore, is a valuable resource for decentralized storage networks. ### Peer-to-Peer Networking Stack The open-sourced libp2p library [2] maintained by Protocol Labs includes many components to implement the network layer of any decentralized storage system. This resource is used by many systems, including the InterPlanetary File System and Swarm to do the peer and content discovery and routing. One key component is the Kad-DHT, which implements the Kademlia Distributed Hash Table (DHT) subsystem largely based on the Kademlia [33], and expanded with notions from S/Kademlia [6]. The library manage connections at the transport layer, tolerating multiple protocols and protecting peers from some security problems. The Kademlia distributed hash table (DHT) allows to reach any other network peer in a few hops while maintaining a small number of direct connections. Each peer \(v\) has a routing table that contains the addresses of other peers but without requiring maintaining a direct connection with them. The routing table is organized based on a prefix length and a distance metric and maintained in a \(k\)-bucket data structure. The \(b\)-bit address space is organized with \(k\)-buckets, where a bucket contains up to \(k\) peers' addresses that share a prefix of length \(i\) with peer \(v\) address, with \(i\in[0,b]\). The _distance metric_ helps to find the closest peers to a specific key in the routing table. 
The distance between two keys is the bitwise exclusive-or (XOR) of the SHA-256 hash of the two keys, for example a distance 0 means that both keys are identical, and a distance 1 means that one bit is different. Peers close to each other in the address space form a _neighborhood_ that is responsible of storing multiple chunk replicas. Appendix A provides insights about the routing tables in Kademlia and their construction process. ### Swarm and its SWAP Protocol Swarm is a peer-to-peer network with ongoing efforts to provide storage and communication services globally. The mission of the project, as published on its website [3], is: "to shape the future towards a self-sovereign global society and permissionless open markets by providing scalable base-layer data storage infrastructure for the decentralized internet." According to the writings of the Swarm founder [50], Swarm aims at developing a fair data economy, defined as "an economy of processing data characterized by fair compensation of all parties involved in its creation or enrichment." Networking Layer.Swarm was designed to thoroughly integrate with the devp2p Ethereum network layer as well as with the Ethereum blockchain for domain name resolution (using ENS). Later, it adopted the libp2p library with some features relevant to this paper that distinguish Swarm from other networks summarized here. First, the network distributes the content through incentivized peers, which sync chunks directly in the DHT up to the redundantly synced area of responsibility for each chunk. This allows content-addressing with fine-granularity, that in term can reduce imbalances caused by popular files, since the content is distributed among all the peers responsible for the chunks. Second, chunks are synced and retrieved with the aid of incentivized peers, which forward chunks using a variant of Kademlia called forwarding Kademlia. An originator initiates a request that is relayed via forwarding nodes \(F_{0},...,F_{n}\) all the way to storer node S, the storer node closest to the chunk address. The chunk is then delivered by being passed back along the same route to the downloader. It is claimed that forwarding Kademlia provides some degree of ambiguity and deniability to any peer sending a chunk. Incentives Layer.Swarm includes a built-in incentives layer, which includes bandwidth incentives to achieve speedy and reliable data provision and storage incentives to ensure long-term data preservation. Since Swarm's initial attempt to incentivize forwarding and storing chunks in the Swap, Swear, and Swindle protocol [51], various consistent software development iterations have followed, combining off-chain communication and settlements with on-chain registration and enforcement. Swarm uses BZZ, an ERC-20 token created on the Ethereum network that can be bridged to other networks through Omnichain for Gnosis Chain or Celer Network for Binance Smart Chain. The supply of BZZ is defined by bonding curves that may decrease or increase the token supply. The incentive system is enforced through smart contracts on the Gnosis Chain blockchain and powered by the xBZZ token 2. In order to upload and store data to Swarm, a user needs to have postage stamps. Postage stamps can be purchased in batches with xBZZ. The monetary incentives for storing data are auction-based and are not considered in this paper. Footnote 2: xBZ is BZZ bridged to the Gnosis Chain using OmniBridge. 
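Since both the routing described in Section 3.1 and the pricing function introduced later build on the same XOR-based proximity, a small sketch of that metric may help; it is our own illustration (not libp2p or Swarm code) and assumes 256-bit addresses obtained as SHA-256 hashes of some identifier or content.

```python
import hashlib

def address(data: bytes) -> int:
    """256-bit address taken as the SHA-256 hash of an identifier or chunk."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    """Kademlia distance metric: bitwise XOR of the two addresses (0 = identical)."""
    return a ^ b

def common_bits(a: int, b: int, bits: int = 256) -> int:
    """Length of the shared address prefix; roughly the k-bucket index of b as seen from a."""
    d = xor_distance(a, b)
    return bits if d == 0 else bits - d.bit_length()

peer = address(b"peer-1")          # hypothetical peer identifier
chunk = address(b"some chunk")     # hypothetical chunk content
print(common_bits(peer, chunk))    # proximity of the peer to the chunk address
```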
Business Layer.The Swarm Foundation aims to enable a scalable and self-sustaining open-source infrastructure for a supply-chain economy of data that provides end-users with a speedy and safe service. Its business model opens an intriguing possibility: developers get tools for creating and hosting decentralized applications (dApps), NFT metadata, and media files with zero hosting cost, independently of the number of people accessing a dApp, i.e., one person or one million would cost the same, due to in-built crypto-economic incentives of the network. The incentivized infrastructure encourages decentralization, inclusivity, and privacy as it relieves developers from the need to seek data monetization models and the support from the exclusive and concentrated venture capital to develop dApps or share content. ## 4 Tit-for-Token This section presents the design of _Tit4Tok_, its rationale, and the system model. ### Design Overview Our design is based on pairwise credit accounting, with a possibility for debt forgiveness and token payments that work together in a layered architecture. Figure 1 presents an overview of _Tit4Tok_ with its network, accounting and settlement layers (the exchange layer is excluded from _Tit4Tok_ but provided for completeness). On the _network layer_ peers \(\{p_{n};n\in N\}\) form a self-organized overlay layer using a DHT table and route requests using forwarding Kademlia, a variant of the Kademlia found in Swarm. Peer connections are bidirectional, meaning if peer \(p_{1}\) is part of \(p_{2}\)'s routing table, then also \(p_{2}\) is part of \(p_{1}\)'s routing table. On the _accounting layer_, peers maintain pairwise balances associated with each connection in the routing layer. Pairwise balances are not verifiable to others and thus do not need transactional or enforcement mechanisms. When receiving a chunk from a neighbor, balances are updated, and the sending neighbor is credited some accounting units. This happens independently on every hop in the route. We note that each peer \(p_{i}\) performing a forwarding action is credited some amount \(c_{i}^{in}\) from the previous peer on the route, and again credits some amount \(c_{i}^{out}\) to the next peer. We say that the difference between these two amounts \(c_{i}^{in}-c_{i}^{out}\) is the _reward_\(p_{i}\) receives for forwarding. The reward for storing is simply the credited amount. We note that the reward for a storing action is incentivizing bandwidth usage. The incentivization of long-term storage is done through storage incentives, which are out of scope for this paper. On the _settlement layer_, peers transfer utility tokens to each other. Settlements are used to rebalance pairwise accounts and may result in a monetary transfer. We assume settlements are realized through an off-chain payment, similar to Swarm's signed recepits. We also assume the existence of an exchange layer, where utility tokens received through a transfer can be converted to native currency. ### System Model We describe the system model in which the incentives operate. Normal Operation.We use _Tit4Tok_ in our toolkit to study how the tokens paid by originators get distributed in a well-functioning network. We assume no significant churn, massive failures, or massive amounts of Sybils exist in the system. Selfish (rational) clients.We assume that all, or at least some gateways, are willing to make a profit in the network. 
Thus, peers use the maximum amount of free service and pay the requests that exceed the threshold using a token transfer. The upload of chunks functions similarly to downloads, and can use the same layers. However, since uploads are less frequent, we focus on the retrieval of chunks. Also, in Swarm, chunk uploads are part of storage incentives. We do not model the exchange layer or the detailed mechanisms of token transfers, but assume they ensure finality and incur no fee or added cost. Such assumptions can be approximated by off-chain solutions where fees can be amortized over many individual settlements [19]. Figure 1: _Tit4Tok_ includes routing and accounting layers. The settlement layer transfers tokens through off-chain payments. The exchange layer is excluded. 1) A gateway requests a chunk, and 2-4) forwarding operators route the request to a storer and the chunk back to the gateway, sometimes for free or in reciprocity for services. 5) Pairwise balances are updated, and 6) an unbalanced account is 7)rebalanced through token transfer. 8) Tokens are sold on the exchange layer to extract monetary rewards. We assume there is a fixed exchange rate from utility tokens to accounting units and assume that utility tokens have a stable market price reflecting their usefulness. In the rest of this paper, we mostly ignore utility tokens. Instead, we say that a certain amount of accounting units is settled if the corresponding amount of utility tokens are being transferred. ### Debts: Forgiveness and Payments Reciprocity is operationalized via a pairwise credit accounting balance. Peers accumulate debts with their neighbors in their respective pairwise accounting balances. The services that peers provide to each other may balance out without requiring settlement via utility tokens. This way, peers can provide service without requiring settlement as long as the pairwise balance is below a given threshold. If not, there are two possibilities: 1) once the debt hits the threshold, the indebted peer can pay tokens to reduce the debt through the settlement layer, and the creditor receives a monetary reward; 2) once the debt hits the threshold, the indebted peer can wait for the refresh rate to get the debt reduced unless the creditor rejects the connection. If a peer always waits for 2), it receives a limited service for free. Figure 2 shows an example of reciprocity. Peers A and B initialize the pairwise accounting channel with balance zero and a threshold in step \((1)\). A sends a request and goes into debt to B, and the balance tilts towards A in step \((2)\). Then B sends a request, and A provides the service, and the balance goes back around zero in step \((3)\). In step \((4)\), B has reached the maximum debt and hits the threshold. A stops providing service until B brings the balance below the threshold again. ## 5 Fairness This section gives a primer on fairness and introduces the metrics used in this paper. ### Fairness in ODS The notion of fairness is generally rather broad and multifaceted, even when considered in a specific context of ODS. It can refer either to participants' perceived fairness of a system (and its components) or to some observable agreed-upon metric. While both of these aspects are important for ODS design, here we focus on fairness in the latter sense. Wierzbicki offered an interdisciplinary view of fairness in ODS3. Social psychologists judge fairness mainly through three perspectives: distributive, procedural, and retributive fairness. 
Distributive fairness is about the minimization of the differences between the shares or profits that equally entitled agents collect from a system. As an example, operators willing to contribute similar resources to the system, i.e., running nodes on a similar infrastructure, should receive the same token rewards. Procedural fairness refers to impartial and fair processes, e.g., having transparent and impartial mechanisms that reward peers with honest behavior. Retributive fairness is about rule violations and how sanctions are in proportion with the violations, i.e., ensuring that rule-breakers are fairly treated. Footnote 3: In his book the acronym ODS refers to open distributed systems while in our paper, it refers to open decentralized systems. Fairness also has a mathematical foundation and can be measured with some agreed metrics, as discussed later in the section. The main focus of this paper lies in distributive fairness. Figure 2: Example of pairwise credit accounting balances ### Measuring fairness In the following, we define Income-Fairness as a measure of distributive fairness. Income-Fairness measures how evenly the tokens, paid by gateways for downloading chunks, are distributed in the network. For this, we determine the net income of peers by summing the payments peers receive during a given time interval and subtracting the payments performed. In the latter, we ignore payments performed by gateways, to pay for the download of requested chunks.4 We then compute the Gini coefficient over the net income of different peers. While this metric of inequality has certain limitations in the context of complex macroeconomic models [54], it does provide helpful insights for the measurements of decentralized systems [45]. The Gini coefficient is a measure of income equality with values between 0 (good) and 1 (bad). For example, if a fraction \(f\) of peers equally share all the income, the Gini coefficient is \(1-f\). Footnote 4: Further details are provided in Appendix B. Perfect Income-Fairness implies that the total income received by different peers is equal. Assuming that every peer operator runs one peer on similar hardware, Income-Fairness ensures a fair reward for making these resources available. Assuming equal rewards per action, Income-Fairness requires an equal distribution of actions to peers. The Income-Fairness may differ with the time observed. A few chunks requested in a large network may not be sufficient to engage all nodes and, therefore, will not generate income at all nodes. On the other hand, a fair distribution of income also during short periods and with small workloads indicates a more stable income and thus may better motivate nodes to stay in the system. We note that Income-Fairness can be measured both at the accounting or settlement layer, based on whether they are applied to rewards credited on the accounting layer or settlements transferred on the settlement layer. The Gini coefficient is a relative measure, so Income-Fairness does not provide information about how many tokens peers actually receive. This is well suited for our study since we assume that the value of tokens represents their utility. Thus, if more tokens are paid and received for the same service, we assume their value to be less. ## 6 _Tit4Tok_ Implementation In this section, we give a detailed description of the accounting layer of our _Tit4Tok_ model. 
We show how much credit is sent for one chunk, discuss the parameter settings for the debt threshold and limited free service layer, and discuss how and when peers transfer tokens to settle their debt. We also discuss some challenges that arise with specific settings. ### How many accounting units for a chunk? We consider two models for the credit peers receive for replying to a request: constant reward and distance-based credit. This happens on the accounting layer of our model. Remember that for forwarding actions, reward \(c_{i}^{in}-c_{i}^{out}\) is the difference between accounting units credited to a peer \(p_{i}\) and by \(p_{i}\) to the next peer. Here \(c_{i}^{out}=c_{i+1}^{in}\) Constant reward.In this model, all peers on the path of a request receive the same reward. This model is not practical but is a useful baseline for our evaluation. With constant reward, the income a peer receives is proportional to the number of answered requests. Distance-based credit.This model uses the XOR distance to find the accounting units credited for a chunk. The credit \(c_{i}^{in}\) is calculated based on the distance of peer \(p_{i}\) from the chunk. The net credit received by \(p_{i}\), \(c_{i}^{in}-c_{i}^{out}\) is determined by the distance over which \(p_{i}\) forwards the request and reply. In this way, peers are motivated to forward to the peer with the shortest possible distance to the destination. In this model, the credit sent does not depend on knowledge of the path; e.g., the credit the originator sends to the first hop in the path does not depend on the length of the path. Distance-based credit is used in the Swarm network. As the Swarm network, we use Equation 1 to determine \(c_{i}^{in}\). Here \(\texttt{commonBits}(p_{i},\textit{chunkAddress})\) returns the number of bits in the common prefix of the two addresses. The constant \(+1\) in the equation ensures that the last peer in the route (performing the storage action) receives at least one accounting unit. \(\omega\) is a configurable parameter. When sending a request to peer \(p_{i}\), \(\texttt{commonBits}(p_{i},\textit{chunkAddress})\) is at least 1. Thus \(\omega\) is the maximum amount for \(c_{i}^{in}\). Therefore, \(\omega\) is a useful unit also for other parameters in pairwise accounting. We note that in Swarm, \(c_{i}^{in}\) is additionally multiplied by a constant _price_, which we omit for simplicity. \[c_{i}^{in}=(\max(0,\omega-\texttt{commonBits}(p_{i},\textit{chunkAddress}))+1) \tag{1}\] We could find no information about how to initialize \(\omega\) in Swarm documents. We evaluate different parameters for \(\omega\) but keep \(\omega\geq\delta\), where \(\delta\) is the storage depth, another parameter determining whether a peer is responsible for storing a chunk. A peer \(p_{i}\) is responsible to store a chunk if \(\texttt{commonBits}(p_{i},\textit{chunkAddress})\geq\delta\). Setting \(\omega\geq\delta\) thus ensures \(\omega>\texttt{commonBits}(p_{j},\textit{chunkAddress})\) for a peer \(p_{j}\) not storing a chunk. Peer \(p_{j}\) can forward the request to a different peer \(p_{j+1}\) closer to the chunk, and receive \(c_{j}^{in}-c_{j+1}^{in}>0\). Thus, a forwarding peer receives a non-zero net credit. A larger \(\omega\), e.g., \(\omega>>\delta\) propagates a larger credit to the peers located on the last hops on the path. Figure 3 shows an example of how many credits are sent using Equation 1. 
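The sketch below evaluates Equation 1 along a short route. The proximities used (3, 7, and 11 shared bits for F1, F2, and S) are hypothetical values chosen so that, with \(\omega=11\), the credits reproduce the payments quoted in the Figure 3 caption (9, 5, and 1 accounting units).

```python
def credit(omega: int, common_bits: int) -> int:
    """Equation 1: accounting units credited to a peer at the given proximity to the chunk."""
    return max(0, omega - common_bits) + 1

# Hypothetical proximities (shared prefix bits with the chunk) along the route G -> F1 -> F2 -> S.
route_proximity = {"F1": 3, "F2": 7, "S": 11}

for omega in (11, 16, 30):
    credits = {peer: credit(omega, p) for peer, p in route_proximity.items()}
    rewards = {
        "F1": credits["F1"] - credits["F2"],  # what F1 keeps after paying F2
        "F2": credits["F2"] - credits["S"],   # what F2 keeps after paying S
        "S": credits["S"],                    # the storer keeps the full credit
    }
    print(omega, credits, rewards)

# With omega=11: G pays 9 to F1, F1 pays 5 to F2 (keeping 4), and F2 pays 1 to S.
# Larger omega values shift a larger share of the credit towards the last hops.
```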
In Section 7.1.1 we analyze suitable choices of the \(\omega\) parameter and show the effect distance-based rewards have on Income-Fairness in Section 7.1.2.

### Parameters for reciprocity and forgiveness

Reciprocity is parametrized by the threshold for maximum accumulated debt. Intuitively, a larger threshold parameter can increase reciprocity, allowing a peer first to receive multiple chunks and later repay by serving multiple requests. However, a small threshold limits how many chunks a peer may have to provide to free-riding neighbors without receiving reciprocation. Comparing constants in the code, we found that Swarm uses a constant threshold parameter on all connections, which is equal to 400 times the maximum credit sent for a single chunk, i.e., \(400\times\max(c_{i}^{in})\). We model this by using a constant default threshold parameter of \(1\times\max(c_{i}^{in})\). Thus, at least one chunk can be retrieved on any connection without requiring settlement. According to Equation 1, \(\max(c_{i}^{in})=\omega\). In this work, we are interested in how the tokens, transferred after reaching the threshold, get distributed in the network. Thus, our model keeps the same characteristics as Swarm, but thresholds saturate faster, allowing more efficient evaluation.

Forgiveness in _Tit4Tok_ is parametrized by a _refresh rate_ expressed in accounting units per second. Thus, after \(\Delta\) seconds, a peer is forgiven up to \(\Delta\times\textit{refresh rate}\) accounting units. Note that a peer is not forgiven more than its current debt, which is limited by the threshold discussed above. Swarm sets the _refresh rate_ such that the complete debt threshold is forgiven after waiting 20 seconds. Based on the maximum reward for a chunk, this reflects a free layer of at least 20 chunks per second on every connection. Again, to allow more efficient evaluation, we use a smaller _refresh rate_ of \(1/2\times\max(c_{i}^{in})=\omega/2\) per second.

The introduction of reciprocity and limited free service on all connections poses a significant challenge. We show in Section 7.2.1 that a constant _refresh rate_ distributes the cost of limited free service unevenly on the path, negatively impacting Income-Fairness. This is because both the accounting units sent using distance-based credit and the frequency of use differ between connections. We designed a different parametrization of reciprocity and limited free service that adapts these parameters based on the distance between the peers adjacent to a connection. Our evaluation shows that this pairwise parametrization can reduce the negative impact the limited free service has on Income-Fairness. Details on our pairwise adjusted threshold and _refresh rate_ are given in Appendix D.

### When to settle debt with tokens?

When reaching the debt threshold, peers need to decide either to wait for the _refresh rate_ and rely on the limited free service or to settle debt by transferring tokens. We assume that all peers make use of reciprocity and the limited free service and do not settle debt unless it is required. Peers, therefore, try to request chunks from a neighbor for which the pairwise balance allows retrieval without a settlement, and apply the _refresh rate_ whenever possible. We also assume that peers transfer as few tokens as possible. Thus, when requesting a chunk without free bandwidth available, a peer will only settle enough debt to be able to request the current chunk. We discuss other approaches to settlement in Appendix J.
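Below is a minimal sketch (our own simplification, not the Swarm client or our simulator code) of the creditor-side balance of one connection, combining the three mechanisms just described: a debt threshold of \(\omega\), forgiveness at a refresh rate of \(\omega/2\) accounting units per second, and settlement of only as much debt as the next request needs.

```python
class PairwiseBalance:
    """Creditor-side view of one connection's debt, in accounting units."""

    def __init__(self, omega: int, refresh_rate: float):
        self.debt = 0.0                    # how much the neighbour currently owes us
        self.threshold = omega             # default threshold: 1 x max credit = omega
        self.refresh_rate = refresh_rate   # forgiven units per second (omega / 2 here)
        self.last_refresh = 0.0

    def refresh(self, now: float) -> None:
        """Forgive debt accrued since the last refresh (limited free service)."""
        forgiven = (now - self.last_refresh) * self.refresh_rate
        self.debt = max(0.0, self.debt - forgiven)
        self.last_refresh = now

    def request(self, price: float, now: float) -> float:
        """Serve a request worth `price`; return the settlement the debtor must transfer."""
        self.refresh(now)
        settlement = max(0.0, self.debt + price - self.threshold)  # settle only what is needed
        self.debt += price - settlement
        return settlement

balance = PairwiseBalance(omega=16, refresh_rate=8.0)
print(balance.request(price=9, now=0.0))  # 0.0: still within the debt threshold
print(balance.request(price=9, now=0.0))  # 2.0: only the excess over the threshold is settled
print(balance.request(price=9, now=2.0))  # 0.0: the full threshold was forgiven after 2 s
```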
Our settlement model differs from the default settings in Swarm. According to these default settings, originators always settle their debt, while forwarding peers never do so. If forwarding peers never transfer tokens, the limited free service imposes a limit also on originators settling their debt, and a paying originator may have to wait for the _refresh rate_ on a distant hop of a request's path. If forwarding peers settle debt with their neighbors, it allows them to serve more requests from paying neighbors and also receive more tokens.

Figure 3: Gateway \(G\) seeks chunk \(C\) at depth \(\delta=11\). Nodes \(F1\) and \(F2\) route to destination \(S\). Three \(\omega\) values (\(\omega=11,16,30\)) calculate \(c_{i}^{in}\). Payment amounts on red arrows: \(\omega=11\), \(G\) pays \(9\) to \(F1\), \(F1\) pays \(5\) to \(F2\), and retains \(4\). Gray shades depict common chunk address bits.

In our simulations, both forwarding peers and originators perform settlements, but only after using reciprocity and the limited free service. In this setting, a paying originator is not limited by the free service layer. However, it may happen that forwarding peers spend more tokens than they receive, resulting in _negative income_. Such negative income is due to bottlenecks in the network, where a peer may give more free bandwidth to others than it receives. In Appendix J.2, we provide measurements of the effective download rates gateways can achieve when forwarding peers do not perform settlements. We also show that both our pairwise limited free service and, especially, a more balanced network can reduce the occurrence of _negative income_. It is challenging to ensure that paying originators do not have to wait for the _refresh rate_ while also preventing negative income, which may discourage peers from sharing their bandwidth. We believe that this problem can be solved by more complex policies governing when forwarders should settle debt, but we leave the details for future work.

## 7 Evaluation

In this section, we evaluate the different incentive mechanisms and parameters in our _Tit4Tok_ model. To better understand the different mechanisms, we evaluate them incrementally. To investigate distance-based credit, we assume that no reciprocity or limited free service happens and study the distribution of accounting units. We then investigate the effect of adding first reciprocity and then limited free service. Finally, we evaluate the effect of various additional mechanisms like caching and shuffling of connections. We focus on the effect that different parameters have on Income-Fairness, but also show other findings relevant to the configuration and effectiveness of these mechanisms.

**Parameters** The main parameter for distance-based rewards is \(\omega\), which we vary in our experiments. As explained in Section 6, we bind the _threshold_ for maximum debt to \(\omega\) and the _refresh rate_ to \(\omega/2\). These settings ensure that at least one chunk can be received for free over any connection every 2 seconds. We use \(\omega=16\) as the default parameter, since we find that it most evenly spreads the accounting units from the originator on a single path. Further, we explore the impact of adjusting the bucket size \(k\), which determines the number of connections maintained in Kademlia. Our experimentation involves a network comprising \(10,000\) peers. For comparison, in September 2023, the Swarm network contained 8457 active and 6179 staked nodes5. We use a storage depth \(\delta=11\).
This ensures that on average, four peers are responsible to store a chunk, similar as in the Swarm network. Unless otherwise noted, peer addresses are picked uniformly at random. Thus, the actual number of peers responsible to store a chunk may vary significantly. By default, we set the bucket size to \(k=8\). This results in each peer maintaining 80 connections. This aligns with values in the Swarm network, recently changed from 4 to 20. Connections are used in both directions and chosen uniformly at random. Footnote 5: According to [https://swarmscan.io/](https://swarmscan.io/) **Workload** We use a uniform distribution of chunks, meaning that any address will be requested with the same probability. While in real workloads, some files will be more popular than others, even popular files will contain many chunks evenly distributed among peers. Since our chunk addresses are mainly used to determine which peers should be contacted, we believe the assumption of uniform distribution is reasonable. We also did adjust a dataset showing the workload of a public IPFS gateway to chunk addresses. In our simulation _originator_ nodes represent such gateways. As we report in Appendix G (Figure 13), the IPFS workload gives similar results as our uniform chunks. We change the number of peers that function as originators varying from 0.5% (50 peers) to 100%. The workload is evenly distributed among the originators. Thus, all originators request chunks at the same frequency. We typically use a workload of 10 million chunks requested over 10 seconds. We use larger workloads where it was necessary to make measurements converge. **Simulation tool** We did implement a tool6 that simulates routing in the Kademlia network, pairwise accounting, applying reciprocity and the limited free service, and records settlements performed. Further details on the tool are given in Appendix C. To mitigate the influence of randomness in our experiments, each value reported is the average over five distinct network graphs. Footnote 6: [https://github.com/relab/bandwidth-incentive-simulation](https://github.com/relab/bandwidth-incentive-simulation) ### Distance-based credit In the following, we analyze what effects distance-based credits have on balances and fairness on the accounting layer. Distance-based credits are parametrized by the maximum price \(\omega\). We performed extensive simulations, varying the parameter \(\omega\) and the bucket size \(k\) in peers' routing table. #### 7.1.1 Income distribution on the path We investigate different parameters for \(\omega\). A larger \(\omega\) means that more accounting units need to be sent, but it also changes how these units are distributed among the different peers on a path. The distribution is more relevant to us since we assume that a higher cost in accounting units would result in a lower token value. Figure 4 shows how accounting units are distributed with varying \(\omega\) values. **Observation 1:** A larger \(\omega\) parameter gives a smaller fraction of the reward to the first hop in the route. #### 7.1.2 Income-Fairness on the accounting layer In the following, we investigate the effect of different parameters on Income-Fairness. Figure 4(a) shows the effect of the distribution of request originators (corresponding to gateways in a real life deployment) among peers. Figure 4(a) shows Income-Fairness for different values of \(\omega\). 
It also shows the difference between networks, where peer addresses are picked uniformly at random, and a network, where every peer receives 2 choices for his address and picks the one with fewer peers within the storage distance. This results in a significantly more even distribution of peer addresses [35]. With \(1\%\) or fewer peers as originators, income becomes significantly unequal, especially for smaller \(\omega\) (see \(\omega\) =16). With 10% or more peers as originators the more even peer distribution following 2 choices results in lower income fairness. Changing the parameter \(k\) from 8 to 16 does not impact Income-Fairness. Figure 4: Fraction of rewards that goes to each hop on the route from originator to storer on average, with \(k=8\), and different \(\omega\) values. Larger \(\omega\) gives a smaller fraction to the first hop. Figure 5: Uneven income distribution, due to storage and forwarding discrepancy, and the impact of \(\omega\) on income fairness **With 0.5% originators** Income-Fairness is 0.48 with \(\omega=16\) and reduces to 0.33 with \(\omega=30\). Figure 4(b) investigates the causes for this inequality. Figure 4(b) shows the fraction of the total accounting units peers receive based on the average hop on which they are located on routes in the experiment. For example, a peer located at hop 1 for 1000 downloaded chunks and hop 2 in 2000 downloaded chunks in the experiment has an average hop of 1.66. Accounting units are shown as the ratio of even share, where a value of 2 means that a peer receives \(2\times 1/n\) of the total accounting units. The Figure also shows values for a constant reward distribution, where every action is rewarded with a constant value. This also shows the distribution of load in the system. As can be seen in Figure 4(b), with \(\omega=16\), 10% of the peers receive \(37\%\) of the total income. The constant distribution shows, that the same 10% of peers also perform \(36\%\) of the actions. We note that a peer can only be on hop 1, if it has a connection to an originator. With \(\omega=30\) a larger share of the reward is given to the storing action, which is distributed more evenly among peers. This results in better income fairness. While Figure 4(a) only contains data for k=8 and k=16 experiments varying k from 4 to 32 have shown no effect on the Income-Fairness. **Observation 2:** Concentration of few originators results in unequal income, independent of the connectivity. Especially, peers connected to an originator (hop 1) will receive more requests and accordingly more income. **Swarm improvement proposal:** Observation 2 suggests that Swarm should facilitate and encourage different access modes than through a web gateway (originator) to avoid an uneven load. When uneven load cannot be avoided, a larger \(\omega\) can still improve Income-Fairness. **With 100% originators** Figure 4(a) shows a Income-Fairness of 0.13 (0.19) for \(\omega=16\) (\(\omega=30\)). This value is reduced to 0.08 (0.12) by giving peers 2 choices for their addresses. In Figure 4(c) we further investigate this effect, correlating a peer's income with the number of other peers located in its neighborhood. Peers in a densely populated neighborhood receive a smaller part of the income, since all neighbors store the same chunks. With addresses picked uniformly at random, we see on average 58.6 out of 2048 neighborhoods with only a single peer, and 214 neighborhoods with 8 or more peers. 
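These occupancy statistics can be reproduced with a few lines of simulation. The sketch below maps peers into the \(2^{11}=2048\) storage neighborhoods and compares uniform address assignment with the 2-choices variant described above (each peer samples two candidate neighborhoods and joins the one with fewer peers); exact counts differ from run to run and from the averages reported above, so only the qualitative gap should be read from it.

```python
# Toy reproduction of the neighborhood-occupancy effect: 10,000 peers mapped into
# the 2**11 = 2048 storage neighborhoods, with and without the "2 choices" rule.
import random
from collections import Counter

def neighborhood_sizes(num_peers=10_000, depth=11, two_choices=False, seed=1):
    rng = random.Random(seed)
    occupancy = [0] * (1 << depth)
    for _ in range(num_peers):
        choice = rng.randrange(len(occupancy))
        if two_choices:
            alt = rng.randrange(len(occupancy))
            choice = alt if occupancy[alt] < occupancy[choice] else choice
        occupancy[choice] += 1
    return Counter(occupancy)  # maps neighborhood size -> number of neighborhoods

print("uniform   :", sorted(neighborhood_sizes().items()))
print("2 choices :", sorted(neighborhood_sizes(two_choices=True).items()))
```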
In the networks where peers choose their address in the sparser of the two candidate neighborhoods (2 choices), we did not see any neighborhood with only a single peer, nor any with 8 or more peers.

**Observation 3:** Random assignment of peer addresses leads to sparse and dense neighborhoods, which again results in unequal income.

**Swarm improvement proposal:** Observation 3 suggests that peers should not simply choose their addresses at random but rather should try to achieve placement in a sparsely populated neighborhood. Simply using the sparser of two randomly selected neighborhoods (2 choices) has a significant impact. To the best of our knowledge, Swarm currently does not contain mechanisms to achieve such balancing.

### Reciprocity and Limited free service

This section shows the effect reciprocity and the limited free service have on Income-Fairness at the settlement layer. We assume that all peers perform settlements. All measurements use distance-based credit on the accounting layer. The limited free service allows peers to download some chunks for free every timestep. When measuring fairness on the settlement layer, it is therefore important to vary not only the number of chunk download requests raised to the system but also the rate at which these requests are issued. We let every originator retrieve between \(10\) and \(2,000\) chunks per timestep. Assuming a chunk size of 4kB as in Swarm, these numbers result in request rates between \(40kB/s\) and \(8MB/s\).

Figure 6: _Reciprocity and Income-Fairness_ The effect of reciprocity on Income-Fairness with \(0.5\%\) originators, each requesting \(2,000\) chunks per second. Heatmaps show the saturation of edges with and without reciprocity.

#### 7.2.1 Distributing the cost of free service on the path

In the following, we investigate the effect reciprocity and the free service layer have on Income-Fairness. In Figure 6, we investigate reciprocity without applying the _refresh rate_ from the limited free service. Without the _refresh rate_, peers still go into debt with each other; the threshold for maximum debt is set to \(\omega\). With reciprocity, peers balance out debt given to each other. To better understand the effect of reciprocity, we also implemented a variant where peers can go into debt with each other until the threshold is reached, but the pairwise debts of neighbors are not subtracted from each other. We refer to this variant as _no reciprocity_. With no reciprocity, connections hit the threshold significantly faster than with reciprocity, as can be seen from Figure 5(a) and 5(b).

**Observation 4:** Reciprocity worsens Income-Fairness since requests on the first hop more often produce settlements.

In Figure 7, we investigate the effect of the limited free service on Income-Fairness. We also show results of our _pairwise limited free service_, where the threshold and _refresh rate_ are set based on the proximity of adjacent peers. We ensure the adjusted threshold is still larger than the accounting units required for any chunk forwarded on that connection. More details on how we adjust the _threshold_ and _refresh rate_ can be found in Appendix D; a simplified sketch of this pairwise scaling is given below. Figure 7 shows that the limited free service can further worsen Income-Fairness, compared to using only reciprocity. While Figure 7 shows results for \(0.5\%\) gateways, a larger number of gateways yields similar results (see Appendix F).

**Observation 5:** The default variant of the limited free service significantly worsens Income-Fairness.
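To make the pairwise variant concrete, the sketch below shows one plausible way the per-connection limits could scale with peer proximity, mirroring the global defaults (_threshold_ \(=\omega\), _refresh rate_ \(=\omega/2\)) while keeping the threshold above the price of a single forwarded chunk. The exact scaling rule is defined in Appendix D, so the formula here is an assumption made only for illustration.

```python
# Hypothetical sketch of the pairwise limited free service limits.
# Assumption: limits shrink with the proximity (shared address bits) of the two
# adjacent peers, since chunks forwarded between close peers also cost fewer
# accounting units; the paper's exact rule is given in its Appendix D.

def pairwise_limits(omega, peer_proximity, max_chunk_price=1):
    """Per-connection debt threshold and per-timestep free allowance."""
    threshold = max(omega - peer_proximity, max_chunk_price + 1)
    refresh_rate = max(threshold // 2, 1)
    return threshold, refresh_rate

# With the default omega = 16, distant peers keep limits close to the global
# defaults (16, 8), while close neighbours get proportionally smaller limits.
for proximity in (0, 4, 8, 12):
    print(proximity, pairwise_limits(omega=16, peer_proximity=proximity))
```

The idea, under this assumed scaling, is that the free allowance on a connection stays roughly proportional to the price of the chunks that actually traverse it.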
Our pairwise limited free service mitigates this difference, resulting in Income-Fairness similar to the variant not providing free service. This shows that our pairwise limits distribute the cost of free service more equally among peers on a path.

**Observation 6:** Our pairwise limited free service significantly improves Income-Fairness.

**Swarm improvement proposal:** Following Observation 6, Swarm could achieve better Income-Fairness by introducing our adaptive free service limit.

Figure 7: _Limited free service and Income-Fairness_ The effect of the limited free service on Income-Fairness with \(0.5\%\) originators. Variants included: _accounting only_ considers only the amounts credited on the accounting layer; _reciprocity_ does not provide free service, setting the _refresh rate_ to zero; _free service_ uses the _refresh rate_ \(\omega/2\); _pairwise free service_ adapts the threshold and _refresh rate_ to peer distance.

#### 7.2.2 Centralization risk through gateways

According to Observation 2, peers adjacent to an originator that generates many requests as a gateway receive more income. Accordingly, such originators can reduce their overall cost by connecting to other originators who generate many requests. Table 1 shows how a cluster of 5 or 100 originators can reduce its cost, and how many of the hops are performed between originators. With an average path length of 2.6 hops, if 45% of hops are performed between 100 originators, other peers are mostly used for the retrieval of chunks but not for their forwarding. Enabling reciprocity further increases this effect, since peers prefer to exchange bandwidth for bandwidth rather than settling with tokens. This shows that originators are incentivized to form clusters, monopolizing a large fraction of the traffic. We also show the cost an operator of multiple gateways, acting as originators, can achieve by requesting each chunk from the originator closest to the chunk address. In this variant, the originators collude through external connections without pairwise accounting to form an _external clique_. Table 1 shows that by connecting inside the system, gateway operators can achieve benefits similar to those of an operator of multiple originators.

**Observation 7:** Originators are incentivized to form clusters but do not gain significant benefits from external cliques.

#### 7.2.3 Caching

Forwarding Kademlia allows peers to cache chunks they forward to an originator. We implemented such caching and evaluated its effect on Income-Fairness. Since caching requires a non-uniform workload, we use the adapted IPFS workload for this evaluation. Our experiments detailed in Appendix H show that with few originators, caching may increase Income-Fairness from 0.57 to 0.71. However, if the cache at the originators is very large, Income-Fairness reduces again (0.67), since mostly unique chunks are requested from the network. With many originators (e.g., 20%), caching may have a positive effect, e.g., improving Income-Fairness from 0.26 to 0.23. Caching can also reduce the imbalance caused by neighborhoods with few peers, reducing the ratio of tokens that peers in a neighborhood of size 1 receive from 3.3 to 2.9 times the fair share.

**Observation 8:** Caching can smooth imbalances related to the peer distribution in the address space, but increases imbalances with few originators.

### Income Distribution Across Path

One of the reasons behind unfairness is the static routing table in the underlying DHT.
The limited number of neighbors, together with a condensed source of requests puts the nodes on the top buckets of the originators in an advantageous position. The topmost bucket of originators is responsible for handling half of the requests while containing only \(k\) nodes. Figure 8 shows that if originators connect to significantly more nodes (\(k=256\)), income fairness decreases by \(0.2\). However, this solution may not scale since it requires originators to maintain thousands of connections. Instead, we propose a shuffling mechanism where peers regularly change their neighbors. Thus, peers can have a large number of neighbors distributed over time. Appendix I discusses how shuffling can be implemented and the side effects this has. To evaluate how shuffling works, we considered two variants of shuffling: 1. Only originators shuffle their neighbors, 2. All nodes shuffle their neighbors. We ran these variants in the _Tit4Tok_ model with reciprocity and pairwise limited free service. Figure 8 shows how shuffling affects the Income-Fairness in the network. We observe that shuffling the neighbors of originators eventually converges to shuffling every node's neighbors. During our experiment, we found that shuffling originator's neighbors results in a \(13.6\%\) reduction in the cost of each chunk since originators may abandon connections with debt. ## 8 Conclusion In this work, we present _Tit-for-Token_, a framework to study token-based incentives across accounting and settlement layers of decentralized storage systems. We propose the triad of altruism, reciprocity, and monetary incentives as a compound of worthwhile incentive mechanisms and study the interplay between these incentives and the storage-, and network-parameters. We quantify income-fairness using multiple model instantiations and propose effective methods to reduce inequalities introduced by gateways. The Tit-for-Token framework can help system designers to improve the design of incentive mechanisms. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline \#_originators_ & 5 & 100 & 5 & 100 \\ & \multicolumn{2}{c|}{tok per chunk} & \multicolumn{2}{c|}{internal hops} \\ \hline random with recip. & 11.6 & 11.1 & 0.1\% & 1.6\% \\ clique no recip. & 11.0 & 7.0 & 5.9\% & 38.4\% \\ clique with recip. & 10.1 & 6.5 & 17.5\% & 45.8\% \\ external clique & 9.7 & 5.9 & – & – \\ \hline \end{tabular} \end{table} Table 1: Centralization risk through originator cliques ## Acknowledgements We express our gratitude to the following individuals and organizations for their valuable contributions and support during the course of this research: * Filip B. Gotten, Rasmus Oglend, and Torjus J. Knudsen, whose dedicated work significantly improved our simulation tool as part of their bachelor theses. * The Swarm community, with special appreciation to Daniel Nagy, for their insightful discussions on an early version of this work. * Derouich Abdessalam for his valuable contribution and feedback concerning the shuffling solution. We also acknowledge the financial support provided by the BBChain project, granted under the Research Council of Norway (grant 274451). Figure 8: Income-Fairness of different shuffling variants for \(0.5\%\) originators, each requesting \(2,000\) chunks per second with \(\omega=16\), pairwise limited free service and reciprocity enabled.
2310.07786
Non-Stationary Contextual Bandit Learning via Neural Predictive Ensemble Sampling
Real-world applications of contextual bandits often exhibit non-stationarity due to seasonality, serendipity, and evolving social trends. While a number of non-stationary contextual bandit learning algorithms have been proposed in the literature, they excessively explore due to a lack of prioritization for information of enduring value, or are designed in ways that do not scale in modern applications with high-dimensional user-specific features and large action set, or both. In this paper, we introduce a novel non-stationary contextual bandit algorithm that addresses these concerns. It combines a scalable, deep-neural-network-based architecture with a carefully designed exploration mechanism that strategically prioritizes collecting information with the most lasting value in a non-stationary environment. Through empirical evaluations on two real-world recommendation datasets, which exhibit pronounced non-stationarity, we demonstrate that our approach significantly outperforms the state-of-the-art baselines.
Zheqing Zhu, Yueyang Liu, Xu Kuang, Benjamin Van Roy
2023-10-11T18:15:55Z
http://arxiv.org/abs/2310.07786v2
# Non-Stationary Contextual Bandit Learning via Neural Predictive Ensemble Sampling ###### Abstract Real-world applications of contextual bandits often exhibit non-stationarity due to seasonality, serendipity, and evolving social trends. While a number of non-stationary contextual bandit learning algorithms have been proposed in the literature, they excessively explore due to a lack of prioritization for information of enduring value, or are designed in ways that do not scale in modern applications with high-dimensional user-specific features and large action set, or both. In this paper, we introduce a novel non-stationary contextual bandit algorithm that addresses these concerns. It combines a scalable, deep-neural-network-based architecture with a carefully designed exploration mechanism that strategically prioritizes collecting information with the most lasting value in a non-stationary environment. Through empirical evaluations on two real-world recommendation datasets, which exhibit pronounced non-stationarity, we demonstrate that our approach significantly outperforms the state-of-the-art baselines. ## 1 Introduction Contextual bandit learning algorithms have seen rapid adoptions in recent years in a number of domains (Bouneffouf and Rish, 2019), from driving personalized recommendations (Li et al., 2010) to optimizing dyanmic advertising placements (Schwartz et al., 2017). The primary objective of these algorithms is to strategically select actions to acquire information about the environment in the most cost-effective manner, and use that knowledge to guide subsequent decision-making. Thanks in part to the historical development in this field, many of these algorithms are designed for a finite-horizon experiment with the environment remaining relatively stationary throughout. However, real-world environments are rife with non-stationarity (Ditzler et al., 2015; Elena et al., 2021), as a result of seasonality (Keerthika and Saravanan, 2020; Hwangbo et al., 2018), serendipity (Kotkov et al., 2016, 2018), or evolving social trends (Abdollahpouri et al., 2019; Canamares and Castells, 2018). To make matters worse, many practical contextual bandit systems, such as these commonly used in a recommendation engine, operate in a continuous manner over a long, or even indefinite time horizon, further exposing the learning algorithm to non-stationarity that is bound to manifest over its lifetime. Indeed, when applied to non-stationary environments, traditional contextual bandit learning algorithms designed with stationarity in mind are known to yield sub-optimal performance (Trovo et al., 2020; Russac et al., 2020). Figure 1: NeuralPES Regret in Nonstationary Contextual Bandits The goal of this paper is to study the design of contextual bandit algorithms that not only successfully navigate a non-stationary environment, but also scale to real-world production environments. Extending classic bandit algorithms to a non-stationary setting has received sustained attention in recent years (Kocsis and Szepesvari, 2006; Garivier and Moulines, 2008; Raj and Kalyani, 2017; Trovo et al., 2020). A limitation in these existing approaches, however, is that their primary exploration mechanisms still resemble the stationary version of the algorithm, and non-stationarity is only taken into account by discounting the importance of past observations, which often leads to excessive exploration. As pointed out by Liu et al. 
(2023), exploration designs intended for stationary environments tend to focus on resolving the uncertainty surrounding an action's current quality, and as such, suffer sub-optimal performance for failing to prioritize collecting information that would be of more enduring value in a non-stationary environment. In response, Liu et al. (2023) proposed the predictive sampling algorithm that takes information durability into account, and demonstrated an impressive performance improvement over existing solutions. However, the predictive sampling algorithm, among many nonstationary contextual bandit learning algorithm we discuss in the related work section, suffers from their scalability and does not scale with modern deep learning systems. In this work, we take a step towards solving large-scale nonstationary contextual bandit problems by introducing Neural Predictive Ensemble Sampling (NeuralPES), the first non-stationary contextual bandit learning algorithm that is scalable with modern neural networks and effectively explores in a non-stationary environment by seeking lasting information. Theoretically, we establish that NeuralPES emphasizes the acquisition of lasting information, information that remains relevant for a longer period of time. Empirically, we validate the algorithm's efficacy in two real-world recommendation datasets, spanning across \(1\) week and \(2\) months of time, respectively, and exhibiting pronounced non-stationarity. Our findings reveal that our algorithm surpasses other state-of-the-art neural contextual bandit learning algorithms, encompassing both stationary and non-stationary variants. As a spoiler for our empirically results, see Figure 1 for the average regret of our agent compared to other baselines on an AR(1) nonstationary contextual bandit environment. ## 2 Related Work **Non-Stationary Bandit Learning.** A large number of non-stationary bandit learning algorithms rely on heuristic approaches to reduce the effect of past data. These heuristics include maintaining a sliding window Cheung et al. (2019, 2022); Garivier and Moulines (2008); Russac et al. (2020); Srivastava et al. (2014); Trovo et al. (2020), directly discounting the weight of past rewards by recency Bogunovic et al. (2016); Garivier and Moulines (2008); Russac et al. (2020); Kocsis and Szepesvari (2006), restarting the algorithm periodically or with a fixed probability at each time Auer et al. (2019); Allesiardo et al. (2017); Besbes et al. (2019); Bogunovic et al. (2016); Wei et al. (2016); Zhao et al. (2020), restarting upon detecting a change point Abbasi-Yadkori et al. (2022); Allesiardo and Feraud (2015); Auer et al. (2019); Allesiardo et al. (2017); Besson and Kaufmann (2019); Cao et al. (2019); Chen et al. (2019); Ghatak (2021); Ghatak et al. (2021); Hartland et al. (2006); Liu et al. (2018); Luo et al. (2018); Mellor and Shapiro (2013), and more complex heuristics (Gupta et al., 2011; Kim and Tewari, 2020; Raj and Kalyani, 2017; Viappiani, 2013). These algorithms adapt stationary bandit learning algorithms like Thompson sampling (TS) (Thompson, 1933), Upper Confidence Bound (UCB) (Lai and Robbins, 1985), and exponential-weight algorithms (Rexp3) (Auer et al., 2002; Freund and Schapire, 1997) using aforementioned heuristics to reduce the impact of past data and encourage continual exploration. However, they often lack intelligent mechanisms for seeking lasting information during exploration. 
While predictive sampling (Liu et al., 2023) seeks for lasting information, it does not efficiently scale. **Deep Neural Network-Based Bandit Algorithms.** In practical applications of bandit learning, both the set of contexts and the set of actions can be large. A number of algorithms (Gu et al., 2021; Jia et al., 2022; Kassraie and Krause, 2022; Riquelme et al., 2018; Salgia, 2023; Su et al., 2023; Xu et al., 2022; Zhang et al., 2020; Zhou et al., 2020; Zhu and Van Roy, 2023b) utilize the capacity of deep neural networks to generalize across actions and contexts. These algorithms are designed for stationary environments. While Allesiardo et al. (2014) proposes a deep neural-network based algorithm for non-stationary environments, it does not intelligently seek for lasting information. ## 3 Contextual Bandits This section formally introduces contextual bandits, and other related concepts and definitions. We first introduce contextual bandits. **Definition 1** (**Contextual Bandit)**.: A contextual bandit \(\mathcal{E}\) with a finite set of contexts \(\mathcal{C}\) and a finite set of actions \(\mathcal{A}\) is characterized by three stochastic processes: the reward process \(\{R_{t}\}_{t\in\mathbb{N}}\) with state space \(\mathbb{R}^{|\mathcal{C}|}\times\mathbb{R}^{|\mathcal{A}|}\), the contexts \(\{C_{t}\}_{t\in\mathbb{N}}\) with state space \(\mathcal{C}\), and the sequence of available action sets \(\{\mathcal{A}_{t}\}_{t\in\mathbb{N}}\) with state space \(2^{\mathcal{A}}\). We use \(\mathcal{E}=(\{R_{t}\}_{t\in\mathbb{N}},\{C_{t}\}_{t\in\mathbb{N}},\{\mathcal{ A}_{t}\}_{t\in\mathbb{N}})\) to denote the bandit. At each timestep \(t\in\mathbb{N}\), an agent is presented with context \(C_{t}\) and the set of available actions \(\mathcal{A}_{t}\). Upon selecting action \(a\in\mathcal{A}_{t}\), the agent observes a reward of \(R_{t+1,C_{t},a}\). ### Linear Contextual Bandits In many practical applications, both the context set and the action set are large. To enable effective generalization across these sets, certain structural assumptions on how the rewards are generated come into play. In this regard, the reward \(R_{t,c,a}\) can be described as a function of a feature vector \(\phi(c,a)\), which captures relevant contextual information in context \(c\in\mathcal{C}\) and action information in action \(a\in\mathcal{A}\). To exemplify this structure, let us introduce the linear contextual bandit. **Example 1** (**Linear Contextual Bandit)**.: A linear contextual bandit is a contextual bandit with feature mapping \(\phi:\mathcal{C}\times\mathcal{A}\rightarrow\mathbb{R}^{d}\), a stochastic process \(\{\theta_{t}\}_{t\in\mathbb{N}}\) with state space \(\mathbb{R}^{d}\). For all \(t\in\mathbb{N}\), \(c\in\mathcal{C}\), and \(a\in\mathcal{A}_{t}\), the reward \(R_{t,c,a}\) satisfies that \(\mathbb{E}[R_{t,c,a}|\phi,\theta_{t}]=\phi(c,a)^{\top}\theta_{t}\). ### Policy and Performance Let \(\mathcal{H}\) denote the set of all sequences of a finite number of action-observation pairs. Specifically, the observation at timestep \(0\) consists of only the initial context and available action set, and each following observation consists of a reward, a context, and an available action set. We refer to the elements of \(\mathcal{H}\) as _histories_. We next introduce a policy. **Definition 2**.: A policy \(\pi:\mathcal{H}\rightarrow\mathcal{P}(\mathcal{A})\) is a function that maps each history in \(\mathcal{H}\) to a probability distribution over the action set \(\mathcal{A}\). 
A policy \(\pi\) assigns, for each realization of history \(h\in\mathcal{H}\), a probability \(\pi(a|h)\) of choosing an action \(a\) for all \(a\in\mathcal{A}\). We require that \(\pi(a|h)=0\) for \(a\notin\mathcal{A}_{t}\), where \(\mathcal{A}_{t}\) is the available action set defined by \(h\). For any policy \(\pi\), we use \(A_{t}^{\pi}\) to denote the action selected at time \(t\) by an agent that executes policy \(\pi\), and \(H_{t}^{\pi}\) to denote the history generated at timestep \(t\) as an agent executes policy \(\pi\). Specifically, we let \(H_{0}^{\pi}\) be the empty history. We let \(A_{t}^{\pi}\) be such that \(\mathbb{P}(A_{t}^{\pi}\in\cdot|H_{t}^{\pi})=\pi(\cdot|H_{t}^{\pi})\) and that \(A_{t}^{\pi}\) is independent of \(\{C_{t}\}_{t\in\mathbb{N}}\), \(\{R_{t}\}_{t\in\mathbb{N}}\), and \(\{\mathcal{A}_{t}\}_{t\in\mathbb{N}}\) conditioned on \(H_{t}^{\pi}\), and let \(H_{t+1}^{\pi}=(C_{0},\mathcal{A}_{0},A_{0}^{\pi},R_{1,C_{0},A_{0}^{\pi}}, \ldots,A_{t}^{\pi},R_{t+1,C_{t},A_{t}^{\pi}},C_{t+1},\mathcal{A}_{t+1})\). For all policies \(\pi\), all bandits \(\mathcal{E}=(\{R_{t}\}_{t\in\mathbb{N}},\{C_{t}\}_{t\in\mathbb{N}},\{ \mathcal{A}_{t}\}_{t\in\mathbb{N}})\), and \(T\in\mathbb{N}\), the expected cumulative reward and the long-run average expected reward are \[\mathrm{Return}(\mathcal{E};T;\pi)=\sum_{t=0}^{T-1}\mathbb{E}\left[R_{t+1,C_{t},A_{t}^{\pi}}\right];\overline{\mathrm{Return}}(\mathcal{E};\pi)=\limsup_{T \rightarrow+\infty}\frac{1}{T}\mathrm{Return}(\mathcal{E};T;\pi).\] The average expected reward is particularly useful in evaluating agent performance when both the reward process \(\{R_{t}\}_{t\in\mathbb{N}}\) and the context process \(\{C_{t}\}_{t\in\mathbb{N}}\) are stationary stochastic processes. In such cases, \(\overline{\mathrm{Return}}(\mathcal{E};\pi)=\mathbb{E}\left[R_{t+1,C_{t},A_{t}^ {\pi}}\right],\) which is independent of \(t\). ## 4 Neural Predictive Ensemble Sampling In this section, we introduce a novel algorithm for non-stationary contextual bandit learning. The algorithm has several salient features below. See visualization of the architecture in Fig. 2 **Use Deep Neural Network Ensemble as Uncertainty Representation for Exploration.** In contextual bandit learning, an agent should intelligently balance exploration and exploitation. Thompson sampling (TS) Thompson (1933) stands as one of the most popular bandit learning algorithms, backed by well-established theoretical guarantees Agrawal and Goyal (2012); Russo and Van Roy (2014) and good empirical performance Chapelle and Li (2011); Zhu and Van Roy (2023). To adopt TS in complex settings, Ensemble sampling Lu and Van Roy (2017) is introduced an efficient approximation and is also compatible with deep neural networks. Importantly, ensemble sampling has shown both theoretical effectiveness and superior empirical performance with neural networks Lu et al. (2018); Qin et al. (2022); Osband et al. (2016). Therefore, we adopt a deep ensemble architecture. **Seek Out Lasting Information.** In a non-stationary environment, a continuous stream of new information emerges. As an agent strives to balance between exploration and exploitation, an important consideration involves prioritizing the acquisition of information that remains relevant for a longer period of time Liu et al. (2023). We introduce an algorithm that effectively prioritizes seeking such lasting information. 
Notably, our algorithm, NeuralPES, avoids the introduction of assumptions on how the rewards are generated or that of additional tuning parameters to adjust the extent of exploration. Indeed, it determines the exploration extent by training a deep neural network. To our knowledge, NeuralPES is the first algorithm that both suitably prioritizes seeking lasting information and scales to complex environments of practical interest. ### Neural Ensemble Sampling Before delving into the specific design of our algorithm, let us introduce a baseline algorithm which can be thought of as a deep neural network-based TS. This algorithm is referred to as the Neural Ensemble Sampling (NeuralEnsemble-Sampling). At each timestep \(t\in\mathbb{N}\), a NeuralEnsembleSampling agent (See Algorithm 1): 1. Trains an ensemble of \(M\) reward models, updating weights using stochastic gradient descent. 2. Samples \(m\sim\mathrm{unif}(\{1,...,M\})\), and uses the \(m\)-th reward model to predict a reward at the next timestep \(\hat{R}_{t+1,C_{t},a}\). 3. Selects an action that maximizes \(\hat{R}_{t+1,C_{t},a}\). ``` 1Input: Horizon \(T\), number of particles in each ensemble \(M\), loss function \(\mathcal{L}\), replay buffer size \(K\), sequence model input size \(L\), number of gradient steps \(\tau,\tau_{\mathrm{seq}}\), step sizes \(\alpha,\alpha_{\mathrm{seq}}\), minibatch sizes \(K^{\prime}\), Initialize: Let replay buffer \(\mathcal{B}=\emptyset\), and randomly initialize weights \(\psi_{1:M}\), \(w_{1:M,0}\), and \(w_{1:M,0}^{\mathrm{seq}}\), for\(t=0,1,\ldots,T-1\)do 2for\(m=1,2,\ldots,M\)do 3 Let \((\psi_{m},w_{m,t})\leftarrow\mathrm{TrainRewardNN}(\mathcal{B},\mathcal{L}, \psi_{m},w_{m,t-1},\tau,\alpha,K^{\prime})\) sample: \(m\sim\mathrm{unif}(\{1,...,M\})\) select: \(A_{t}\in\arg\max_{a\in\mathcal{A}_{t}}f(w_{m,t};b(\psi_{m};C_{t},a))\) 4observe: \(R_{t+1,C_{t},A_{t}}\), \(C_{t+1}\), \(\mathcal{A}_{t+1}\) update: Update \(\mathcal{B}\) to keep the most recent \(K\) tuples of context, action, reward, and timestep data. ``` **Algorithm 1**NeuralEnsembleSampling ``` 1Input: Replay buffer \(\mathcal{B}\), sequence model weights network weights \(\psi\), historical last-layer weights \(w_{1:t-1}\), number of gradient steps \(\tau\), step size \(\alpha\), minibatch size \(K^{\prime}\). for\(i=0,1,\ldots,\tau-1\)do 2sample: a minibatch \(\mathcal{B}^{\prime}\) of size \(K^{\prime}\) from replay buffer \(\mathcal{B}\) update \((\psi,w)\) following Equation 1 return:\(\psi\), \(w\) ``` **Algorithm 2**TrainRewardNN ``` 1Input: Replay buffer \(\mathcal{B}\), sequence model weights network weights \(w^{\mathrm{seq}}\), historical last-layer weights \(w_{1:t-1}\), number of gradient steps \(\tau\), step size \(\alpha\), number of future steps \(x\). for\(i=0,1,\ldots,\tau-1\)do 2sample: \(j\sim\mathrm{unif}(\{L,...,t-1\})\) update \(w^{\mathrm{seq}}\) following Equation 2 return:\(w^{\mathrm{seq}}\) ``` **Algorithm 3**TrainSequenceNN The **Reward Model** Figure 2 presents a visualization of the ensemble of reward models. The ensemble has \(M\) particles, each consists of a base network \(b\) defined by weights \(\psi_{m}\), and last layer \(f\) defined by weights \(w_{m,t}\). Each particle in the Figure 2: Visualization of three components of NeuralPES: reward, sequence, and predictive model. ensemble is a reward model that aims to predict the reward \(R_{t+1,c,a}\) given context and action pair \((c,a)\). 
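As a concrete reference point, the sketch below implements the steps just described for the NeuralEnsembleSampling baseline with a small PyTorch ensemble. Network sizes, the optimizer, and the squared-error loss are illustrative assumptions; each particle shares the same replay buffer and differs only through its random initialization and minibatch draws, and the timestep bookkeeping of Algorithm 1 is omitted for brevity.

```python
# Minimal sketch of the NeuralEnsembleSampling baseline described above.
import random
import torch
import torch.nn as nn

class RewardParticle(nn.Module):
    """One ensemble member: base network b(psi; c, a) plus last layer f(w; .)."""
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.base = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        return self.head(self.base(x)).squeeze(-1)

class NeuralEnsembleSampling:
    def __init__(self, feat_dim, M=10, buffer_size=10_000, lr=1e-3):
        self.particles = [RewardParticle(feat_dim) for _ in range(M)]
        self.opts = [torch.optim.SGD(p.parameters(), lr=lr) for p in self.particles]
        self.buffer, self.buffer_size = [], buffer_size

    def act(self, action_feats):
        """action_feats: (num_actions, feat_dim) tensor of phi(C_t, a) rows."""
        m = random.randrange(len(self.particles))       # sample one particle
        with torch.no_grad():
            return int(self.particles[m](action_feats).argmax())

    def observe(self, feat, reward):
        self.buffer.append((feat, float(reward)))
        self.buffer = self.buffer[-self.buffer_size:]    # keep the most recent K

    def train_step(self, batch_size=64):
        if not self.buffer:
            return
        for model, opt in zip(self.particles, self.opts):
            batch = random.choices(self.buffer, k=min(batch_size, len(self.buffer)))
            x = torch.stack([f for f, _ in batch])
            r = torch.tensor([rew for _, rew in batch])
            loss = ((model(x) - r) ** 2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
```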
Specifically, at each timestep \(t\in\mathbb{N}\), the \(m\)-th reward model predicts \(f(w_{m,t};b(\psi_{m};c,a))\). We maintain a replay buffer \(\mathcal{B}\) of the most recent \(K\) tuples of context, action, reward, and timestep data. At each timestep, the network weights \(w_{1:M}\) and \(\psi_{1:M}\) are trained via repeatedly sampling a minibatch \(\mathcal{B}^{\prime}\) of size \(K^{\prime}\), and letting \[(\psi_{m},w_{m})\leftarrow(\psi_{m},w_{m})-\alpha\sum_{(c,a,r,j)\in\mathcal{B }^{\prime}}\nabla_{(\psi_{m},w_{m})}\mathcal{L}(f(w_{m};b(\psi_{m};c,a)),r) \tag{1}\] for each \(m\in[M]\). Note that we use \(w_{m,t}\) to denote the last-layer weight of the \(m\)-th particle at the \(t\)-th timestep; when it is clear that we are considering a single timestep, we drop the subscript \(t\). ### Predicting Future Reward via Sequence Modeling Given the non-stationarity of the environment, a natural choice to adapt to the changing dynamics is to predict future reward model weights via sequence models, and use the predictive future reward model to select actions. We refer to this agent as the Neural Sequence Ensemble agent At each timestep \(t\in\mathbb{N}\), a Neural Sequence Ensemble agent proceeds as the following: 1. Trains an ensemble of \(M\) reward models and an ensemble of \(M\) sequence models through updating their weights using stochastic gradient descent. 2. Samples \(m\sim\mathrm{unif}(\{1,...,M\})\), uses the \(m\)-th sequence model to predict a future reward model one step ahead of time based on past reward models, and uses this predicted future model to predict a reward at the next timestep \(\hat{R}_{t+1,C_{t},a}\). 3. Selects an action that maximizes \(\hat{R}_{t+1,C_{t},a}\). **The Sequence Model** Figure 2 presents a visualization of the ensemble of the sequence models as well. The ensemble consists of \(M\) particles. Each particle is a sequence model implemented as a recurrent neural network that aims to predict future reward model weights \(w_{m,t+1}\) given historical ones \(w_{m,t-L+1:t}\). At each timestep \(t\in\mathbb{N}\), the \(m\)-th sequence model predicts \(f^{\text{seq}}(w^{\text{seq}}_{m,t};w_{m,t-L+1,t})\). The network weights \(w^{\text{seq}}_{m}\) are trained via repeatedly sampling \(j\) from \(\{L,...,t-1\}\) and letting \[w^{\text{seq}}_{m}\gets w^{\text{seq}}_{m}-\alpha\nabla_{w^{\text{seq}}_{ m}}\mathcal{L}_{\text{MSE}}(f^{\text{seq}}(w^{\text{seq}}_{m};w_{m,j-L+1;j}),w_{m,j+1 }). \tag{2}\] ### Neural Predictive Ensemble Sampling Let us now present NeuralPES. A key distinction between this algorithm and NeuralEnsemble lies in its ability to prioritize information that maintains relevance over a longer period of time. This is achieved through incorporating a new model which we refer to as the predictive model. Specifically, the predictive model is designed to take a function of a context-action pair \((c,a)\) and a future reward model as input. Its purpose is to generate a prediction for the upcoming reward \(R_{t+1,c,a}\). When maintaining an ensemble of predictive models for exploration, an agent can suitably prioritize information based on how lasting the information is. At each timestep \(t\in\mathbb{N}\), a NeuralPES agent (see Algorithm 4): 1. Trains an ensemble of \(M\) reward models, an ensemble of \(M\) sequence models, and an ensemble of \(M\) predictive models 2. Samples \(m\sim\mathrm{unif}(\{1,...,M\})\), and uses the \(m\)-th sequence model to predict a future reward model two steps ahead of time based on past models. 3. 
Takes this predicted future model as part of input to the \(m\)-th predictive model, and predicts a reward at the next timestep \(\hat{R}_{t+1,C_{t},a}\). 4. Selects an action that maximizes \(\hat{R}_{t+1,C_{t},a}\). **The Predictive Model** Figure 2 also presents a visualization of the ensemble of the predictive models. The ensemble consists of \(M\) particles. Each particle in the ensemble is a predictive model that aims to predict the next reward \(R_{t+1,c,a}\) provided context-action pair \((c,a)\) and a future reward model of two timesteps ahead of time. Specifically, at each timestep \(t\in\mathbb{N}\), the \(m\)-th predictive model aims to predict \(R_{t+1,c,a}\) by taking an intermediate representation, i.e., \(\hat{w}_{m,t+2}\odot b(\psi;c,a)\), as input. We maintain a replay buffer \(\mathcal{B}\) of the most recent \(K\) tuples of context, action, reward, and timestep data. The network weights \(w^{\text{pred}}_{1:M}\) are trained via repeatedly sampling a minibatch \(\mathcal{B}^{\prime\prime}\) of size \(K^{\prime\prime}\) \[w^{\text{pred}}_{m}\gets w^{\text{pred}}_{m}-\alpha\sum_{(c,a,r,j)\in \mathcal{B}^{\prime\prime}}\nabla_{w^{\text{pred}}_{m}}\mathcal{L}(f^{\text{ pred}}(w^{\text{pred}}_{m};w_{m,j+2}\odot b(\psi_{m};c,a)),r) \tag{3}\] for each \(m\in[M]\). Note that we use \(w^{\text{pred}}_{m,t}\) to denote the last-layer weight of the \(m\)-th particle at the \(t\)-th timestep; when it is clear that we are considering a single timestep, we drop the subscript \(t\). **Regularization to Address Loss of Plasticity** To address the loss of plasticity, we regularize each particle's weight towards its initial weight in the last layer of the reward model ensemble and the predictive model ensemble Kumar et al. (2023). (1) and (3) now becomes \[\begin{split}&(\psi_{m},w_{m})\leftarrow(\psi_{m},w_{m})-\alpha \sum_{(c,a,r,j)\in\mathcal{B}^{\prime}}\nabla_{(\psi_{m},w_{m})}\left\{ \mathcal{L}(f(w_{m};b(\psi_{m};c,a)),r)+\|w_{m}-w_{m,0}\|_{2}\right\}.\\ & w^{\text{pred}}_{m}\gets w^{\text{pred}}_{m}-\alpha\sum_{(c,a,r,j)\in\mathcal{B}^{\prime\prime}}\nabla_{w^{\text{pred}}_{m}}\left\{ \mathcal{L}(f^{\text{pred}}(w^{\text{pred}}_{m};w_{m,j+2}\odot b(\psi_{m};c,a) ),r)+\|w^{\text{pred}}_{m}-w^{\text{pred}}_{m,0}\|_{2}\right\}.\end{split} \tag{4}\] ### Theoretical Insights and Analysis We provide intuition and evidence that NeuralPES's prioritizes the acquisition of lasting information. #### 4.4.1 NeuralPES Prioritizes Lasting Information We focus on comparing NeuralPES and NeuralEnsemble in linear contextual bandits. In such contexts, NeuralPES can be viewed as a neural network-based implementation of an algorithm which we refer to as linear predictive sampling (LinPS); NeuralEnsemble can be viewed as a neural network-based implementation of TS. In a linear contextual bandit, a LinPS agent carries out the following three-step procedure at each timestep, and a TS agent carries out a similar procedure, replacing \(\theta_{t+2}\) with \(\theta_{t+1}\): 1. samples \(\hat{\theta}_{t+2}\) from the posterior \(\mathbb{P}(\theta_{t+2}\in\cdot|H_{t})\), and \(\hat{\phi}_{t}\) from the posterior \(\mathbb{P}(\phi\in\cdot|H_{t})\). 2. estimates the reward \(\hat{R}_{t+1,C_{t},a}=\mathbb{E}[R_{t+1,C_{t},a}|H_{t},\phi=\hat{\phi}_{t}, \theta_{t+2}=\hat{\theta}_{t+2}]\), 3. and selects an action that maximizes the sample \(A_{t}\in\arg\max_{a\in\mathcal{A}_{t}}\hat{R}_{t+1,C_{t},a}\). 
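To see what this three-step procedure does in the simplest possible setting, the sketch below instantiates both TS and LinPS in a linear-Gaussian bandit whose parameter follows an AR(1) process, with a diagonal Gaussian posterior over the upcoming parameter. The diagonal posterior, the omitted Kalman-filter bookkeeping that would maintain it, and the specific numbers are illustrative assumptions; the point is that LinPS's effective parameter shrinks toward the posterior mean when \(\gamma\) is small, so information that vanishes quickly is not explored for.

```python
# Sketch contrasting TS and LinPS action selection in an AR(1) linear-Gaussian
# bandit. The posterior over the upcoming parameter theta_{t+1} | H_t is assumed
# to be N(mu, diag(s2)); gamma is the per-coordinate AR(1) coefficient.
import numpy as np

def ts_action(phi, mu, s2, rng):
    """Thompson sampling: sample theta_{t+1} and act greedily with respect to it."""
    theta = rng.normal(mu, np.sqrt(s2))
    return int(np.argmax(phi @ theta))

def linps_action(phi, mu, s2, gamma, rng):
    """LinPS: sample theta_{t+2}, then act on E[theta_{t+1} | theta_{t+2}, H_t]."""
    var_next = gamma**2 * s2 + (1.0 - gamma**2)          # Var(theta_{t+2} | H_t)
    sample_next = rng.normal(gamma * mu, np.sqrt(var_next))
    gain = gamma * s2 / var_next                          # Gaussian regression coefficient
    theta_eff = mu + gain * (sample_next - gamma * mu)    # conditional mean given sample
    return int(np.argmax(phi @ theta_eff))

rng = np.random.default_rng(0)
phi = rng.normal(size=(10, 4))          # features of 10 candidate actions
mu, s2 = np.zeros(4), np.ones(4)
print(ts_action(phi, mu, s2, rng), linps_action(phi, mu, s2, gamma=0.2, rng=rng))
# With small gamma, theta_eff stays close to mu: transient information about the
# current parameter is not worth exploring for, so LinPS mostly exploits.
```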
The procedures are carried out by approximating \(\mathbb{P}(\theta_{t+1}\in\cdot|H_{t})\) using the ensemble of the last layers of the reward models, approximating \(\mathbb{P}(\phi\in\cdot|H_{t})\) using the ensemble of the base models, approximating \(\mathbb{P}(\theta_{t+2}\in\cdot|H_{t})\) utilizing the sequence models; the reward estimation step of LinPS utilizes the predictive models. To compare the behaviors of NeuralPES and NeuralEnsemble, we can compare LinPS with TS. It is worth noting that both algorithms trade off exploration and exploitation in a similar fashion, yet TS trades off between optimizing the immediate reward and learning about \(\phi\) and \(\theta_{t+1}\) and LinPS trades off between optimizing the immediate reward and learning about \(\phi\) and \(\theta_{t+2}\). If \(\theta_{t+1}=\theta_{t}\) for all \(t\in\mathbb{N}\), then the environment is stationary and the two algorithms are equivalent. In general, compared with \(\theta_{t+1}\), \(\theta_{t+2}\) better represents valuable information that is helpful for making future decisions. Aiming to learn about \(\theta_{t+2}\), LinPS strategically prioritizes information that is still valuable in the next timestep and does not acquire information for which its value immediately vanishes. #### 4.4.2 Theoretical Analysis Next, we present a regret analysis that offers further evidence of LinPS's effectiveness in prioritizing lasting information. In particular, we demonstrate that LinPS excels in environments where a substantial amount of information is transient. This success stems from its strategic approach to acquire less of such information. We assume that the action set is known and remains unchanged, \(\mathcal{A}_{t}=\mathcal{A}\) for all \(t\in\mathbb{N}\), and that \(\phi\) is known. We first introduce the notion of regret. **Definition 3** (**Regret)**.: For all policies \(\pi\) and \(T\in\mathbb{N}\), the regret and long-run average regret associated with a policy \(\pi\) over \(T\) timesteps in a linear contextual bandit is \(\operatorname{Regret}(T;\pi)=\sum_{t=0}^{T-1}\mathbb{E}\left[R_{t+1,*}-R_{t+1,C_{t},A_{t}^{\pi}}\right],\) and \(\overline{\operatorname{Regret}}(\pi)=\limsup_{T\to+\infty}\frac{1}{T} \overline{\operatorname{Regret}}(T;\pi)\), respectively, where \(R_{t+1,*}=\max_{a\in\mathcal{A}}\mathbb{E}[R_{t+1,C_{t},a}|\theta_{t}]\). We use \(\operatorname{Regret}(T)\) and \(\overline{\operatorname{Regret}}\) to denote the regret of LinPS and present a regret bound on LinPS. **Theorem 1**.: **(LinPS Regret Bound)** _In a linear contextual bandit, suppose \(\{\theta_{t}\}_{t\in\mathbb{N}}\) is a reversible Markov chain. For all \(T\in\mathbb{N}\), the regret and the long-run average regret of LinPS is upper-bounded by \(\operatorname{Regret}(T)\leq\sqrt{\frac{d}{2}}T\left[\mathbb{I}(\theta_{2}; \theta_{1})+(T-1)\mathbb{I}(\theta_{3};\theta_{2}|\theta_{1})\right]\) and \(\overline{\operatorname{Regret}}\leq\sqrt{\frac{d}{2}}\mathbb{I}(\theta_{3}; \theta_{2}|\theta_{1})\)._ The key proof idea essentially follows from that of (Liu et al., 2022) and (Russo and Van Roy, 2016). For the sake of completeness, we include the proof in the appendix. It is worth noting that when \(\theta_{t+1}=\theta_{t}\) for all \(t\in\mathbb{N}_{0}\), we have \(\operatorname{Regret}(T)\leq\sqrt{\frac{d}{2}}T\mathbb{H}(\theta_{1})\). We recover a regret bound for TS in a stationary linear contextual bandit. 
In the other extreme, if \(\theta_{t}\) changes very frequently, say if \(\{\theta_{t}\}_{t\in\mathbb{N}}\) is an i.i.d. sequence each with non-atomic distribution, then the regret of LinPS is zero that LinPS achieves optimal. This suggests that when information about \(\theta_{t}\) is not lasting, LinPS stops acquiring this information and is optimal. To specialize the bound to a particular example, we introduce linear contextual bandits with abrupt changes. Similar models were introduced by (Mellor and Shapiro, 2013) and (Liu et al., 2023). **Example 2** (**Linear Contextual Bandit with Abrupt Changes)**.: For all \(i\in[d]\), let \(q_{i}\in[0,1]\), and \(\{B_{t,i}\}_{t\in\mathbb{N}}\) be an i.i.d. sequence of Bernoulli r.v.'s each with success probability \(q_{i}\). For all \(i\in[d]\), let \(\{\beta_{t,i}\}_{t\in\mathbb{N}}\) be an i.i.d. sequence. Consider a linear contextual bandit where for all \(i\in[d]\), \(\theta_{1,i}=\beta_{1,i}\), and \(\{\theta_{t,i}\}_{t\in\mathbb{N}}\) transitions according to \(\theta_{t+1,i}=B_{t,i}\beta_{t+1,i}+(1-B_{t,i})\theta_{t,i}\). **Corollary 1**.: **(LinPS Regret Bound in Example 2)** _For all \(T\in\mathbb{N}\), the regret and long-run average regret of LinPS in a linear contextual bandit with abrupt changes is upper-bounded by \(\operatorname{Regret}(T)\leq\sqrt{\frac{d}{2}}T\left[\sum_{i=1}^{d}(1-q_{i}) \mathbb{H}(\theta_{1,i})+(T-1)\sum_{i=1}^{d}\left[2\mathbb{H}(q_{i})+q_{i}(1-q_ {i})\mathbb{H}(\theta_{1,i})\right]\right]\), and \(\overline{\operatorname{Regret}}\leq\sqrt{\frac{d}{2}}\sum_{i=1}^{d}\left[2 \mathbb{H}(q_{i})+q_{i}(1-q_{i})\mathbb{H}(\theta_{1,i})\right],\) where \(\mathbb{H}(q_{t,i})\) denotes to the entropy of of a Bernoulli random variable with success probability \(q_{t,i}\)._ We can use Theorem 1 to investigate how the performance of LinPS depends on various key parameters of the bandit. On one hand, when \(q_{i}=0\) for all \(i\in[d]\), i.e., when the environment is stationary, the bound becomes \(\sqrt{\frac{d}{2}}T\mathbb{H}(\theta_{1})\), which recovers a sublinear regret bound for TS in a stationary environment. On the other hand, as the \(q_{i}\)'s approach \(1\) the regret bound approaches \(0\), suggesting that LinPS performs well. Recall that this is a setting where \(\theta_{t}\) are redrawn frequently, and the information associated with \(\theta_{t}\) is not enduring. Our regret bound further confirms that LinPS continues to excel in such environments. We consider another example, which models bandits with "smooth" changes. Similar bandits have been introduced by (Burtini et al., 2015; Gupta et al., 2011; Kuhn et al., 2015; Kuhn and Nazarathy, 2015; Liu et al., 2023; Slivkins and Upfal, 2008). **Example 3**.: **[AR(1) Linear Contextual Bandit] Let \(\gamma\in[0,1]^{d}\), with its \(i\)-th coordinate denoted \(\gamma_{i}\). Consider a linear contextual bandit where \(\{\theta_{t,i}\}_{t\in\mathbb{N}}\) transitions independently according to an AR(1) process with parameter \(\gamma_{i}\): \(\theta_{t+1,i}=\gamma_{i}\theta_{t,i}+W_{t+1,i}\), where \(\{W_{t,i}\}_{t\in\mathbb{N}}\) is a sequence of i.i.d. \(\mathcal{N}(0,1-\gamma_{i}^{2})\) r.v.'s and \(\theta_{1,i}\sim\mathcal{N}(0,1)\).** Applying Theorem 1 to an AR(1) linear contextual bandit, we establish the following result. 
**Corollary 2**.: **(LinPS Regret Bound in AR(1) Linear Contextual Bandit)** _For all \(T\in\mathbb{N}\), the regret and long-term average regret of LinPS in an AR(1) linear contextual bandit is upper-bounded by \(\mathrm{Regret}(T)\leq\sqrt{\frac{d}{4}T\left[\sum_{i=1}^{d}\log\left(\frac{1}{ 1-\gamma_{i}^{2}}\right)+\sum_{t=1}^{T-1}\sum_{i=1}^{d}\log\left(1+\gamma_{i} ^{2}\right)\right]},\overline{\mathrm{Regret}}(T)\leq\sqrt{\frac{d}{4}\sum_{ i=1}^{d}\log\left(1+\gamma_{i}^{2}\right)}\) if \(\gamma_{i}<1\) for all \(i\in[d]\)._ The regret bound suggests that LinPS prioritizes the acquisition of lasting information. Specifically, when \(\gamma_{i}=0\) for all \(i\in[d]\), information about all \(\theta_{t,i}\)'s lose their usefulness immediately. In such contexts, LinPS achieves \(0\) regret and is such optimal. In addition, the regret of LinPS remains small when \(\gamma_{i}\) is small for each \(i\in[d]\), suggesting that the algorithms consistently performs well when information about \(\theta_{t,i}\)'s are not durable. ## 5 Experiments In this section, we introduce AR(1) contextual logistic bandit experiment and two experiments built on real-world data. Among the two real-world dataset experiments, one leverages one-week user interactions on Microsoft News website in time order and the other is built on Kuai's short-video platform's two-month user interaction data in time order. We consider Neural Ensemble (Osband et al., 2016), Neural LinUCB (Xu et al., 2022) and Neural Linear (Riquelme et al., 2018) and their sliding window versions (Cheung et al., 2019, 2022; Garivier and Moulines, 2008; Russac et al., 2020; Srivastava et al., 2014; Trovo et al., 2020) (to address nonstationarity in environments) as our baselines for comparison. All experiments are performed on AWS with 1 A100 40GB GPU per experiment, each with 8 CPUs, and each experiment repeated over 20 distinct seeds. To scale the experiments to the large scale experiments, we learn every batch of interactions instead of per interaction, more details in Appendix B.0.1. Constrained by computation, we do not consider Neural UCB (Zhou et al., 2020) and Neural TS (Zhang et al., 2020), given their computation requirement of inverting square matrices with dimensions equal to neural network parameter count. ### AR(1) Contextual Logistic Bandit Following Example 3, An AR(1) contextual logistic bandit changes its reward function to \(R_{t,c,a}\sim\mathrm{Bernoulli}\left(\sigma\left(\phi(c,a)^{\top}\theta_{t} \right)\right)\), all others the same. We set number of actions to 10, and \(d=10\), \(\gamma_{i}=0.99^{i}\). Each entry in \(\theta\) is initialized with \(\mathcal{N}(0,0.01)\). Hyperparameters of the agents are presented in Appendix B.0.2. The average reward is presented in Table 1, and Figure 2(a). 
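A sketch of this environment is given below. The action count, dimension, \(\gamma_{i}=0.99^{i}\), the \(\mathcal{N}(0,0.01)\) initialization of \(\theta\), and the Bernoulli reward through a sigmoid follow the description above; the AR(1) noise scale (set so the process keeps its initial variance) and the standard-normal action features are assumptions, since those details are not restated here.

```python
# Illustrative implementation of the AR(1) contextual logistic bandit used above.
import numpy as np

class AR1LogisticBandit:
    def __init__(self, d=10, num_actions=10, seed=0):
        self.rng = np.random.default_rng(seed)
        self.gamma = 0.99 ** np.arange(1, d + 1)                  # gamma_i = 0.99**i
        self.theta = self.rng.normal(0.0, np.sqrt(0.01), size=d)  # N(0, 0.01) entries
        self.d, self.num_actions = d, num_actions

    def step(self):
        """Advance theta one timestep and return this round's action features."""
        noise_std = np.sqrt((1.0 - self.gamma**2) * 0.01)         # assumed noise scale
        self.theta = self.gamma * self.theta + self.rng.normal(0.0, noise_std)
        return self.rng.normal(size=(self.num_actions, self.d))   # assumed phi(C_t, a)

    def reward(self, phi_a):
        p = 1.0 / (1.0 + np.exp(-phi_a @ self.theta))             # sigmoid(phi^T theta_t)
        return int(self.rng.random() < p), p

env = AR1LogisticBandit()
feats = env.step()
r, p = env.reward(feats[3])   # pull action 3 and observe a Bernoulli reward
```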
\begin{table} \begin{tabular}{c c c c} \hline \hline Algorithm & AR(1) Average Reward & MIND 1-week Average CTR & Kuai 2-month Average Rating \\ \hline Neural Ensemble & \(0.5683\pm 0.0025\) & \(0.1503\pm 0.0013\) & \(1.2614\pm 0.0017\) \\ Window Neural Ensemble & \(0.5688\pm 0.0025\) & \(0.1513\pm 0.0012\) & \(1.3162\pm 0.0013\) \\ Neural LinUCB & \(0.5684\pm 0.0020\) & \(0.1468\pm 0.0015\) & \(1.2798\pm 0.0020\) \\ Window Neural LinUCB & \(0.5730\pm 0.0031\) & \(0.1482\pm 0.0020\) & \(1.3172\pm 0.0023\) \\ Neural Linear & \(0.5701\pm 0.0027\) & \(0.1467\pm 0.0016\) & \(1.2690\pm 0.0020\) \\ Window Neural Linear & \(0.5741\pm 0.0029\) & \(0.1492\pm 0.0015\) & \(1.3171\pm 0.0026\) \\ NeuralPES & \(\mathbf{0.5850\pm 0.0023}\) & \(\mathbf{0.1552\pm 0.0013}\) & \(\mathbf{1.3421\pm 0.0016}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Empirical Experiment Results ### Microsoft News Dataset Experiments We leverage the MIND dataset (Wu et al., 2020) to carry out the first real-world dataset experiment. MIND is collected from real user interactions with Microsoft News website and its public training and validation set covers the interactions from November 9 to November 15, 2019. Each row of the MIND dataset is presented as in Table 2. In this dataset, since every recommendation's groundtruth feedback is provided at a single timestamp, no counterfactual evaluation is needed. In this experiment, we feed the rows in the order of interaction timestamp to the agent for action selection to resemble the real-world nonstationarity in user preferences. The nonstationarity presented in this dataset is commonly observed as day of week patterns in real-world recommender systems. To visualize the nonstationarity in user behavior within a week, see Figure 2(c) to see daily average click-through rate (CTR) in the dataset to see a week of day pattern in the dataset. We sample 10,000 users from the dataset and asks candidate agents to select news recommendations sequentially according to the time order of the interactions that happened in the dataset. Hyperparameters of the agents are presented in Appendix B.0.2. Features for each recommendation is derived by average pooling over the entity embeddings of each news recommendation provided by the dataset and features for each user as average pooling over features of their clicked articles. Both user and recommendation features are of size 100. The average CTR of news recommendations offered by candidate agents over 1 week is presented in Table 1, and Figure 2(b), where NeuralPES outperforms all baselines. Note that since we present interactions to users sequentially according to time order, the figure presents natural day of week seasonality from the dataset. ### KuaiRec Dataset Experiment While the MIND dataset offers a setup to empirically test agents' performance under day of week nonstationarity, the short duration of the dataset naturally limits the possibility of observing long-term agent behaviors under nonstationarity. In this experiment, we make slight modifications to the KuaiRec dataset Gao et al. (2022) to offer a 2-month-long real-world experiment. 
Every row of KuaiRec offers a user ID, the timestamp, a video ID of a recommended video, \begin{table} \begin{tabular}{c c c c c} \hline \hline Impression ID & User ID & Time & User Interest History & News with Labels \\ \hline 91 & U397059 & 11/15/2019 10:22:32 AM & N106403 N71977 N97080 & N129416-0 N26703-1 N120089-1 N53018-0 \\ \hline \hline \end{tabular} \end{table} Table 2: MIND Dataset Illustration Figure 3: Empirical Results and Ablations and a rating derived from the user's watch duration. The dataset also offers daily features of each user and each video candidate, of dimensions 1588 and 283 respectively. In our transformed dataset, we grouped every 12 hours of recommendation to a user into a contextual bandit format where each row contains a user ID, the 12-hour window, set of videos alongside with their corresponding ratings, sorted by the 12-hour window start time. The agent's goal is to select the best recommendation to each user in each window in the order of occurrence in the real-world. Hyperparameters of the agents are presented in Appendix B.0.2. The average rating of news recommendations offered by candidate agents over 2 months is presented in Table 1 and see Figure (d)d and we see NeuralPES outperforms all baselines. ### Ablation Studies #### 5.4.1 Regularization for Continual Learning To facilitate continual learning and avoid loss of plasticity, we leverage regularization trick introduced in Eq.4 to ensure the agent continues to learn while the environment changes. See Figure (e)e. The algorithm with regularization consistently outperforms its version without regularization. #### 5.4.2 Importance of Predictive Model We compare NeuralPES' performance against its version without the Predictive Model, Neural Sequence Ensemble, introduced in Section 4.2. See Figure (e)e. Without the Predictive Model, the agent crashes in its performance because in nonstationary environments, the environment changes are mostly unpredictable and the predictive model is responsible for determining whether a piece of information from the sequence model prediction lasts in the future. ## 6 Conclusion and Future Work There are a few lines of future work that can extend on top of this work. First of all, this work does not consider context and state evolution as a result of actions, as mentioned in Zhu and Van Roy (2023a); Xu et al. (2023); Chen et al. (2022). As these state transition kernels can also be nonstationary, it calls for future extension of this work to address nonstationarities in reinforcement learning problems. Furthermore, to enhance the quality of future reward parameter predictions, attention mechanisms (Vaswani et al., 2017) can be potentially leveraged to further improve the performance of the models. In this paper, we introduced a novel non-stationary contextual bandit learning algorithm, NeuralPES, which is scalable with deep neural networks and is designed to seek enduring information. We theoretically demonstrated that the algorithm effectively prioritizes exploration for enduring information. Additionally, through empirical analysis on two extensive real-world datasets spanning one week and two months respectively, we illustrated that the algorithm adeptly adapts to pronounced non-stationarity and surpasses the performance of leading stationary neural contextual bandit learning algorithms, as well as their non-stationary counterparts. We aspire that the findings and the algorithm delineated in this paper will foster the adoption of NeuralPES in real-world systems.
2303.09687
BeamSense: Rethinking Wireless Sensing with MU-MIMO Wi-Fi Beamforming Feedback
In this paper, we propose BeamSense, a completely novel approach to implement standard-compliant Wi-Fi sensing applications. Wi-Fi sensing enables game-changing applications in remote healthcare, home entertainment, and home surveillance, among others. However, existing work leverages the manual extraction of channel state information (CSI) from Wi-Fi chips to classify activities, which is not supported by the Wi-Fi standard and hence requires the usage of specialized equipment. On the contrary, BeamSense leverages the standard-compliant beamforming feedback information (BFI) to characterize the propagation environment. Conversely from CSI, the BFI (i) can be easily recorded without any firmware modification, and (ii) captures the multiple channels between the access point and the stations, thus providing much better sensitivity. BeamSense includes a novel cross-domain few-shot learning (FSL) algorithm to handle unseen environments and subjects with few additional data points. We evaluate BeamSense through an extensive data collection campaign with three subjects performing twenty different activities in three different environments. We show that our BFI-based approach achieves about 10% more accuracy when compared to CSI-based prior work, while our FSL strategy improves accuracy by up to 30% and 80% when compared with state-of-the-art cross-domain algorithms.
Khandaker Foysal Haque, Milin Zhang, Francesca Meneghello, Francesco Restuccia
2023-03-16T23:12:26Z
http://arxiv.org/abs/2303.09687v1
# BeamSense: Rethinking Wireless Sensing with MU-MIMO Wi-Fi Beamforming Feedback ###### Abstract In this paper, we propose BeamSense, a completely novel approach to implement standard-compliant Wi-Fi sensing applications. Wi-Fi sensing enables game-changing applications in remote healthcare, home entertainment, and home surveillance, among others. However, existing work leverages the manual extraction of channel state information (CSI) from Wi-Fi chips to classify activities, which is not supported by the Wi-Fi standard and hence requires the usage of specialized equipment. On the contrary, BeamSense leverages the standard-compliant beamforming feedback information (BFI) to characterize the propagation environment. Conversely from CSI, the BFI (i) can be easily recorded without any firmware modification, and (ii) captures the multiple channels between the access point and the stations, thus providing much better sensitivity. BeamSense includes a novel cross-domain few-shot learning (FSL) algorithm to handle unseen environments and subjects with few additional data points. We evaluate BeamSense through an extensive data collection campaign with three subjects performing twenty different activities in three different environments. We show that our BFI-based approach achieves about 10% more accuracy when compared to CSI-based prior work, while our FSL strategy improves accuracy by up to 30% and 80% when compared with state-of-the-art cross-domain algorithms. Wi-Fi sensing, IEEE 802.11ac, SU-MIMO, MU-MIMO, beamforming, beamforming feedback angles ## I Introduction Since 1990, Wi-Fi has become the technology of choice for Internet connectivity in indoor environments [1]. Beyond connectivity, Wi-Fi signals can be used as sounding waveforms to perform activity recognition [2], health monitoring [3], and human presence detection [4], among others [5]. The intuition behind Wi-Fi sensing is that humans act as obstacles to the propagation of radio signals in the environment. Specifically, when encountering the human body, the radio waves undergo reflections, diffractions and scattering that make the signals collected at the Wi-Fi receiver differ from the transmitted ones. Wi-Fi sensing aims at detecting the changes in the Wi-Fi signals and associating them to the way the subject stays/moves in the environment, thus realizing device-free monitoring solutions. To date, the vast majority of Wi-Fi sensing systems - discussed in Section II - leverage channel measurements obtained from pilot symbols as sensing primitive. Such measurements are usually referred to as channel state information (CSI) and describe the way the signals propagate in the environment. Despite leading to good performance, CSI-based techniques require extracting and recording the CSI estimated by the Wi-Fi devices involved in the sensing activities, and such operations are currently not supported by the IEEE 802.11 standard. This has led to the introduction of custom-tailored firmware modifications to extract the CSI [6, 7, 8, 9, 10], which makes the sensing process not scalable. Such CSI extraction tools only provide support for single-user multiple-input multiple-output (MIMO) sensing as the channel is sounded on the link between the transmitter and the device implementing the extraction tool. Therefore, Wi-Fi sensing approaches relying on CSI extraction tools cannot benefit from the spatial diversity that can be gained through multi-user MIMO (MU-MIMO) transmissions. 
Spatial diversity may be achieved by considering multiple CSI collectors, but this would increase the computational burden, as synchronization among the devices would be needed. Moreover, even if CSI extraction could be supported in the future without the need for custom-tailored firmware modifications, it would require additional processing to extract the data from the chip, thus increasing energy consumption. Therefore, we argue that more suitable approaches to Wi-Fi sensing should be put forward.

In this paper, we propose BeamSense, an entirely new approach to Wi-Fi sensing that leverages the MU-MIMO capabilities of Wi-Fi to drastically increase sensing performance while substantially reducing sensing overhead. As shown in Figure 1, BeamSense leverages the beamforming feedback information (BFI) - traditionally used to beamform transmissions - to estimate the propagation environment between the access point (AP) and the connected stations (STAs). In stark contrast with CSI-based sensing, BeamSense (i) does not need firmware modifications, since any off-the-shelf Wi-Fi device can capture BFI packets, which are sent unencrypted to keep the processing delay below a few milliseconds [11]; and (ii) does not require synchronization among receivers, since a single BFI report contains the information about all the MIMO channels established between the AP and the STAs. In fact, while devices empowered with CSI extraction tools allow obtaining information on a single MIMO channel, when capturing the BFI we obtain the channel information associated with all the STAs involved in a MU-MIMO transmission. Thus, spatially diverse channel information about multiple links is collected with a single capture. For this reason, BeamSense exhibits far better performance in challenging environments, as shown in Section IV.

Fig. 1: CSI-based vs BFI-based Wi-Fi sensing.

**This paper provides the following contributions:**

* We propose BeamSense, a new approach to Wi-Fi sensing where the standard-compliant BFI routinely sent in MU-MIMO Wi-Fi networks is used to characterize the propagation environment between the MU-MIMO users and the AP. To the best of our knowledge, this is the first work proposing the utilization of BFI to perform Wi-Fi sensing;
* We propose a deep learning (DL)-based Fast and Adaptive Micro Reptile Sensing (FAMReS) algorithm to perform activity classification based on BFI. We chose DL since it has shown remarkable performance in classifying activities in Wi-Fi sensing settings [12]. However, it is well known that DL models may perform poorly when tested in different settings [13]. For this reason, FAMReS leverages few-shot learning (FSL) to quickly generalize to different subjects and environments with few additional data points;
* We extensively evaluate BeamSense through a comprehensive data collection campaign, with three subjects performing twenty different activities in three different environments. For that, we built a reconfigurable IEEE 802.11ac MU-MIMO network with three STAs and one AP. The Wi-Fi network was also synchronized with a camera-based system that records the ground truth for our experiments and with a secondary IEEE 802.11ac network empowered with Nexmon CSI [8] to concurrently collect the CSI measurements used for comparative analysis.
We show that our BFI-based approach combined with a traditional convolutional neural network (CNN) without pre-processing achieves about 10% more accuracy when compared to state-of-the-art CSI-based techniques, which uses pre-processing. Moreover, FAMReS improves accuracy by up to 30% and 80% when compared with state-of-the-art cross-domain algorithms. **For reproducibility, we will release the entirety of our 800 GB dataset and our code.** The rest of the article is organized as follows. In Section II we review the existing literature in the area. The BeamSense Wi-Fi sensing system is illustrated in Section III whereas the performance evaluation of the system is presented in Section IV. Section V concludes the discussion. ## II Related Work Over the last ten years, a lot of efforts have been made to explore wireless sensing, which is summarized by Liu et al. in [14]. The first Wi-Fi sensing approaches were based on the received signal strength indicator (RSSI) [15, 16, 17, 18, 19, 20]. More recently, researchers have focused on the more fine-grained CSI information that describes how the wireless channel modifies signals at different frequencies rather than providing a cumulative metric on the signal attenuation as the RSSI does. Passive Wi-Fi radar (PWR)-based approaches [21, 22, 23, 24, 25] have also been proposed in the literature. However, such an approach requires specialized hardware (software defined radio (SDR)) to analyze the collected signal. In the rest of the section, we focus on CSI-based sensing, and summarize the main research on the topic. **Background on CSI-based Sensing.** The term CSI can refer both to the time-domain channel impulse response (CIR) or the frequency-domain channel frequency response (CFR). Specifically, the CIR encodes the information about the multipath propagation of the transmitted signal: each peak in the CIR represents a propagation path characterized by a specific time delay (linked with the length of the path) and an attenuation. Multipath propagation is a typical phenomenon of indoor environments, where obstacles (objects, people, animals) in the surroundings act as reflectors/diffractors/scatterers for the irradiated wireless signals. In turn, the receiver collected different copies of the transmitted signal each associated with a different propagation, or, equivalently, an obstacle in the environment. The CFR represents the Fourier transform of the CIR and describes how the environment modifies signals transmitted with different carrier frequencies. Specifically, indicating with \(\mathbf{x}(f,t)\) and \(\mathbf{y}(f,t)\) the frequency domain representation of the transmitted and received signals at time \(t\) and frequency \(f\) respectively, and with \(\mathbf{h}(f,t)\) the CFR, we have that \(\mathbf{y}(f,t)=\mathbf{h}(f,t)\times\mathbf{x}(f,t)\)[26]. Considering the \(M\times N\) MIMO orthogonal frequency-division multiplexing (OFDM) system, with \(K\) sub-channels, and \(M\) and \(N\) transmitting and receiving antennas respectively, the CFR is a \(K\times M\times N\)-dimensional matrix providing the amplitude and phase information over each OFDM sub-channel for any given pair of transmitting and receiving antenna. **Existing Research on CSI-based Sensing.** Over the last decade, CSI-based sensing has been proposed for a wide variety of applications. 
Among the most compelling, we mention person detection and identification [27, 28, 29], crowd counting [30, 18], respiration monitoring [31], baggage tracking [32], smart homes [33, 34], human pose tracking [35, 36, 37, 38], and patient monitoring [39, 40], with most of the previous research activities focusing on human activity recognition (HAR) and human gesture recognition (HGR) [41, 42, 43, 44, 13, 45]. _The above list is definitely not exhaustive._ For excellent survey papers on the topic, we refer the reader to [46, 5, 2, 47]. In the following, we just summarize the most recent approaches that are most related to the work conducted in this article.

Guo et al. presented WiAR [48], a CSI-based system achieving up to 90% accuracy in the recognition of 16 human activities. Similarly, a meta-learning-based approach called RF-Net was presented in [49], based on the usage of recurrent neural networks with long short-term memory (LSTM) cells. However, only six activities were considered in the evaluation. Regarding HGR, [43] and [44] presented Widar 3.0 and OneFi, respectively considering six and forty gestures. The authors in [43] proposed to use a body velocity profile (BVP) measure, which has been shown to improve the generalization capability of the classification algorithm. The authors of [44] used one-shot learning to classify unseen gestures with few labeled samples. The majority of previous work has been evaluated on 802.11n channel data while, to the best of our knowledge, only two works considered HAR in the context of 802.11ac [13, 12]. Meneghello et al. proposed to use the Doppler shift estimated through the CSI to obtain an algorithm that generalizes to different environments [13]. Bahadori et al. instead use few-shot learning to achieve environmental robustness [12].

**Limitations of CSI-based Sensing.** Since the CSI is computed at the physical layer (PHY), it is not readily available with off-the-shelf network interface cards (NICs). CSI can be extracted with SDR implementations, but these only support up to 40 MHz of bandwidth, being only IEEE 802.11 a/g/p/n compliant [50, 12]. Moreover, SDRs are costly specialized hardware that may be unavailable in real-life situations and require expert knowledge to be used. To overcome such limitations, in recent years, researchers have developed some CSI extraction tools that run on commercial Wi-Fi NICs. Two of them, namely Linux CSI [6] and Atheros CSI [7], target IEEE 802.11n compliant NICs (up to 40 MHz bandwidth). The third one, Nexmon CSI [8], allows extracting the CFR from some IEEE 802.11ac compliant devices, supporting bandwidths up to 80 MHz. The most recent one, AX CSI [10], is designed for IEEE 802.11ax devices and provides CFR measurements also on 160 MHz bandwidth channels. These tools, however, need non-trivial firmware modifications of the NICs. Moreover, they do not provide support for estimating the channel on MU-MIMO channels. Whether the CSI extraction tool is implemented on a receiving Wi-Fi device or on a separate monitor device, only the MIMO links between the transmitter and the CSI collector are monitored, i.e., only SU-MIMO mode is supported.
This is a limitation of CSI-based systems as MU-MIMO systems can provide way richer information than SU-MIMO ones as they capture the correlation of the propagated signal from different STAs relative to the sensed subject. As a last consideration, Wang et al. [51] recently pointed out the importance of the placement of the CSI extractor device. Specifically, they showed that accurate placement of the sensing devices can enhance the sensing coverage by mitigating severe interference. Non-calibrated placement of the sensing devices can severely hamper the sensing quality. **Advantages of BeamSense.** Our approach addresses these challenges by exploiting the MU-MIMO beamforming feedback to sense the environment. The collection of the MU-MIMO beamforming feedback packets can be done with any standard-compliant 802.11 ac/ax device, and it does not need any close proximity or direct access to the sensed subject. As our system does not need any specific hardware or infrastructure, it facilitates mass deployment. Moreover, since it utilizes the aggregated feedback from different users placed at different locations, BeamSense is less sensitive to the accurate placement of the STAs. ## III The BeamSense Wi-Fi Sensing System Figure 2 shows a high-level overview of BeamSense, which leverages the channel estimation mechanism standardized in IEEE 802.11 to sound the physical environment. The channel estimation is performed on the STAs (beamformees) and is reported to the AP (beamformer) that uses it to properly beamform MU-MIMO transmissions. The report is referred to as the BFI and is transmitted over the air in clear text. Since the AP continuously triggers the channel estimation procedure on the connected STAs, _the BFI contains very rich, reliable, and spatially diverse information_. Moreover, the BFI _can be collected with a single capture_ by the AP or any other Wi-Fi-compliant device, thus reducing the system complexity. **BeamSense Technical Challenges.** BeamSense is a completely novel way to perform Wi-Fi sensing. While previous work in the literature deal with the well-known CSI data, we instead consider the BFI as a sensing primitive. We stress that BFI represents a completely new type of data. While CSI consists of complex I/Q-values, BFI is expressed in terms of compressed rotational matrices. In this respect, the first challenge we need to address is the design and implementation of a novel tool to extract the BFI data embedded within Wi-Fi frames transmitted from the beamformees to the beamformer as part of the channel sounding procedure. On top of that, the second challenge concerns the implementation of a new data processing pipeline for the new data type that effectively performs activity classification based on BFI data and provides environment adaptation features. The third challenge to be addressed is the setup of an extensive experimental testbed to implement and assess the performance of the new Wi-Fi sensing approach in a real-world scenario with commercial Wi-Fi devices. In the following, we thoroughly detail the BeamSense sensing system. We use the superscripts \(T\) and \(\dagger\) to denote the transpose and the complex conjugate transpose (i.e., the Hermitian). We define with \(\angle\)**C** the matrix containing the phases of the complex-valued matrix **C**. Moreover, \(\text{diag}(c_{1},\ldots,c_{j})\) indicates the diagonal matrix with elements \((c_{1},\ldots,c_{j})\) on the main diagonal. 
The \((c_{1},c_{2})\) entry of matrix **C** is defined by \(\left[\textbf{C}\right]_{c_{1},c_{2}}\), while \(\mathbb{I}_{c}\) refers to an identity matrix of size \(c\times c\) and \(\mathbb{I}_{c\times d}\) is a \(c\times d\) generalized identity matrix.

### _BeamSense: A Walkthrough_

The BeamSense sensing system entails eight steps, as depicted in Figure 2. The process stems from the way beamforming is implemented in IEEE 802.11 networks. Specifically, the beamformer (AP) uses a matrix \(\mathbf{W}\) of pre-coding weights - called steering matrix - to linearly combine the signals to be simultaneously transmitted to the different beamformees (STAs). The steering matrix is derived from the CFR matrices \(\mathbf{H}\) estimated by each of the beamformees, which describe how the environment modifies the irradiated signals in their path to the receivers. The estimation process is called _channel sounding_ and is triggered by the AP, which periodically broadcasts a null data packet (NDP) (**step 1** in Figure 2) that contains sequences of bits - named long training fields (LTFs) - the decoded version of which is known by the beamformees. Since its purpose is to sound the channel, the NDP is _not beamformed_ by the AP. _This is particularly advantageous for sensing purposes_, since the resulting CFR estimation will not be affected by inter-stream or inter-user interference.

Fig. 2: The BeamSense Wi-Fi sensing system.

The LTFs are transmitted over the different beamformer antennas in subsequent time slots, thus allowing each beamformee to estimate the CFR of the links between its receiving antennas and the beamformer transmitting antennas. The LTFs are modulated - as the data fields - through OFDM by dividing the signal bandwidth into \(K\) partially overlapping and orthogonal sub-channels spaced apart by \(1/T\). The input bits are grouped into OFDM symbols, \(\mathbf{a}=[a_{-K/2},\ldots,a_{K/2-1}]\), where \(a_{k}\) is named an OFDM sample. These \(K\) OFDM samples are digitally modulated and transmitted through the \(K\) OFDM sub-channels in a parallel fashion, thus occupying the channel for \(T\) seconds. The transmitted LTF signal is \[s_{\text{tx}}(t)=e^{j2\pi f_{c}t}\sum_{k=-K/2}^{K/2-1}a_{k}e^{j2\pi kt/T}, \tag{1}\] where \(f_{c}\) is the carrier frequency. The NDP is received and decoded by each STA **(step 2)** to estimate the CFR \(\mathbf{H}\). The different LTFs are used to estimate the channel over each pair of transmitting (TX) and receiving (RX) antennas, for every OFDM sub-channel. This generates a \(K\times M\times N\) matrix \(\mathbf{H}\) for each beamformee, where \(M\) and \(N\) are respectively the numbers of TX and RX antennas. We refer the reader to Section II for additional details about the CFR. Next, the CFR is compressed - to reduce the channel overhead - and fed back to the beamformer. Using \(\mathbf{H}_{k}\) to identify the \(M\times N\) sub-matrix of \(\mathbf{H}\) containing the CFR samples related to sub-channel \(k\), the _compressed beamforming feedback_ is obtained as follows ([52], Chapter 13). First, \(\mathbf{H}_{k}\) is decomposed through singular value decomposition (SVD) as \[\mathbf{H}_{k}^{T}=\mathbf{U}_{k}\mathbf{S}_{k}\mathbf{Z}_{k}^{\dagger}, \tag{2}\] where \(\mathbf{U}_{k}\) and \(\mathbf{Z}_{k}\) are, respectively, \(N\times N\) and \(M\times M\) unitary matrices, while the singular values are collected in the \(N\times M\) diagonal matrix \(\mathbf{S}_{k}\).
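As a quick numerical illustration of Eq. (2), the following sketch (ours, not the authors' code) computes the per-sub-channel SVD of a toy channel matrix and extracts the beamforming matrix \(\mathbf{V}_{k}\) as the first \(N_{\mathrm{SS}}\) columns of \(\mathbf{Z}_{k}\); dimensions mirror the \(M=3\), \(N=1\) setup used later in the paper.

```python
# Minimal numerical sketch (not the authors' code) of Eq. (2):
# H_k^T = U_k S_k Z_k^dagger, with V_k = first N_SS columns of Z_k.
import numpy as np

rng = np.random.default_rng(0)
M, N, N_SS = 3, 1, 1                      # TX antennas, RX antennas, spatial streams

# Toy complex channel matrix H_k for one OFDM sub-channel (M x N).
H_k = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

U_k, s, Z_dag = np.linalg.svd(H_k.T)      # SVD of the N x M matrix H_k^T
Z_k = Z_dag.conj().T                      # M x M unitary
V_k = Z_k[:, :N_SS]                       # beamforming matrix to be compressed and fed back

# Sanity checks: Z_k is unitary and the decomposition reconstructs H_k^T.
S_k = np.zeros((N, M), dtype=complex)
np.fill_diagonal(S_k, s)
assert np.allclose(Z_k.conj().T @ Z_k, np.eye(M))
assert np.allclose(U_k @ S_k @ Z_dag, H_k.T)
print("V_k =\n", np.round(V_k, 3))
```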
Using this decomposition, the complex-valued beamforming matrix \(\mathbf{V}_{k}\) is defined by collecting the first \(N_{\mathrm{SS}}\leq N\) columns of \(\mathbf{Z}_{k}\). Such a matrix is used by the beamformer to compute the pre-coding weights for the \(N_{\mathrm{SS}}\) spatial streams directed to the beamformee. Hence, \(\mathbf{V}_{k}\) is converted into polar coordinates as detailed in Algorithm 1 to avoid transmitting the complete matrix. The output is the matrices \(\mathbf{D}_{k,i}\) and \(\mathbf{G}_{k,\ell,i}\), defined as \[\mathbf{D}_{k,i}=\begin{bmatrix}\mathbb{I}_{i-1}&0&\ldots&0\\ 0&e^{j\phi_{k,i}}&0&\ldots&\vdots\\ \vdots&0&\ddots&0\\ 0&\vdots&0&e^{j\phi_{k,M-1,i}}&0\\ 0&\ldots&&0&1\end{bmatrix}, \tag{3}\] \[\mathbf{G}_{k,\ell,i}=\begin{bmatrix}\mathbb{I}_{i-1}&0&\ldots&0\\ 0&\cos\psi_{k,\ell,i}&0&\sin\psi_{k,\ell,i}&\vdots\\ \vdots&0&\mathbb{I}_{\ell-i-1}&0&\vdots\\ \vdots&-\sin\psi_{k,\ell,i}&0&\cos\psi_{k,\ell,i}&0\\ 0&\ldots&&0&\mathbb{I}_{M-\ell}\end{bmatrix}, \tag{4}\] which allow rewriting \(\mathbf{V}_{k}\) as \(\mathbf{V}_{k}=\mathbf{\tilde{V}}_{k}\mathbf{\tilde{D}}_{k}\), with \[\mathbf{\tilde{V}}_{k}=\prod_{i=1}^{\min(N_{\mathrm{SS}},M-1)}\left(\mathbf{D}_{k,i}\prod_{l=i+1}^{M}\mathbf{G}_{k,l,i}^{T}\right)\mathbb{I}_{M\times N_{\mathrm{SS}}}, \tag{5}\] where the products represent matrix multiplications. In the \(\mathbf{\tilde{V}}_{k}\) matrix, the last row - i.e., the feedback for the \(M\)-th transmitting antenna - consists of non-negative real numbers by construction. Using this transformation, the beamformee is only required to transmit the \(\phi\) and \(\psi\) angles to the beamformer, as they allow reconstructing \(\mathbf{\tilde{V}}_{k}\) precisely. Moreover, it has been proved (see [52], Chapter 13) that the beamforming performance is equivalent at the beamformee when using \(\mathbf{V}_{k}\) or \(\mathbf{\tilde{V}}_{k}\) to construct the steering matrix \(\mathbf{W}\). In turn, \(\mathbf{\tilde{D}}_{k}\) is not fed back to the beamformer. The angles are quantized using \(b_{\phi}\in\{7,9\}\) bits for \(\phi\) and \(b_{\psi}=b_{\phi}-2\) bits for \(\psi\), to further reduce the channel occupancy. The quantized values, \(q_{\phi}\in\{0,\ldots,2^{b_{\phi}}-1\}\) and \(q_{\psi}\in\{0,\ldots,2^{b_{\psi}}-1\}\), are packed into the compressed beamforming frame (**step 3**) and such _beamforming feedback information_ (BFI) is transmitted to the AP **(step 4)** in _clear text_. Each BFI report contains \(A\) angles for each of the \(K\) OFDM sub-channels, for a total of \(K\cdot A\) angles per report. In Figure 3, we show an example of how beamforming is conducted in a \(3\times 2\) MIMO system. BeamSense captures the BFI reports (**step 5**), and uses the channel estimation data to perform Wi-Fi sensing. We remark that MU-MIMO requires fine-grained channel estimation, so the BFI reports are refreshed frequently.

Windows having fewer than \(S\) packets are padded with BFI packets containing zero-valued angles, while packets exceeding this threshold are discarded. Hence, the \(K\times A\) BFI angles contained in each packet are extracted, and the final tensor is obtained by aggregating the \(S\times K\times A\) angles for all the \(U\) MU-MIMO users for which the BFI data have been captured in the observation window.
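A minimal sketch (ours, not the authors' code) of the padding and aggregation step just described: variable-length per-user packet lists for one observation window are zero-padded or truncated to \(S\) packets and stacked into the fixed \(S\times K\times A\times U\) tensor. The function and variable names are placeholders.

```python
# Minimal sketch (not the authors' code) of assembling the S x K x A x U input tensor
# for one observation window from per-user lists of BFI packets (each a K x A matrix).
import numpy as np

def build_bfi_tensor(window_packets, S, K, A):
    """window_packets: list (length U) of lists of (K, A) angle arrays for one window."""
    U = len(window_packets)
    tensor = np.zeros((S, K, A, U), dtype=np.float32)     # zero-valued angles = padding
    for u, packets in enumerate(window_packets):
        for s, angles in enumerate(packets[:S]):           # packets beyond S are discarded
            tensor[s, :, :, u] = angles
    return tensor

# Toy example mirroring the paper's numbers: S=10 packets, K=234 sub-channels,
# A=4 angles, U=3 users; the second user delivered only 7 packets in this window.
rng = np.random.default_rng(1)
window = [
    [rng.uniform(0, 2 * np.pi, size=(234, 4)) for _ in range(12)],  # truncated to 10
    [rng.uniform(0, 2 * np.pi, size=(234, 4)) for _ in range(7)],   # zero-padded to 10
    [rng.uniform(0, 2 * np.pi, size=(234, 4)) for _ in range(10)],
]
x = build_bfi_tensor(window, S=10, K=234, A=4)
print(x.shape, x.size)   # (10, 234, 4, 3) 28080
```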
Note that even though it would be possible to define learning algorithms that accept inputs of different sizes, this would lead to an increase in the complexity of the approach, both from the training and the inference perspective. Therefore, to keep the model simple for implementation on memory- and battery-constrained devices, we decided to follow a fixed-input approach. To obtain the training data, the \(S\times K\times A\times U\) tensors derived from the BFI packets captured during the data collection phase are stored in a dataset, together with their associated activity and/or phenomenon, and a timestamp (**step 6** in Figure 2). This phase can be performed offline by sensing application vendors without requiring the users' cooperation. The trained model (**step 7**) is then used for online sensing (**step 8**). As mentioned in [53], the MU-MIMO sounding procedure should be performed at least every 10 ms, which corresponds to 100 BFI measurements/second. Since the frequency of channel sounding is not specified in the standard and since the sounding measurement lasts approximately 500 microseconds, _the BFI rate can theoretically reach 2000 BFI per second_.

**Example.** Let us assume the activity recording is 300 seconds long and \(W\) is 0.1 seconds. Then, 3000 windows are present in the recording. Let us assume that the average number of packets in the considered windows is \(S\) = 10. The windows presenting fewer than 10 packets are zero-padded. Considering a bandwidth of 80 MHz, according to the IEEE 802.11 standard, four angles describe each of the \(K\) = 234 sub-channels where sounding is performed, i.e., the total number of OFDM sub-channels (256) minus the pilots and control sub-channels that are excluded from the sounding procedure. Assuming that \(U\) = 3 users are connected to the AP, the resulting input tensor has dimensions \(10\times 234\times 4\times 3\) and a total size of \(10\cdot 234\cdot 4\cdot 3\) = 28080.

### _The FAMReS Classification Algorithm_

Existing research in CSI-based sensing has shown that designing classifiers that are robust to changing the subject performing the activity (i.e., different people) and the environment where the activity is performed (i.e., different rooms) is very challenging [43, 44, 13, 12]. On the other hand, it is hardly feasible to collect a large amount of data for all possible scenarios. To address this key issue, we propose a deep learning (DL)-based algorithm for BFI-based activity classification called _Fast and Adaptive Micro Reptile Sensing_ (FAMReS), which is a few-shot learning (FSL) algorithm based on Reptile [54] that needs only a limited set of new input data to generalize to unseen environments. FSL is a DL technique that leverages only small amounts of additional data to adapt to classes that are unseen at training time. Specifically, in K-way-N-shot FSL, the model is trained on a set of mini-batches of data that only have K different classes (ways) and N samples (shots) of each class. The key idea of FSL is illustrated in Figure 5.

Fig. 4: BFI data processing. The processing is applied to each observation window of \(W\) seconds.

Fig. 3: Example of \(3\times 2\) MIMO system. \(s_{1},s_{2}\) and \(r_{1},r_{2}\) are respectively the transmitted and received signals. The symbol \(\mathbf{W}\) indicates the steering matrix, while \(\mathbf{H}\) is the CFR.
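For illustration, a small sketch (ours, not the authors' code) of how a K-way-N-shot episode could be sampled from a pool of labeled BFI tensors; the data layout and names are assumptions.

```python
# Minimal sketch (not the authors' code) of building a K-way-N-shot FSL episode:
# sample K activity classes and N labeled tensors per class from a pool of (x, label) pairs.
import random
from collections import defaultdict

def sample_episode(dataset, k_way, n_shot, rng=random):
    """dataset: list of (x, label) pairs; returns a shuffled list of k_way*n_shot pairs."""
    by_label = defaultdict(list)
    for x, y in dataset:
        by_label[y].append(x)
    classes = rng.sample(sorted(by_label), k_way)
    episode = []
    for y in classes:
        for x in rng.sample(by_label[y], n_shot):
            episode.append((x, y))
    rng.shuffle(episode)
    return episode

# Toy usage: 20 activities, 30 samples each; a 5-way-2-shot episode has 10 samples.
pool = [(f"tensor_{y}_{i}", y) for y in range(20) for i in range(30)]
episode = sample_episode(pool, k_way=5, n_shot=2)
print(len(episode), sorted({y for _, y in episode}))
```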
#### III-B1 FAMReS Algorithm

The original purpose of Reptile is to extract meta-features from a large dataset so that the model can be quickly fine-tuned when a new task is sampled from the given dataset. However, _Reptile requires the inference and meta-learning data to be sampled from the same dataset_. Such a dataset should contain as many classes as possible so that the meta-learner can extract the general characteristics and fine-tune a task with fewer classes. Since this is unfeasible in BFI-based sensing, we find some common ground between meta-learning and general DL. The aim of learning is to approach the ground truth across different sampled data, while the aim of meta-learning is to find shared features across various tasks. Thus, if we consider each batch of training data as a new task in meta-learning, _the learning problem can be converted into a meta-learning problem_. Formally, we aim to find a set of parameters \(\theta^{*}\) that minimizes the loss function \(L\) on training data \(x_{i}\) and \(y_{i}\): \[\theta^{*}=\min_{\theta}\quad\mathbb{E}_{i}\left\{L\left[f\left(x_{i},y_{i}|\theta\right)\right]\right\}. \tag{9}\] By plugging the derivative \(\mathbb{E}_{i}\left\{\nabla_{\theta}\left(L\left[f\left(x_{i},y_{i}|\theta\right)\right]\right)\right\}\) into the SGD optimizer, the optimization problem can be solved as \[\tilde{\theta}=\theta-\alpha\frac{1}{m}\sum_{i=1}^{m}\nabla_{\theta}\left(L\left[f\left(x_{i},y_{i}|\theta\right)\right]\right). \tag{10}\] By comparing Equation 7 with Equation 10, we can easily find that if we set \(n=1\) in Equation 7, the only difference between these two equations is a constant scalar. Based on this observation, we note that Reptile learns common ground from different mini-batches of data. The meta-learning rate \(\beta\), which is usually a scalar less than 1, adjusts the step size of the learning, making it less likely to overfit the mini-batch data. This meta-learning process can be regarded as a warm-up phase before learning, which brings the parameters \(\theta\) closer to the ground truth in the hyperspace than random initial weights. Inspired by this idea, FAMReS is divided into two stages: (i) a meta-learning stage; and (ii) a micro-learning stage. In stage (i), the model utilizes a small portion of data to learn the shared features. In stage (ii), the same micro dataset is used for training. The complete FAMReS workflow is reported in Algorithm 2.

**We stress the difference between the original Reptile and FAMReS**: we only use a small portion of data in meta-learning and micro-learning and use other, unseen data for testing. On the contrary, Reptile uses the same dataset for both learning and inference. Although we have only done experiments offline in this work, FAMReS is a strong candidate for online learning. The algorithm can run the meta-learning phase while collecting new data. Once there is enough data, it can move on to the next stage. Therefore, we define a time variable \(\delta\) in the experiments to simulate a real-time implementation. We use the data collected within the \(\delta\) time window for learning and the rest for inference. FAMReS is an empirical risk minimizer that can be unstable when using small values of \(\delta\), depending on the distribution of the training data. Meta-learning on the micro dataset can only bring the initial parameters closer to the ground truth point in the hyperspace, but the final parameters still depend on the training set.
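Below is a minimal PyTorch sketch (ours, not the authors' implementation) of the two-stage procedure described above: a Reptile-style meta-learning warm-up that treats each mini-batch as a task and moves \(\theta\) toward the adapted parameters \(\tilde{\theta}\) with meta-learning rate \(\beta\), followed by ordinary micro-training on the same micro dataset. The model, data loader and hyperparameters are placeholders.

```python
# Sketch (not the authors' code) of FAMReS stage (i) Reptile-style warm-up and
# stage (ii) micro-training. One mini-batch is treated as one "task" (n = 1 inner step).
import copy
import torch
import torch.nn as nn

def reptile_warmup(model, micro_loader, meta_steps=50, inner_lr=1e-3, beta=0.5):
    loss_fn = nn.CrossEntropyLoss()
    loader_iter = iter(micro_loader)
    for _ in range(meta_steps):
        try:
            x, y = next(loader_iter)
        except StopIteration:
            loader_iter = iter(micro_loader)
            x, y = next(loader_iter)
        # Inner update on a copy of the model (theta_tilde after one SGD step).
        fast = copy.deepcopy(model)
        opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        opt.zero_grad()
        loss_fn(fast(x), y).backward()
        opt.step()
        # Outer (meta) update: theta <- theta + beta * (theta_tilde - theta).
        with torch.no_grad():
            for p, p_fast in zip(model.parameters(), fast.parameters()):
                p.add_(beta * (p_fast - p))
    return model

def micro_training(model, micro_loader, epochs=5, lr=1e-3):
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in micro_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```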
Thanks to the high stability of the BFI data, we can always get a reasonable accuracy in the experiments unless \(\delta\) is extremely small.

Fig. 5: Example of Few-Shot Learning.

#### III-B2 Learning Architecture

In the last decade, convolutional neural networks (CNNs) have achieved tremendous success in computer vision [58, 59, 60]. The convolution layer, the basis of CNNs, can efficiently extract features by performing convolution operations on the elements of the input data. Given that in this article our aim is to investigate the effectiveness of BFI-based sensing as compared to CSI-based sensing, we propose to use a VGG-based [59] CNN architecture as the human activity classifier. The network is depicted in Figure 6 and entails stacking three convolutional blocks (conv-blocks) and a max-pooling (MaxPool) layer. Softmax is applied to the flattened output to obtain the probability distribution over the activity labels. Each conv-block is a stack of two two-dimensional (2D) convolution layers. Following the design of VGG [59], each convolution layer has a kernel size of \(3\times 3\) and a stride of \(1\). To introduce non-linearity in the model, we apply a rectified linear unit (ReLU) activation function at the end of each conv-block. Batch normalization is also used in the conv-blocks to avoid gradient explosion or vanishing. Our VGG-based CNN consists of three conv-blocks with 128, 64 and 32 filters, respectively. We choose a descending order of filters to reduce the model size, since features in lower layers are usually sparser and thus require extracting more activation maps to be properly captured.

## IV Performance Evaluation

### _Experimental Setup and Data Collection_

We collected experimental data in three environments: a kitchen, a living room, and a classroom, as depicted in Figure 7. We considered three human subjects and twenty different activities: _jogging, clapping, push forward, boxing, writing, brushing teeth, rotating, standing, eating, reading a book, waving, walking, browsing phone, drinking, hands-up-down, phone call, side bend, check the wrist (watch), washing hands, and browsing laptop_. The activities are performed independently by each subject within a designated rectangular region in each of the three environments. Both BFI and CSI data are collected for the same duration of 300 seconds for each of the twenty activities. **To create the ground truth, we captured synchronous video streams of the subjects performing the activity**. The video streams are synchronized with the data to show what the subject is doing during the transmission of the NDP frame triggering the BFI computation. As an example, three frames from the captured video streams are shown in Figure 8.

**MU-MIMO Setup and Equipment.** We set up an 802.11ac MU-MIMO network operating on channel 153 with center frequency \(f_{\text{c}}\)=5.77 GHz and 80 MHz bandwidth. This allows sounding \(K\)=234 sub-channels, i.e., the 256 available sub-channels on 80 MHz channels minus 14 control sub-channels and 8 pilots. We use one AP (beamformer) and three STAs (beamformees), as depicted in Figure 9 in orange. The AP and the STAs are implemented through Netgear Nighthawk X4S AC2600 routers, with \(M\)=3 and \(N\)=1 antennas enabled respectively for the AP and each of the STAs.
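Returning briefly to the learning architecture of Section III-B2, the following PyTorch sketch (ours, not the authors' code) shows one plausible reading of the VGG-style classifier: three conv-blocks of two \(3\times 3\) convolutions with batch normalization and a trailing ReLU, with 128, 64 and 32 filters, a single MaxPool, and a softmax head. How the \(S\times K\times A\times U\) tensor is mapped to channels and spatial dimensions, and the size of the head, are our assumptions.

```python
# Sketch (not the authors' code) of the VGG-style BFI activity classifier described above.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),              # non-linearity at the end of each block
    )

class BFIClassifier(nn.Module):
    def __init__(self, in_channels, n_classes=20):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(in_channels, 128),
            conv_block(128, 64),
            conv_block(64, 32),
            nn.MaxPool2d(2),
        )
        self.head = nn.LazyLinear(n_classes)   # avoids hard-coding the flattened size

    def forward(self, x):
        x = self.features(x)
        logits = self.head(torch.flatten(x, 1))
        return torch.softmax(logits, dim=-1)

# Toy forward pass: treat the U*A angle series as channels over an (S, K) "image".
x = torch.randn(2, 4 * 3, 10, 234)              # batch of 2, A*U=12 channels, S=10, K=234
model = BFIClassifier(in_channels=12)
print(model(x).shape)                            # torch.Size([2, 20])
```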
The three STAs are served with \(N_{\text{ss}}=1\) spatial stream each and placed at three different heights, significantly spaced from each other, to form a \(3\times 3\) MU-MIMO system. According to the IEEE 802.11ac standard, four beamforming feedback angles (two \(\phi\) and two \(\psi\)) describe each sounded sub-channel.

Fig. 8: Sample frames from the video capture.

Fig. 6: Learning-based activity classifier.

Fig. 7: Sites of experimental data collection.

For the comparative CSI analysis, IEEE 802.11ac-compliant Asus RT-AC86U routers (referred to as _CSI monitors_) equipped with the Nexmon CSI extraction tool [8] have been deployed, as depicted in Figure 9 in green. To have the same setup as in the MU-MIMO network, the CSI AP is enabled with \(M\) = 3 antennas, whereas the CSI monitors are set up to sense the channel through \(N\) = 1 antenna over \(N_{ss}\) = 1 spatial stream each. UDP packets are sent from the CSI AP to the CSI client to trigger the channel estimation on the three CSI monitors. Note that, as shown in Figure 9, the CSI AP and one of the CSI monitors (M1) are respectively placed at the same location as the MU-MIMO AP and one of the stations (ST1) to allow for a baseline performance comparison. To show the challenges of using CSI-based sensing, we place both the BFI capturing device and the CSI monitors M2 and M3 beyond the wall of the activity zone. _The CSI monitor captures the channel between itself and the CSI AP_, and, in turn, the performance decreases when CSI collectors are placed far from the monitored environment, as detailed in Section IV-B1.

### _Performance Analysis_

In the following, all the results are obtained with a time window size of 0.1 s, ten packets per sample, and the data of the three subjects combined, unless specified otherwise.

#### IV-B1 Comparison between BFI and CSI-based Sensing

Figure 11 shows the classification accuracy of BeamSense as compared to the state-of-the-art CSI-based SignFi algorithm [61] in the three environments. For a baseline comparison, we only consider M1 and ST1 as the CSI collection device and the BFI STA, respectively, which are co-located.

Fig. 10: BFI angles for each sub-channel for four activities. Each plot shows the values of 10 different packets (superimposed lines with different colors). The x-axis reports the indices of the sensed sub-channels.

Fig. 9: Experimental setups for data collection.

## References

* [1] W. Li, M. J. Bocus, C. Tang, S. Vishwakarma, R. J. Piechocki, K. Woodbridge, and K. Chetty (2020) A Taxonomy of WiFi Sensing: CSI vs passive Wi-Fi Radar. In 2020 IEEE Globecom Workshops (GC Wkshps), pp. 1-6.
* [2] W. Li, R. J. Piechocki, K. Woodbridge, C. Tang, and K. Chetty (2020) Passive WiFi Radar for Human Sensing Using a Stand-alone Access Point.
IEEE Transactions on Geoscience and Remote Sensing, 59(3), pp. 1986-1998.
* [3] C. Li, Y. Chen, and M. Lu (2020) Device-free indoor human activity recognition using Wi-Fi RSSI: machine learning approaches. In 2020 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan), pp. 1-2.
* [4] C. Li, Y. Chen, and M. Lu (2020) A survey of wireless
2305.15120
Expected Values of $L$-functions Away from the Central Point
We compute the expected value of Dirichlet $L$-functions defined over $\mathbb{F}_q[T]$ attached to cubic characters evaluated at an arbitrary $s \in (0,1)$. We find a transition term at the point $s=\frac{1}{3}$, reminiscent of the transition at the point $s=\frac{1}{2}$ of the bound for the size of an $L$-function implied by the Lindel\"of hypothesis. We show that at $s=\frac{1}{3}$, the expected value matches corresponding statistics of the group of unitary matrices multiplied by a weight function.
Chantal David, Patrick Meisner
2023-05-24T13:11:26Z
http://arxiv.org/abs/2305.15120v1
# Expected values of \(L\)-functions away from the central point ###### Abstract. We compute the expected value of Dirichlet \(L\)-functions defined over \(\mathbb{F}_{q}[T]\) attached to cubic characters evaluated at an arbitrary \(s\in(0,1)\). We find a transition term at the point \(s=\frac{1}{3}\), reminiscent of the transition at the point \(s=\frac{1}{2}\) of the bound for the size of an \(L\)-function implied by the Lindelof hypothesis. We show that at \(s=\frac{1}{3}\), the expected value matches corresponding statistics of the group of unitary matrices multiplied by a weight function. Key words and phrases:Moments of Dirichlet L-functions. Cubic characters. Function fields. Random Matrix Theory. 2020 Mathematics Subject Classification: 11M06, 11M38, 11R16, 11R58 ## 1. Introduction ### Setup and Main Result Let \(q=p^{a}\) be a prime power and consider the ring of polynomials \(\mathbb{F}_{q}[T]\) consisting of polynomials with coefficients in the finite field \(\mathbb{F}_{q}\). Denote \(\chi\) as a Dirichlet character of \(\mathbb{F}_{q}[T]\) and \(\mathcal{M}\) the set of monic polynomials in \(\mathbb{F}_{q}[T]\). Then define the \(L\)-function attached to \(\chi\) as \[L(s,\chi)=\sum_{F\in\mathcal{M}}\frac{\chi(F)}{|F|^{s}}. \tag{1.1}\] The Riemann Hypothesis implies that there exists a conjugacy class of unitary matrices \(\Theta_{\chi}\), called the Frobenius class, such that \[L(s,\chi)=(1-q^{-s})^{1-\delta_{\chi}}\det(1-q^{\frac{1}{2}-s}\Theta_{\chi}) \tag{1.2}\] where \(\delta_{\chi}=0\) or \(1\) depending on the parity of \(\chi\). From here we can impose some trivial bound on the size of \(L(s,\chi)\), and we have as \(q\) tends to infinity \[|L(s,\chi)|\ll\begin{cases}q^{(\frac{1}{2}-s)g}&s<\frac{1}{2}\\ 2^{g}&s=\frac{1}{2}\\ 1&s>\frac{1}{2}\end{cases} \tag{1.3}\] where \(g\) is the dimension of \(\Theta_{\chi}\). That is, it will be bounded for \(s\) above \(\frac{1}{2}\), while the bound grows exponentially as \(s\) decreases below \(\frac{1}{2}\). The bound at \(s=\frac{1}{2}\) depends only on the genus and is believed to grow polynomially in \(g\) on average. Conrey, Farmer, Keating, Rubinstein and Snaith [15] developed a recipe for conjecturing moments of \(L\)-functions over \(\mathbb{Q}\) at \(s=\frac{1}{2}\). Andrade and Keating [1] then adapt the recipe to \(L\)-functions over \(\mathbb{F}_{q}(T)\) and conjecture that \[\frac{1}{|\mathfrak{M}_{2}(2g)|}\sum_{\chi\in\mathfrak{M}_{2}(2g)}L(\tfrac{1} {2},\chi)^{k}=P_{k}(g)+o(1) \tag{1.4}\] where \(\mathfrak{M}_{2}(2g)\) is the set of primitive quadratic characters of genus \(2g\) and \(P_{k}\) is an explicit polynomial of degree \(\frac{k(k+1)}{2}\). They prove this for \(k=1\), while Florea [17, 18, 19] confirms it in the cases \(k=2,3,4\), improving the quality of error terms from the number field case (for \(k=4\), only the leading three terms of the polynomial \(P_{k}\) of degree \(10\) can be obtained). 
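As a small numerical illustration (ours, not from the paper) of the transition in (1.3), one can pick a random unitary Frobenius class \(\Theta_{\chi}\) via its eigenphases and evaluate \(|\det(1-q^{1/2-s}\Theta_{\chi})|\) from (1.2) for several values of \(s\); the even/odd factor \((1-q^{-s})^{1-\delta_{\chi}}\) is ignored here.

```python
# Sketch (ours): size of |L(s, chi)| from (1.2) for a random Frobenius class Theta_chi.
# For s < 1/2 the value grows roughly like q^{(1/2-s)g}; for s > 1/2 it stays bounded.
import numpy as np

rng = np.random.default_rng(0)
q, g = 9, 40
eigs = np.exp(1j * rng.uniform(0, 2 * np.pi, size=g))      # eigenvalues of Theta_chi
for s in (0.25, 0.40, 0.50, 0.60, 0.75):
    L = np.prod(1 - q ** (0.5 - s) * eigs)                  # det(1 - q^{1/2-s} Theta_chi)
    print(f"s = {s:.2f}   |L| = {abs(L):.3e}   q^((1/2-s)g) = {q ** ((0.5 - s) * g):.3e}")
```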
In addition to the papers mentioned above, many papers have been published on moments of quadratic characters at the central point, both in number fields and function fields.
To our knowledge, this is the first result in the literature computing the first moment at \(s=1/3\) for any family of cubic Dirichlet characters, over function fields or number fields. Based on the random matrix model of Section 1.2, we could speculate that a similar asymptotic would hold for the first moment at \(s=1/\ell\) for families of Dirichlet characters of order \(\ell\), at least for \(\ell\) prime, but there is no hope to prove such a result with the current knowledge, as only very partial results are known for the residues of the Dirichlet series of \(\ell\)-th order Gauss sums, which are shrouded in mystery.

_Remark 1.2_. When \(q\) is fixed and \(g\to\infty\), we should write the second result as \[\frac{1}{|\mathfrak{H}(3g)|}\sum_{\chi\in\mathfrak{H}(3g)}L(\tfrac{1}{3},\chi)=C_{q}g\left(1+o(1)\right),\] since the constants are the same size as the error. But we will need the more precise expression for the \(q\)-limit result of the next theorem. Moreover, we believe there should be an explicit constant appearing in the shape of a prime sum. We write the result as above to illustrate this fact but make no claims that the prime sum we write is correct. Determining the correct prime sum would involve improving Lemma 4.8.

_Remark 1.3_. The values \(L(s,\chi)\) are not real, but the sum is, since \(L(s,\chi)+L(s,\overline{\chi})\) is real (when \(s\) is).

When \(s=\tfrac{1}{2}\), we recover a result similar to the main results of [10], but for a different family, as they considered the full families. Computations for thin families of cubic characters such as (1.6) are typically easier but, perhaps paradoxically, computing the dual term for the thin family presented new challenges, as one needs to consider _weighted_ averages of Gauss sums. The case \(s=\tfrac{1}{3}\) is especially challenging, as one needs to compute exactly the main term and the dual term with error terms good enough to see the cancellation. In [10], the authors attempted to exhibit, for the full families, the corresponding cancellation of the secondary term at \(s=\tfrac{1}{2}\) that is achieved in this paper (compare Propositions 3.1 and 4.1), but the cancellation occurred inside the error term.

Comparing Theorem 1.1 to (1.3), we see a similar phenomenon occurring, except with a transition at \(s=\tfrac{1}{3}\). When \(s>\tfrac{1}{3}\), the error term decays with \(g\) and we get an explicit constant. At \(s=\tfrac{1}{3}\), the error term stops decaying, but the main term has a pole, resulting in a contribution of roughly a constant times one-third of the genus (since the characters in \(\mathfrak{H}(3g)\) have genus \(3g\)). When \(s<\tfrac{1}{3}\), the error term begins growing exponentially, making the main term no longer a main term.

The main technique in proving Theorem 1.1 is to use the approximate functional equation to write the \(L\)-function as a principal sum and a dual sum, and we average each of them over the family. In typical applications where one computes the moments at \(s=\tfrac{1}{2}\), the main term will come from the principal sum and the oscillations of the sign of the functional equation will make the dual sum smaller. In fact, for \(s>\tfrac{1}{3}\), we find that the principal sum has \(M(s)\) as a main term as well as a secondary term that decays. However, this secondary term has a pole at \(s=\tfrac{1}{3}\) and starts growing exponentially for \(s<\tfrac{1}{3}\). For the dual sum, on the other hand, we find a main term which is exactly the negative of the secondary term of the principal sum.
Analyzing the poles at \(s=\tfrac{1}{3}\) of each of the principal and dual sums then yields the result for \(s=\tfrac{1}{3}\). The principal sum is a straightforward double character sum, and we apply standard number theory techniques to compute it. However, the dual sum comes with an extra factor of the sign of the functional equation. To handle this, we then need to compute exactly the average of cubic Gauss sums coming from the functional equation. Over number fields, such averages were first considered by [10] following the ideas of Kubota [14, 15] involving "metaplectic forms" (see also [13]), and over function fields, by Hoffstein [12] and Patterson [10]. The precise formulas that are needed to evaluate the average of cubic Gauss sums were developed in [11], building on [12, 10]. We will follow the notation of [11] closely and rely on their results when computing the dual sum in Section 4. In particular, we will extend the main result of [11] in order to average Gauss sums weighted by Euler products (see Proposition 4.7).

Finally, we would like to bring attention to the transition of \(E_{s}(g)\) at \(s=\frac{2}{3}\). This is completely analogous to what happens to the main term at \(s=\frac{1}{3}\) and is not too surprising, since the functional equation relates the value at \(\frac{1}{3}\) to the value at \(\frac{2}{3}\). With this, it seems reasonable that the true behaviour of \(E_{\frac{2}{3}}(g)\) should be a constant times \(g\), and not \(g^{2}\). However, it is not immediately clear what to expect for the true behaviour beyond \(\frac{2}{3}\). It is very natural to ask if Theorem 1.1 would hold over number fields, i.e., averaging over a similar thin family of cubic characters over \(\mathbb{Q}(\xi_{3})\), where \(\xi_{3}\) is a primitive third root of unity. The exact estimates for averages of cubic Gauss sums of [11] and the present paper would have to be developed over number fields, which should not be a problem. We are also using the Riemann Hypothesis, which is true over function fields (or the Lindelöf Hypothesis, see Lemma 3.2), to estimate the principal sum, but this could presumably be avoided by using the cubic large sieve, as in [12] and [10]. For the thin family of cubic characters over \(\mathbb{Q}(\xi_{3})\), it was shown by the first author and Guloglu [12] that it is possible to break the "trivial" support of the Fourier transform in the one-level density (under GRH) to obtain a positive proportion of non-vanishing of L-functions of cubic characters, by bounding averages of Gauss sums over primes, which seems encouraging. For the full families of cubic characters, a (much smaller) positive proportion of non-vanishing can be found by using different techniques, based on the mollified moments, which was done over function fields for the non-Kummer family in [11] and over \(\mathbb{Q}(\xi_{3})\) for the Kummer family in [12] (under GRH). We are not aware of any result like Theorem 1.1 over number fields.

### Random Matrix Model

Another reason why looking at statistics at \(s=\frac{1}{2}\) is interesting is that when \(\delta_{\chi}=1\) the \(q\)-dependence on the right-hand side of (1.2) disappears, and \[L(\tfrac{1}{2},\chi)=\det(1-\Theta_{\chi}).\] That is, we may apply the Katz-Sarnak [14] philosophy, which states that the Frobenii of a family of \(L\)-functions should equidistribute in some compact matrix Lie group.
Specifically, if \(f\) is any continuous class function and \(\mathcal{F}_{g}\) is a "nice" family of \(L\)-functions of fixed genus \(g\), then the Katz-Sarnak philosophy predicts that there is some compact matrix Lie group, \(G_{g}\)1, such that Footnote 1: \(G_{g}\) will be a subgroup of \(U(g)\) and will typically be one of \(U(g)\), \(USp(g)\), \(SO(g)\), \(SO_{even}(g)\), \(SO_{odd}(g)\). \[\lim_{q\to\infty}\frac{1}{|\mathcal{F}_{g}|}\sum_{L\in\mathcal{F}_{g}}f(\Theta _{L})=(1+o(1))\int_{G_{g}}f(U)dU. \tag{1.7}\] where \(dU\) is the Haar measure and the \(o(1)\) term vanishes as \(g\) tends to infinity. Applying this with the continuous class function \(f(U)=\det(1-U)^{k}\) allows us to predict that the moments of \(L\)-functions in "nice" families at \(s=\frac{1}{2}\) should behave like the moments of the characteristic polynomial of a matrix in a compact matrix Lie group at \(1\), and \[\lim_{q\to\infty}\frac{1}{|\mathcal{F}_{g}|}\sum_{L\in\mathcal{F}_{g}}L( \tfrac{1}{2})^{k}=(1+o(1))\int_{G_{g}}\det(1-U)^{k}dU.\] This framework can further help to explain the recipe in [10] and the ensuing conjecture in (1.4). That is, it is known that the Frobenii attached to quadratic characters are symplectic and that \[\int_{USp(2g)}\det(1-U)^{k}dU=Q_{k}(g)\] for some explicit polynomials of degree \(\frac{k(k+1)}{2}\), which coincides with the conjecture of Andrade and Keating [1]. Many results ([1, 10, 11, 12, 13, 14, 15]) suggest that the compact matrix Lie group attached to cubic characters is the group of unitary matrices. This is further reinforced by (1.5) and Theorem 1.1 with the observation that \[\lim_{q\to\infty}A_{q}=\lim_{q\to\infty}M(\tfrac{1}{2})=\int_{U(g)}\det(1-U)du =1.\] However, this framework can not, a priori, help to explain the transition term at \(s=\frac{1}{3}\) since the right hand side of \[L(\tfrac{1}{3},\chi)=\det(1-q^{1/6}\Theta_{\chi}) \tag{1.8}\] is not independent of \(q\) and, hence, can not be identified with a _single_ continuous class function \(f\) of the unitaries as \(q\) grows. ### Weighted Random Matrix Model An important set of continuous class functions are the mixed trace functions \[P_{\lambda}(U):=\prod_{j=1}^{\infty}\operatorname{Tr}(U^{j})^{\lambda_{j}}\] where \(\lambda=1^{\lambda_{1}}2^{\lambda_{2}}\cdots\) is the partition consisting of \(\lambda_{1}\) ones, \(\lambda_{2}\) twos, etc. These form a basis for continuous class functions of the unitary matrices and so it would be enough to prove (1.7) for \(f=P_{\lambda}\) for all \(\lambda\). Towards this, the second author in [14] proved a partial result towards (1.7) for some family of cubic characters \(\mathcal{F}_{3}(N)\). Specifically, for all \(\lambda\) such that \(|\lambda|:=\sum j\lambda_{j}<N\) we have \[\lim_{q\to\infty}\frac{1}{|\mathcal{F}_{3}(N)|}\sum_{\chi\in\mathcal{F}_{3}(N)}P_ {\lambda}(\Theta_{\chi})=\int_{U(N)}P_{\lambda}(U)dU. \tag{1.9}\] One unsatisfying aspect of (1.9) is that the right hand side is always \(0\). The main goal of [10] was to find an appropriate normalization for the left hand side to be non-zero and see how that could effect the random matrix interpretation. 
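As a quick Monte Carlo sanity check (ours, not from the paper) of the identity \(\int_{U(g)}\det(1-U)\,dU=1\) quoted above, one can average \(\det(1-U)\) over Haar-random unitary matrices sampled via a phase-corrected QR decomposition.

```python
# Sketch (ours): Monte Carlo check that the Haar average of det(1 - U) over U(g) is 1.
import numpy as np

def haar_unitary(n, rng):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # phase correction for Haar measure

rng = np.random.default_rng(1)
g, samples = 12, 20000
mean = np.mean([np.linalg.det(np.eye(g) - haar_unitary(g, rng)) for _ in range(samples)])
print(mean)   # close to 1 + 0j; the fluctuation shrinks as `samples` grows
```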
Indeed, it was shown that for \(|\lambda|<\frac{3N}{4}\) [10, Theorem 1.1 with \(r=3\)] \[\lim_{q\to\infty}\frac{1}{|\mathcal{F}_{3}(N)|}\sum_{\chi\in\mathcal{F}_{3}(N)}q^{\frac{|\lambda|}{6}}P_{\lambda}(\Theta_{\chi})=\int_{U(N)}P_{\lambda}(U)\overline{\det(1-\wedge^{3}U)}dU \tag{1.10}\] where \[\det(1-\wedge^{3}U):=\prod_{1\leq i_{1}<i_{2}<i_{3}\leq N}\left(1-x_{i_{1}}x_{i_{2}}x_{i_{3}}\right)\] and the \(x_{i}\) are the eigenvalues of \(U\). While extending (1.9) to all \(\lambda\) would give us a result like (1.7), it is not immediately clear what extending (1.10) to all \(\lambda\) would give us. One possible interpretation is that, by the definition of \(P_{\lambda}\), we have \(q^{\frac{|\lambda|}{6}}P_{\lambda}(\Theta_{\chi})=P_{\lambda}(q^{\frac{1}{6}}\Theta_{\chi})\) and so an extension of (1.10) would imply the following conjecture.

**Conjecture 1.4**.: _Let \(\mathcal{F}(N)\) be a "nice" family of cubic characters defined over \(\mathbb{F}_{q}[T]\) and with conductor bounded by \(N\), and let \(f\) be a continuous class function. Then_ \[\lim_{q\to\infty}\frac{1}{|\mathcal{F}(N)|}\sum_{\chi\in\mathcal{F}(N)}f(q^{\frac{1}{6}}\Theta_{\chi})=(1+o(1))\int_{U(N)}f(U)\overline{\det(1-\wedge^{3}U)}dU\] _where the \(o(1)\) tends to \(0\) as \(N\) tends to infinity._

Notice that the result of [10] implies that Conjecture 1.4 is true for the family \(\mathcal{F}_{3}(N)\) and all continuous class functions in the span of \(\{P_{\lambda}:|\lambda|<3N/4\}\). Similar techniques would prove similar results for the family \(\mathfrak{H}(3g)\) as well. However, we see by (1.8) that \[L(\tfrac{1}{3},\chi)=f(q^{\frac{1}{6}}\Theta_{\chi})\] where \(f(U)=\det(1-U)\) is not in the span of \(\{P_{\lambda}:|\lambda|<3N/4\}\). Regardless of this, we can use Theorem 1.1 to prove Conjecture 1.4 in this case.

**Theorem 1.5**.: _Conjecture 1.4 is true with the family \(\mathfrak{H}(3g)\) and the continuous class function \(f(U)=\det(1-U)\). Specifically,_ \[\lim_{q\to\infty}\frac{1}{|\mathfrak{H}(3g)|}\sum_{\chi\in\mathfrak{H}(3g)}L(\tfrac{1}{3},\chi)=\left(1+O\left(\frac{1}{g}\right)\right)\int_{U(3g)}\det(1-U)\overline{\det(1-\wedge^{3}U)}dU.\]

We see here another "natural" reason to restrict to \(\mathfrak{H}(3g)\). That is, we are aiming to mimic the connection between quadratic characters and symplectic matrices. Since symplectic matrices only exist in even dimensions, it seems "natural" that whatever analogous connection can be made would be most prevalent in dimensions divisible by \(3\).

### Structure of the paper

We present in Section 2 some background on L-functions and cubic characters over function fields, and we describe the thin family of cubic characters that we are using. In particular, we generalize the approximate functional equation of [10] from \(s=\frac{1}{2}\) to general \(s\). Using the approximate functional equation, the average value can be written as a principal sum and a dual sum as in (2.21). We compute the principal sum in Section 3, the dual sum in Section 4, and using those estimates, the proofs of Theorems 1.1 and 1.5 are given in Section 5.

## 2. Background on \(L\)-functions

### Cubic characters over \(\mathbb{F}_{q}[T]\)

We denote by \(\mathcal{M}\) the set of monic polynomials in \(\mathbb{F}_{q}[t]\), and by \(\mathcal{M}_{d}\), respectively \(\mathcal{M}_{\leq d}\), the set of monic polynomials in \(\mathbb{F}_{q}[t]\) of degree \(d\), respectively of degree \(\leq d\). Let \(q\) be a prime power with \(q\equiv 1\mod 6\).
We fix once and for all an isomorphism \(\Omega\) from the cube roots of unity in \(\mathbb{F}_{q}^{*}\) to \(\mu_{3}=\{1,\xi_{3},\xi_{3}^{2}\}\), the cube roots of unity in \(\mathbb{C}^{*}\), where \(\xi_{3}=e^{2\pi i/3}\). We then denote by \(\chi_{3}\) the cubic residue symbol of \(\mathbb{F}_{q}^{*}\) given by \[\chi_{3}(a)=\Omega(a^{(q-1)/3}),\ \ \text{for all $a\in\mathbb{F}_{q}^{*}$.} \tag{2.1}\] For each prime \(P\in\mathbb{F}_{q}[t]\), we define the cubic residue symbol of conductor \(P\) \[\chi_{P}:\mathbb{F}_{q}[t]/(P)\to\mu_{3}\] as follows: for \(a\in\mathbb{F}_{q}[t]\), if \(P\mid a\), then \(\chi_{P}(a)=0\), and otherwise, \(\chi_{P}(a)=\alpha\), where \(\alpha\in\mu_{3}\) is such that \[a^{\frac{q^{\deg P}-1}{3}}\equiv\Omega^{-1}(\alpha)\bmod P.\] There are then 2 cubic characters of conductor \(P\), \(\chi_{P}\) and \(\chi_{P}^{2}=\overline{\chi}_{P}\). We extend the definition to \(F\in\mathcal{M}\) by multiplicativity. Writing \(F=P_{1}^{e_{1}}\dots P_{s}^{e_{s}}\) where the \(P_{i}\) are distinct primes and the \(e_{i}\) are positive integers, we define \[\chi_{F}(a)=\chi_{P_{1}}(a)^{e_{1}}\dots\chi_{P_{s}}(a)^{e_{s}}. \tag{2.2}\] Then, \(\chi_{F}\) is a cubic character of conductor \(\operatorname{rad}(F)=P_{1}\dots P_{s}\). Conversely, all the primitive cubic characters of conductor \(P_{1}\dots P_{s}\) are given by \(\chi_{P_{1}}^{e_{1}}\dots\chi_{P_{s}}^{e_{s}}\) with \(1\leq e_{i}\leq 2\), and there are \(2^{s}\) such characters. We say that a cubic character \(\chi\) is even if \(\chi|_{\mathbb{F}_{q}^{*}}=\chi_{0}\), the trivial character, and that \(\chi\) is odd if \(\chi|_{\mathbb{F}_{q}^{*}}=\chi_{3}\) or \(\chi_{3}^{2}\). We define \[\delta_{\chi}=\begin{cases}1&\text{when $\chi$ is odd}\\ 0&\text{when $\chi$ is even.}\end{cases}\] The best classification of cubic characters is by genus. From the Riemann-Hurwitz formula (Theorem 7.16 of [14]), we compute \[g=\deg\operatorname{cond}(\chi)-2+\delta_{\chi},\] and we denote by \(\mathfrak{M}_{3}(g)\) the set of primitive cubic characters over \(\mathbb{F}_{q}[t]\) of genus \(g\). \(\mathfrak{M}_{3}(g)\) is naturally divided in three disjoint subsets \(\mathfrak{M}_{3}^{0}(g),\mathfrak{M}_{3}^{1}(g),\mathfrak{M}_{3}^{2}(g)\), depending on the restriction of \(\chi\) over \(\mathbb{F}_{q}\), and we define for \(j=0,1,2\), \[\mathfrak{M}_{3}^{j}(g)=\{\chi\in\mathfrak{M}_{3}(g)\;:\;\chi|_{\mathbb{F}_{q}^{*}}=\chi_{3}^{j}\} \tag{2.3}\] where we identify \(\chi_{3}^{0}=\chi_{0}\).
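To make the residue-symbol definition (2.1)-(2.2) above concrete, here is a small self-contained computational sketch (an illustration only, not taken from the paper) in the prime-field case \(q=7\) with the irreducible modulus \(P=t^{2}+1\). It checks that \(a^{(q^{\deg P}-1)/3}\bmod P\) is indeed a constant cube root of unity in \(\mathbb{F}_{7}\) (i.e. the sketch returns \(\Omega^{-1}\) of the symbol rather than the complex value) and that the symbol is multiplicative. All helper names and the chosen polynomials are ours; assumes plain Python:

```python
# Cubic residue symbol chi_P over F_q[t] for q = 7 (so q = 1 mod 6), as in (2.1):
# chi_P(a) is the constant c with a^((q^deg(P) - 1)/3) = c (mod P), c a cube root of 1 in F_q.
q = 7
P = [1, 0, 1]                      # t^2 + 1, irreducible over F_7 since -1 is not a square mod 7

def trim(f):
    while len(f) > 1 and f[-1] == 0:
        f = f[:-1]
    return f

def polymul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % q
    return trim(out)

def polymod(f, m):
    f = trim(list(f))
    while len(f) >= len(m) and f != [0]:
        shift, lead = len(f) - len(m), f[-1]
        for i, c in enumerate(m):       # m is monic, so subtract lead * t^shift * m
            f[shift + i] = (f[shift + i] - lead * c) % q
        f = trim(f)
    return f

def polypow(a, e, m):
    result, base = [1], polymod(a, m)
    while e > 0:
        if e & 1:
            result = polymod(polymul(result, base), m)
        base = polymod(polymul(base, base), m)
        e >>= 1
    return result

def chi_P(a):
    r = polypow(a, (q ** (len(P) - 1) - 1) // 3, P)
    assert len(r) == 1 and pow(r[0], 3, q) == 1   # a constant cube root of unity in F_7
    return r[0]

a, b = [3, 1], [5, 2, 1]            # a = t + 3, b = t^2 + 2t + 5, both coprime to P
assert chi_P(polymul(a, b)) == (chi_P(a) * chi_P(b)) % q   # multiplicativity of the character
print(chi_P(a), chi_P(b), chi_P(polymul(a, b)))
```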
In particular, we get \[\mathfrak{M}_{3}(g)=\mathfrak{M}_{3}^{0}(g)\cup\mathfrak{M}_{3}^{1}(g)\cup \mathfrak{M}_{3}^{2}(g).\] Using the observation that \(\frac{q^{n}-1}{3}\equiv\frac{n(q-1)}{3}\mod q-1\), we get that if \(a\in\mathbb{F}_{q}^{*}\), then for any prime \(P\), \[a^{\frac{q^{\deg(P)}-1}{3}}=a^{\frac{\deg(P)(q-1)}{3}}\equiv\Omega^{-1}\left( \chi_{3}^{\deg(P)}(a)\right)\mod P.\] Extending this multiplicatively, we find that \(\chi_{F}|_{\mathbb{F}_{q}^{*}}=\chi_{3}^{\deg(F)}.\) Hence, if we define \[\mathcal{M}_{3}^{j}(d):=\{F\in\mathbb{F}_{q}[T]:F\text{ cube-free},\deg( \operatorname{rad}(F))=d,\deg(F)\equiv j\bmod 3\}\] then we have \[\mathfrak{M}_{3}^{0}(g)=\{\chi_{F}\;:\;F\in\mathcal{M}_{3}^{0}(g+2)\}\] and for \(j=1,2\), \[\mathfrak{M}_{3}^{j}(g)=\{\chi_{F}\;:\;F\in\mathcal{M}_{3}^{j}(g+1)\}\] Let \[\mathcal{H}(d):=\{F\in\mathbb{F}_{q}[T]:F\text{ square-free},\deg(F)=d\}.\] For \(g\equiv 0\bmod 3\), let \[\mathfrak{H}(g):=\{\chi_{F}:F\in\mathcal{H}(g+1)\}\subset\mathfrak{M}_{3}^{1}( g)\subset\mathfrak{M}_{3}(g)\] while if \(g\equiv 1\bmod 3\), let \[\mathfrak{H}(g):=\{\chi_{F}:F\in\mathcal{H}(g+2)\cup\mathcal{H}(g+1)\}\subset \mathfrak{M}_{3}^{0}(g)\cup\mathfrak{M}_{3}^{2}(g)\subset\mathfrak{M}_{3}(g).\] Somewhat surprisingly, if \(g\equiv 2\bmod 3\), then we find that there are no elements of \(\mathfrak{M}_{3}(g)\) with a square-free discriminant. Hence we set \(\mathfrak{H}(g)=\emptyset\) in this case. We see that there is a natural bijection from \(\mathfrak{H}(3g)\) to \(\mathcal{H}(3g+1)\), which is reminiscent of the family usually considered for quadratic characters \(\mathfrak{M}_{2}(2g)\) which comes with a natural bijection to \(\mathcal{H}(2g+1)\). From this point of view, the family \(\mathfrak{H}(3g)\) is a "natural" extension of the quadratic family. As the functional equation of the \(L\)-functions depends on the parity of the character, it will be useful to distinguish them. Thus, we define \[\mathfrak{H}_{e}(g):=\{\chi\in\mathfrak{H}(g):\chi\text{ is even}\}\qquad \text{ and }\qquad\mathfrak{H}_{o}(g):=\{\chi\in\mathfrak{H}(g):\chi\text{ is odd}\}.\] Notice that the even characters are exactly those in \(\mathfrak{M}_{3}^{0}(g)\) so that if \(g\not\equiv 1\bmod 3\), \(\mathfrak{H}_{e}(g)=\emptyset\). Now, the standard square-free sieve tells us that for \(d\geq 1\) \[|\mathcal{H}(d)|=\frac{q^{d}}{\zeta_{q}(2)}=q^{d}-q^{d-1}, \tag{2.4}\] so that we get \[|\mathfrak{H}_{e}(g)|=\begin{cases}0&g\not\equiv 1\bmod 3\\ |\mathcal{H}(g+2)|&g\equiv 1\bmod 3\end{cases}=\begin{cases}0&g\not\equiv 1 \bmod 3\\ q^{g+2}-q^{g+1}&g\equiv 1\bmod 3\end{cases}\] and \[|\mathfrak{H}_{o}(g)|=\begin{cases}|\mathcal{H}(g+1)|&g\not\equiv 2\bmod 3\\ 0&g\equiv 2\bmod 3\end{cases}=\begin{cases}q^{g+1}-q^{g}&g\not\equiv 2 \bmod 3\\ 0&g\equiv 2\bmod 3\end{cases}\] from which we may conclude that \[|\mathfrak{H}(g)|=|\mathfrak{H}_{e}(g)|+|\mathfrak{H}_{o}(g)|=\begin{cases}q^{g+1}- q^{g}&g\equiv 0\mod 3\\ q^{g+2}-q^{g}&g\equiv 1\mod 3\\ 0&g\equiv 2\mod 3\end{cases}.\] ### Functional Equation The affine zeta function over \(\mathbb{F}_{q}[t]\) is defined by \[\mathcal{Z}_{q}(u)=\sum_{f\in\mathcal{M}}u^{\deg f}=\prod_{P}\left(1-u^{\deg P }\right)^{-1}=\frac{1}{(1-qu)} \tag{2.5}\] for \(|u|<q^{-1}\). The right-hand side provides an analytic continuation to the entire complex plane, with a simple pole at \(u=1/q\) with residue \(-1/q\). We also define \[\zeta_{q}(s)=\mathcal{Z}_{q}(q^{-s}). 
\tag{2.6}\] Replacing in (2.4), we express the size of \(\mathfrak{H}(g)\) in terms of values of \(\zeta_{q}(s)\): \[|\mathfrak{H}(g)|=\begin{cases}\frac{q^{g+1}}{\zeta_{q}(2)}&g\equiv 0\mod 3\\ \frac{q^{g+2}}{\zeta_{q}(3)}&g\equiv 1\mod 3\\ 0&g\equiv 2\mod 3.\end{cases}\] Let \(\chi\) be a primitive cubic Dirichlet character as defined in Section 2.1, and let \(h\in\mathcal{M}\) be its conductor. We define the \(L\)-function in the \(u\)-variable as \[\mathcal{L}(u,\chi)=\sum_{F\in\mathcal{M}}\chi(F)u^{\deg(F)}\] so that \(L(s,\chi)=\mathcal{L}(q^{-s},\chi)\), where \(L(s,\chi)\) is defined in (1.1). If \(\chi\) is even, we have that \(\mathcal{L}(1,\chi)=0\), and we define the completed \(L\)-function \[\mathcal{L}_{C}(u,\chi)=\frac{\mathcal{L}(u,\chi)}{(1-u)^{1-\delta_{\chi}}}. \tag{2.7}\] Let \(g\) be the genus of the character \(\chi\). It follows from the Weil conjectures [20] that \(\mathcal{L}_{C}(u,\chi)\) is a polynomial of degree \(g\) and it satisfies the functional equation \[\mathcal{L}_{C}(u,\chi)=\omega(\chi)(\sqrt{q}u)^{g}\mathcal{L}_{C}\left(\frac{1}{qu},\overline{\chi}\right) \tag{2.8}\] where \(\omega(\chi)\) is the sign of the functional equation. To give a formula for \(\omega(\chi)\), we need to define the Gauss sums of characters over \(\mathbb{F}_{q}[t]\). We start with Gauss sums for characters over \(\mathbb{F}_{q}^{\ast}\). If \(\chi\) is a non-trivial character of \(\mathbb{F}_{q}\), we define \[\tau(\chi):=\sum_{a\in\mathbb{F}_{q}^{\ast}}\chi(a)e^{\frac{2\pi i\,\mathrm{tr}_{\mathbb{F}_{q}/\mathbb{F}_{p}}(a)}{p}}.\] Then, \(\tau(\overline{\chi})=\overline{\tau(\chi)}\) and \(|\tau(\chi)|=q^{1/2}\), and we denote the sign on the Gauss sum by \[\epsilon(\chi):=q^{-1/2}\tau(\chi).\] We extend the definition to trivial characters by defining \(\epsilon(\chi_{0})=1\). To define the Gauss sums of general characters over \(\mathbb{F}_{q}[t]\), we define the exponential over \(\mathbb{F}_{q}(t)\) as follows: for any \(a\in\mathbb{F}_{q}((1/t))\), we have \[e_{q}(a)=e^{\frac{2\pi i\,\mathrm{tr}_{\mathbb{F}_{q}/\mathbb{F}_{p}}(a_{1})}{p}},\] where \(a_{1}\) is the coefficient of \(1/t\) in the Laurent expansion of \(a\). We then have the usual properties: \(e_{q}(a+b)=e_{q}(a)e_{q}(b)\) and \(e_{q}(a)=1\) for \(a\in\mathbb{F}_{q}[t]\). Also, if \(a,b,h\in\mathbb{F}_{q}[t]\) with \(a\equiv b\bmod h\), then \(e_{q}(a/h)=e_{q}(b/h)\). For \(\chi\) a primitive cubic character of modulus \(h\) over \(\mathbb{F}_{q}[t]\), the Gauss sum of \(\chi\) is \[G(\chi)=\sum_{a\bmod h}\chi(a)e_{q}\left(\frac{a}{h}\right).\] It is not hard to show that \(G(\overline{\chi})=\overline{G(\chi)}\) and \(|G(\chi)|=q^{\deg h/2}\). **Lemma 2.1**.: _[_1_, Corollary 2.3]_ _Let \(\chi\) be a primitive cubic character of conductor \(h\). Then,_ \[\omega(\chi)=\overline{\epsilon(\chi_{3}^{\deg h})}\;\frac{G(\chi)}{q^{\deg h/2}}\] _where \(\chi_{3}\) is the cubic character defined in (2.1). We then have_ \[\omega(\overline{\chi})=\overline{\omega(\chi)}\qquad\text{ and }\qquad|\omega(\chi)|=1.\] The following result generalizes [1, Proposition 2.4] which gives the approximate functional equation when \(s=\frac{1}{2}\). **Proposition 2.2** (Approximate functional equation).: _Let \(\chi\) be a primitive character of genus \(g\), and \(A\) be a positive integer.
If \(\chi\) is odd, then_ \[\mathcal{L}(q^{-s},\chi)=\sum_{f\in\mathcal{M}_{\leq A}}\frac{\chi(f)}{|f|^{s} }+\omega(\chi)(q^{1/2-s})^{g}\sum_{f\in\mathcal{M}_{\leq g-A-1}}\frac{\overline {\chi}(f)}{|f|^{1-s}} \tag{2.9}\] _If \(\chi\) is even, then_ \[\mathcal{L}(q^{-s},\chi)= \frac{1}{1-q^{1-s}}\left[\sum_{f\in\mathcal{M}_{\leq A+1}}\frac{ \chi(f)}{|f|^{s}}-q^{1-s}\sum_{f\in\mathcal{M}_{\leq A}}\frac{\chi(f)}{|f|^{s} }\right] \tag{2.10}\] \[+\frac{1}{1-q^{s}}\frac{\omega(\chi)}{q^{(s-1/2)g}}\frac{\zeta_{ q}(2-s)}{\zeta_{q}(s+1)}\left[\sum_{f\in\mathcal{M}_{\leq g-A}}\frac{ \overline{\chi}(f)}{|f|^{1-s}}-q^{s}\sum_{f\in\mathcal{M}_{\leq g-A-1}}\frac{ \overline{\chi}(f)}{|f|^{1-s}}\right]\] Proof.: Since \(\mathcal{L}(u,\chi)\) is a polynomial of degree \(g+1-\delta_{\chi}\), we may write \[\mathcal{L}(u,\chi)=\sum_{n=0}^{g+1-\delta_{\chi}}a_{n}(\chi)u^{n} \tag{2.11}\] where \[a_{n}(\chi)=\sum_{f\in\mathcal{M}_{n}}\chi(f). \tag{2.12}\] Similarly, for \(\mathcal{L}_{C}(u,\chi)\) we write \[\mathcal{L}_{C}(u,\chi)=\sum_{n=0}^{g}b_{n}(\chi)u^{n}. \tag{2.13}\] Substituting (2.13) into (2.8) and comparing coefficients we get \[b_{n}(\chi)=\omega(\chi)q^{n-g/2}b_{g-n}(\overline{\chi}). \tag{2.14}\] Applying (2.7), we can write \(a_{n}(\chi)\) in terms of \(b_{n}(\chi)\) such that for \(n=0,\ldots,g\) \[a_{n}(\chi)=\begin{cases}b_{n}(\chi)&\delta_{\chi}=1\\ b_{n}(\chi)-b_{n-1}(\chi)&\delta_{\chi}=0\end{cases} \tag{2.15}\] while if \(\delta_{\chi}=0\) then \(a_{g+1}(\chi)=-b_{g}(\chi)\). Reversing (2.15), we can write \(b_{n}(\chi)\) in term of \(a_{n}(\chi)\) such that \[b_{n}(\chi)=\begin{cases}a_{n}(\chi)&\delta_{\chi}=1\\ \sum_{m=0}^{n}a_{m}(\chi)&\delta_{\chi}=0\end{cases} \tag{2.16}\] for \(n=0,\ldots,g\). Finally, if \(\delta_{\chi}=0\), then we may apply (2.14) to (2.15) to obtain for any \(0\leq A\leq g\), \[a_{g-A}(\chi) =b_{g-A}(\chi)-b_{g-A-1}(\chi)\] \[=\omega(\chi)q^{g/2-A}b_{A}(\overline{\chi})-\omega(\chi)q^{g/2-A -1}b_{A+1}(\overline{\chi})\] \[=\omega(\chi)q^{g/2-A}b_{A}(\overline{\chi})-\omega(\chi)q^{g/2- A-1}\left(a_{A+1}(\overline{\chi})+b_{A}(\overline{\chi})\right).\] Rearranging and taking conjugates we then obtain \[\frac{b_{A}(\chi)}{q^{(A+1)s}}=\frac{1}{q-1}\left(\frac{\omega(\chi)}{q^{g/2} }a_{g-A}(\overline{\chi})q^{(A+1)(1-s)}+\frac{a_{A+1}(\chi)}{q^{(A+1)s}}\right) \tag{2.17}\] where we have used \(1/\omega(\overline{\chi})=\omega(\chi)\). Replacing \(A\) with \(g-A-1\), \(s\) with \(1-s\) and \(\chi\) with \(\overline{\chi}\) we obtain \[\frac{b_{g-A-1}(\overline{\chi})}{q^{(g-A)(1-s)}}=\frac{1}{q-1}\left(\frac{ \overline{\omega(\chi)}}{q^{g/2}}a_{A+1}(\chi)q^{(g-A)s}+\frac{a_{g-A}( \overline{\chi})}{q^{(g-A)(1-s)}}\right). 
\tag{2.18}\] Splitting the sum in (2.13) at an arbitrary point \(A\), and applying (2.14), we obtain \[\mathcal{L}_{C}(u,\chi) =\sum_{n=0}^{A}b_{n}(\chi)u^{n}+\sum_{n=A+1}^{g}b_{n}(\chi)u^{n}\] \[=\sum_{n=0}^{A}b_{n}(\chi)u^{n}+\omega(\chi)\sum_{n=A+1}^{g}q^{n- g/2}b_{g-n}(\overline{\chi})u^{n}\] \[=\sum_{n=0}^{A}b_{n}(\chi)u^{n}+\omega(\chi)(\sqrt{q}u)^{g}\sum_{ n=0}^{g-A-1}\frac{b_{n}(\overline{\chi})}{q^{n}u^{n}}.\] If \(\delta_{\chi}=1\), then \(\mathcal{L}(u,\chi)=\mathcal{L}_{C}(u,\chi)\) and \(a_{n}(\chi)=b_{n}(\chi)\) and so we get the odd approximate function equation \[\mathcal{L}(q^{-s},\chi) =\sum_{n=0}^{A}\frac{a_{n}(\chi)}{q^{sn}}+\omega(\chi)(q^{1/2-s} )^{g}\sum_{n=0}^{g-A-1}\frac{a_{n}(\overline{\chi})}{q^{(1-s)n}}\] \[=\sum_{f\in\mathcal{M}_{\leq A}}\frac{\chi(f)}{|f|^{s}}+\omega( \chi)(q^{1/2-s})^{g}\sum_{f\in\mathcal{M}_{\leq g-A-1}}\frac{\overline{\chi}( f)}{|f|^{1-s}}. \tag{2.19}\] If \(\delta_{\chi}=0\), then we get \(\mathcal{L}(u,\chi)=(1-u)\mathcal{L}_{C}(u,\chi)\) and we get \[\mathcal{L}(q^{-s},\chi) =\sum_{n=0}^{A}\frac{b_{n}(\chi)}{q^{sn}}(1-q^{-s})+\omega(\chi)q^ {(1/2-s)g}\sum_{n=0}^{g-A-1}\frac{b_{n}(\overline{\chi})}{q^{(1-s)n}}(1-q^{-s})\] \[=\sum_{n=0}^{A}\frac{b_{n}(\chi)}{q^{sn}}\left(1-q^{-s}\right)+ \omega(\chi)\frac{\zeta_{q}(2-s)}{\zeta_{q}(s+1)}\frac{q^{g/2}}{q^{sg}}\sum_{n =0}^{g-A-1}\frac{b_{n}(\overline{\chi})}{q^{(1-s)n}}\left(1-q^{-(1-s)}\right).\] Expanding the \((1-q^{-s})\) term in the first series and applying (2.15) and (2.17) with the observation that \(b_{0}(\chi)=a_{0}(\chi)\), we obtain \[\sum_{n=0}^{A}\frac{b_{n}(\chi)}{q^{sn}}\left(1-q^{-s}\right) =b_{0}(\chi)+\sum_{n=1}^{A}\frac{b_{n}(\chi)-b_{n-1}(\chi)}{q^{sn }}-\frac{b_{A}(\chi)}{q^{(A+1)s}}\] \[=b_{0}(\chi)+\sum_{n=1}^{A}\frac{a_{n}(\chi)}{q^{sn}}-\frac{1}{q- 1}\left(\frac{\omega(\chi)}{q^{g/2}}a_{g-A}(\overline{\chi})q^{(A+1)(1-s)}+ \frac{a_{A+1}(\chi)}{q^{(A+1)s}}\right)\] \[=\sum_{f\in\mathcal{M}_{\leq A}}\frac{\chi(f)}{|f|^{s}}-\frac{1}{q -1}\left(\frac{\omega(\chi)}{q^{g/2}}a_{g-A}(\overline{\chi})q^{(A+1)(1-s)}+ \frac{a_{A+1}(\chi)}{q^{(A+1)s}}\right).\] Likewise, expanding the \((1-q^{-(1-s)})\) and applying (2.15) and (2.18), the second series becomes \[\sum_{f\in\mathcal{M}_{\leq g-A-1}}\frac{\overline{\chi}(f)}{|f|^{1-s}}-\frac{ 1}{q-1}\left(\frac{\overline{\omega(\chi)}}{q^{g/2}}a_{A+1}(\chi)q^{(g-A)s}+ \frac{a_{g-A}(\overline{\chi})}{q^{(g-A)(1-s)}}\right).\] Combining everything, when \(\delta_{\chi}=0\), we obtain the even approximate function equation \[\mathcal{L}(q^{-s},\chi) =\sum_{f\in\mathcal{M}_{\leq A}}\frac{\chi(f)}{|f|^{s}}+\frac{ \zeta_{q}(2-s)}{\zeta_{q}(s+1)}\frac{\omega(\chi)}{q^{(s-1/2)g}}\sum_{f\in \mathcal{M}_{\leq g-A-1}}\frac{\overline{\chi}(f)}{|f|^{1-s}} \tag{2.20}\] \[\quad+\frac{1}{1-q^{1-s}}\frac{a_{A+1}(\chi)}{q^{(A+1)s}}+\frac{1 }{1-q^{s}}\frac{\zeta_{q}(2-s)}{\zeta_{q}(s+1)}\frac{\omega(\chi)}{q^{(s-1/2)g }}\frac{a_{g-A}(\overline{\chi})}{q^{(g-A)(1-s)}}.\] where we note that \[-\frac{1}{q-1}\left(1+\frac{\zeta_{q}(2-s)}{\zeta_{q}(s+1)}q^{s}\right)=\frac {1}{1-q^{1-s}}\text{ and }-\frac{1}{q-1}\left(1+\frac{\zeta_{q}(s+1)}{\zeta_{q}(2-s)}q^{1-s}\right)= \frac{1}{1-q^{s}}.\] Finally, rewriting \[\frac{a_{A+1}(\chi)}{q^{A+1}s}=\sum_{f\in\mathcal{M}_{\leq A+1}}\frac{\chi(f)} {|f|^{s}}-\sum_{f\in\mathcal{M}_{\leq A}}\frac{\chi(f)}{|f|^{s}}\] and \[\frac{a_{g-A}(\chi)}{q^{(g-A)(1-s)}}=\sum_{f\in\mathcal{M}_{\leq g-A}}\frac{ \chi(f)}{|f|^{1-s}}-\sum_{f\in\mathcal{M}_{\leq g-A-1}}\frac{\chi(f)}{|f|^{1-s}}\] we obtain the even approximate 
functional equation. ### Principal and Dual Terms We see that since \(\mathfrak{H}(3g)=\mathfrak{H}_{o}(3g)\), we always have the functional equation \[\mathcal{L}(q^{-s},\chi)=\sum_{f\in\mathcal{M}_{\leq A}}\frac{\chi(f)}{|f|^{s}} +\omega(\chi)q^{(\frac{1}{2}-s)3g}\sum_{f\in\mathcal{M}_{\leq 3g-A-1}}\frac{ \overline{\chi}(f)}{|f|^{1-s}}.\] Further, it will be convenient for computations if we split at an integer that is divisible by \(3\). Therefore, we define the principal sum \[\mathcal{P}_{s}(3g,3A):=\sum_{\chi\in\mathfrak{H}(3g)}\sum_{f\in\mathcal{M}_{ \leq 3A}}\frac{\chi(f)}{|f|^{s}}\] and the dual sum \[\mathcal{D}_{s}(3g,3A)=q^{(\frac{1}{2}-s)3g}\sum_{\chi\in\mathfrak{H}(3g)} \omega(\chi)\sum_{f\in\mathcal{M}_{\leq 3g-3A-1}}\frac{\overline{\chi}(f)}{|f|^{1-s }},\] so that, for any \(0<A<g\), we have \[\frac{1}{|\mathfrak{H}(3g)|}\sum_{\chi\in\mathfrak{H}(3g)}\mathcal{L}(q^{-s}, \chi)=\frac{\mathcal{P}_{s}(3g,3A)+\mathcal{D}_{s}(3g,3A)}{|\mathfrak{H}(3g)|} \tag{2.21}\] ## 3. The Principal Sum ### Contributions of the Principal Sum This section is devoted to proving contribution of the principal sum. **Proposition 3.1**.: _For any \(\epsilon>0\) and \(s>\epsilon\), we have if \(s\neq\frac{1}{3}\)_ \[\frac{\mathcal{P}_{s}(3g,3A)}{|\mathfrak{H}(3g)|}=M(s)+C_{q}\frac{q^{(1-3s)A}} {1-q^{3s-1}}+O_{\epsilon}\Big{(}q^{(\epsilon-3s)A}+q^{A-(1-\epsilon)(3g+1)}+q ^{(1-s+\epsilon)3A-\frac{1}{2}(3g+1)}\Big{)}\] _where \(M(s)\) and \(C_{q}\) are as an in Theorem 1.1. If \(s=\frac{1}{3}\), then_ \[\frac{P_{\frac{1}{3}}(3g,3A)}{|\mathfrak{H}(3g)|}=C_{q}\left(A+1+\sum_{P}\frac {\deg(P)}{|P|^{2}+|P|-1}\right)+O_{\epsilon}\Big{(}q^{(\epsilon-1)A}+q^{A-(1- \epsilon)(3g+1)}+q^{(2+\epsilon)A-\frac{1}{2}(3g+1)}\Big{)}\] ### Error Term Since we are assuming \(q\equiv 1\mod 6\), we have by cubic reciprocity (Theorem 3.5 of [10]) that \[\chi_{F}(f)=\left(\frac{f}{F}\right)_{3}=\left(\frac{F}{f}\right)_{3}=\chi_{ f}(F).\] Therefore, since \(\mathfrak{H}(3g)=\{\chi_{F}:F\in\mathcal{H}(3g+1)\}\) we may rewrite \[P_{s}(3g,3A)=\sum_{f\in\mathcal{M}_{\leq 3A}}\frac{1}{|f|^{s}}\sum_{F\in \mathcal{H}(3g+1)}\chi_{f}(F).\] We now wish to compute the innermost sum. To do this, we consider the generating series \[\mathfrak{P}(u;f):=\sum_{d=0}^{\infty}\sum_{F\in\mathcal{H}(d)}\chi_{f}(F)u^{ d}.\] This series converges for \(|u|<q^{-1}\). **Lemma 3.2**.: _If \(f\) is not a cube, then_ \[\mathfrak{P}(u;f)=\frac{\mathcal{L}(u,\chi_{f})}{\mathcal{L}(u^{2},\overline{ \chi}_{f})}.\] _In particular, \(\mathfrak{P}(u;f)\) can be analytically extended to the region \(|u|<q^{-1/4-\epsilon}\). Moreover, if \(\Gamma_{1}=\{u:|u|=q^{-1/2}\}\), then for any \(\epsilon>0\),_ \[\max_{u\in\Gamma_{*}}|\mathfrak{P}(u,f)|\ll q^{\epsilon\deg(f)}\] Proof.: Since we are summing over all square-free polynomials, we get the Euler product \[\mathfrak{P}(u;f) =\prod_{P}\left(1+\chi_{f}(P)u^{\deg(P)}\right)=\prod_{P}\left( \frac{1-\chi_{f}^{2}(P)u^{2\deg(P)}}{1-\chi_{f}(P)u^{\deg(P)}}\right)\] \[=\frac{\mathcal{L}(u,\chi_{f})}{\mathcal{L}(u^{2},\overline{ \chi}_{f})}\] where we have used the fact that \(\chi_{f}\) is a cubic character and thus \(\chi_{f}^{2}=\overline{\chi_{f}}\). Now, since \(\chi_{f}\) is non-trivial (\(f\) is not a cube), \(\mathcal{L}(u,\chi_{f})\) and \(\mathcal{L}(u^{2},\overline{\chi}_{f})\) are analytic for all \(u\), and \(\mathfrak{P}(u;f)\) is analytic for \(|u|<q^{-1/4-\epsilon}\), since this region does not contain the zeroes of \(\mathcal{L}(u^{2},\overline{\chi}_{f})\). 
Furthermore, since again \(\chi_{f}\) is not trivial, for \(|u|=q^{-1/2}\), we have bounds \[\mathcal{L}(u,\chi_{f}) \ll_{\epsilon}q^{\epsilon\deg f}\] \[\mathcal{L}(u^{2},\overline{\chi_{f}}) \gg_{\epsilon}q^{-\epsilon\deg f}\] for any \(\epsilon>0\). The first bound is the Lindelof hypothesis [1, Theorem 5.1] and the second bound is proven in [1, Lemma 2.6]. **Lemma 3.3**.: _For any \(\epsilon>0\), we have_ \[\frac{1}{|\mathcal{H}(d)|}\sum_{\begin{subarray}{c}f\in\mathcal{M}_{\leq \epsilon}\\ f\neq\overline{\mathbb{Q}}\end{subarray}}\frac{1}{|f|^{s}}\sum_{F\in\mathcal{ H}(d)}\chi_{f}(F)\ll_{\epsilon}q^{(1-s+\epsilon)A-\frac{1}{2}d}.\] Proof.: We first show that when \(f\) is not a cube, \[\sum_{F\in\mathcal{H}(d)}\chi_{f}(F)\ll_{\epsilon}q^{\epsilon\deg(f)}q^{ \frac{1}{2}d}. \tag{3.1}\] With \(\Gamma_{1}\) as in the previous lemma, we get that \(\frac{\mathfrak{P}(u;f)}{u^{d+1}}\) is meromorphic in the region bounded by \(\Gamma_{1}\) with a pole only at \(u=0\). Hence, \[\frac{1}{2\pi i}\oint_{\Gamma_{1}}\frac{\mathfrak{P}(u;f)}{u^{d+1}}du=\text{ Res}_{u=0}\left(\frac{\mathfrak{P}(u;f)}{u^{d+1}}\right)=\sum_{F\in\mathcal{H}(d)} \chi_{f}(F),\] and \[\frac{1}{2\pi i}\oint_{\Gamma_{1}}\frac{\mathfrak{P}(u;f)}{u^{d+1}}du\ll_{ \epsilon}\max_{u\in\Gamma_{1}}\left|\frac{\mathfrak{P}(u;f)}{u^{d+1}}\right| \ll_{\epsilon}q^{\epsilon\deg(f)}q^{\frac{1}{2}d}\] which completes the proof of (3.1). Applying this result, we then get that \[\sum_{\begin{subarray}{c}f\in\mathcal{M}_{\leq A}\\ f\neq\emptyset\end{subarray}}\frac{1}{|f|^{s}}\sum_{F\in\mathcal{H}(d)}\chi_{f}(F) \ll_{\epsilon}q^{\frac{1}{2}d}\sum_{\begin{subarray}{c}f\in \mathcal{M}_{\leq A}\\ f\neq\emptyset\end{subarray}}|f|^{(\epsilon-s)}\] \[\ll q^{\frac{1}{2}d}\sum_{n\leq A}q^{n}q^{n(\epsilon-s)}\ll q^{ \frac{1}{2}d}q^{(1-s+\epsilon)A}\] and the result follows from the fact that \(|\mathcal{H}(d)|=\frac{q^{d}}{\zeta_{q}(2)}\). ### Main Term In the case that \(f=h^{3}\) is a perfect cube, then \[\chi_{f}(F)=\left(\frac{F}{f}\right)_{3}=\left(\frac{F}{h}\right)_{3}^{3}= \begin{cases}1&(F,f)=1\\ 0&(F,f)\neq 1\end{cases}.\] Hence, in the case that \(f\) is a perfect cube, we get that \[\sum_{F\in\mathcal{H}(d)}\chi_{f}(F)=|\{\mathcal{H}(d,f)\}|\] where \[\mathcal{H}(d,f)=\{F\in\mathcal{H}(d):(F,f)=1\}.\] Therefore, we consider the generating series \[\mathcal{Q}(u;f) =\sum_{d=0}^{\infty}|\mathcal{H}(d,f)|u^{d}=\sum_{(F,f)=1}\mu^{2} (F)u^{\deg(F)}\] \[=\prod_{P|f}(1+u^{\deg(P)})=\prod_{P|f}(1+u^{\deg(P)})^{-1}\prod_{ P}\left(\frac{1-u^{2\deg(P)}}{1-u^{\deg(P)}}\right)\] \[=\prod_{P|f}(1+u^{\deg(P)})^{-1}\frac{\mathcal{Z}_{q}(u)}{ \mathcal{Z}_{q}(u^{2})}=\prod_{P|f}(1+u^{\deg(P)})^{-1}\frac{1-qu^{2}}{1-qu}\] We see that \(\mathcal{Q}(u;f)\) can be meromorphically extended to the region \(|u|<1\) with a simple pole at \(u=q^{-1}\). 
Thus, if we consider the contour \[\Gamma_{2}=\left\{u:|u|=q^{-\epsilon}\right\},\] we get that \[\frac{1}{2\pi}\oint_{\Gamma_{2}}\frac{\mathcal{Q}(u;f)}{u^{d+1}}du=\operatorname {Res}_{u=0}\left(\frac{\mathcal{Q}(u;f)}{u^{d+1}}\right)+\operatorname{Res}_ {u=q^{-1}}\left(\frac{\mathcal{Q}(u;f)}{u^{d+1}}\right)\] We see that \[\operatorname{Res}_{u=0}\left(\frac{\mathcal{Q}(u;f)}{u^{d+1}}\right)=| \mathcal{H}(d;f)|\] and for \(d\geq 1\), \[\operatorname{Res}_{u=q^{-1}}\left(\frac{\mathcal{Q}(u;f)}{u^{d+ 1}}\right) =\lim_{u\to q^{-1}}\left(\frac{(u-q^{-1})}{u^{d+1}}\prod_{P|f}(1+u ^{\deg(P)})^{-1}\frac{1-qu^{2}}{1-qu}\right)\] \[=-\prod_{P|f}\left(1+\frac{1}{|P|}\right)^{-1}\left(q^{d}-q^{d-1 }\right).\] Now, for \(u\in\Gamma_{2}\), we find that \[|\mathcal{Q}(u;f)| =\left|\prod_{P|f}(1-u^{\deg(P)})^{-1}\frac{1-qu^{2}}{1-qu}\right|\] \[\ll\prod_{P|f}(1-q^{-1})^{-1}\frac{1+q^{1-2\epsilon}}{1-q^{1- \epsilon}}\ll_{\epsilon}(1+q^{-1})^{\deg(f)}\] and we conclude that \[\mathcal{H}(d;f)=\prod_{P|f}\left(1+\frac{1}{|P|}\right)^{-1}|\mathcal{H}(d)|+O \left((1+q^{-1})^{\deg(f)}q^{cd}\right).\] **Lemma 3.4**.: _For any \(\epsilon>0\) let \(s>\epsilon\). Then if \(s\neq\frac{1}{3}\), we get_ \[\frac{1}{|\mathcal{H}(d)|}\sum_{\begin{subarray}{c}f\in\mathcal{M}_{\leq 3A} \\ f=\text{\tiny{GB}}\end{subarray}}\frac{1}{|f|^{s}}\sum_{F\in\mathcal{H}(d)} \chi_{f}(F)=M(s)+C_{q}\frac{q^{(1-3s)A}}{1-q^{3s-1}}+O_{\epsilon}\left(q^{( \epsilon-3s)A}+q^{A-(1-\epsilon)d}\right)\] _where \(M(s)\) and \(C_{q}\) are as in the Theorem 1.1. If \(s=\frac{1}{3}\), then we get_ \[\frac{1}{|\mathcal{H}(d)|}\sum_{\begin{subarray}{c}f\in\mathcal{M }_{\leq 3A}\\ f=\text{\tiny{GB}}\end{subarray}}\frac{1}{|f|^{s}}\sum_{F\in\mathcal{H}(d)} \chi_{f}(F)=C_{q}\left(A+1+\sum_{P}\frac{1}{|P|^{2}+|P|-1}\right)\\ +O_{\epsilon}\left(q^{(\epsilon-1)A}+q^{A-(1-\epsilon)d}\right).\] Proof.: Indeed, if we write \(f=h^{3}\), then we get \[\sum_{\begin{subarray}{c}f\in\mathcal{M}_{\leq 3A}\\ f=\text{\tiny{GB}}\end{subarray}}\frac{1}{|f|^{s}}\sum_{F\in\mathcal{H}(d)} \chi_{f}(F)=\sum_{h\in\mathcal{M}_{\leq A}}\frac{|\mathcal{H}(d,h)|}{|h|^{3s}} \tag{3.2}\] \[=\sum_{h\in\mathcal{M}_{\leq A}}\frac{1}{|h|^{3s}}\left[\prod_{P |h}\left(1+\frac{1}{|P|}\right)^{-1}|\mathcal{H}(d)|+O_{\epsilon}\left((1+q^{ -1})^{\deg(h)}q^{cd}\right)\right].\] First, we compute the error term and find \[\sum_{h\in\mathcal{M}_{\leq A}}\frac{(1+q^{-1})^{\deg(h)}q^{ed}}{|h|^{3s}}=q^{ ed}\sum_{m\leq A}\left(\frac{q+1}{q^{3s}}\right)^{m}\ll q^{ed+A} \tag{3.3}\] For the main term, we define the generating series \[\mathcal{G}_{s}(v):=\sum_{h\in\mathcal{M}}\prod_{P|h}\left(1+\frac{1}{|P|} \right)^{-1}\frac{v^{\deg(h)}}{|h|^{3s}}.\] Expanding it as an Euler product, we see that \[\mathcal{G}_{s}(v) =\prod_{P}\left(1+\left(1+\frac{1}{|P|}\right)^{-1}\sum_{k=1}^{ \infty}\left(\frac{v}{q^{3s}}\right)^{k\deg(P)}\right)\] \[=\prod_{P}\left(\frac{1-\left(\frac{v}{q^{3s}}\right)^{\deg(P)}+ \left(1-\frac{1}{|P|+1}\right)\left(\frac{v}{q^{3s}}\right)^{\deg(P)}}{1-\left( \frac{v}{q^{3s}}\right)^{\deg(P)}}\right)\] \[=\mathcal{Z}_{q}\left(\frac{v}{q^{3s}}\right)\prod_{P}\left(1- \frac{v^{\deg(P)}}{|P|^{3s}(|P|+1)}\right) \tag{3.4}\] Thus we see that that \(\mathcal{G}_{s}(v)\) can be meromorphically continued to the region \(|v|\leq q^{3s-\epsilon}\) with a simple pole when \(v=q^{3s-1}\). 
Therefore, if \(s>\epsilon\) and \(s\neq 1/3\), and we define \(\Gamma_{3}=\{v:|v|=q^{3s-\epsilon}\}\), then we get that \[\frac{1}{2\pi i}\oint_{\Gamma_{3}}\frac{\mathcal{G}_{s}(v)}{1-v }\frac{dv}{v^{A+1}}=\text{Res}_{v=0}\left(\frac{\mathcal{G}_{s}(v)}{(1-v)v^{ A+1}}\right) +\text{Res}_{v=q^{3s-1}}\left(\frac{\mathcal{G}_{s}(v)}{(1-v)v^{A+ 1}}\right)\] \[+\text{Res}_{v=1}\left(\frac{\mathcal{G}_{s}(v)}{(1-v)v^{A+1}} \right).\] Expanding \(\frac{1}{1-v}\) as a Taylor series we get that \[\text{Res}_{v=0}\left(\frac{\mathcal{G}_{s}(v)}{(1-v)v^{A+1}}\right) =\sum_{n=0}^{\infty}\text{Res}_{v=0}\left(\frac{\mathcal{G}_{s}( v)}{v^{A-n+1}}\right)=\sum_{h\in\mathcal{M}_{\leq A}}\frac{1}{|h|^{3s}}\prod_{P |h}\left(1+\frac{1}{|P|}\right)^{-1}\] \[=\frac{1}{|\mathcal{H}(d)|}\sum_{\begin{subarray}{c}f\in\mathcal{ M}_{\leq 3}\\ f-\mathcal{G}\end{subarray}}\frac{1}{|f|^{s}}\sum_{F\in\mathcal{H}(d)}\chi_{f}(F )+O_{\epsilon}\left(q^{A}q^{(\epsilon-1)d}\right)\] using (3.2) and (3.3). We also have \[\left|\frac{1}{2\pi i}\oint_{\Gamma_{3}}\frac{\mathcal{G}_{s}(v)}{1-v}\frac{ dv}{v^{A+1}}\right|\ll\frac{\max_{v\in\Gamma_{3}}\left|\frac{\mathcal{G}_{s}(v)}{ 1-v}\right|}{q^{(3s-\epsilon)A}}\ll_{\epsilon}q^{(\epsilon-3s)A}.\] So, we have obtained the error terms of the lemma, and it remains to compute the two other residues. By (3.4), we see that \[\text{Res}_{v=q^{3s-1}}\left(\frac{\mathcal{G}_{s}(v)}{(1-v)v^{A+1}}\right)=- \frac{q^{(1-3s)A}}{1-q^{3s-1}}\prod_{P}\left(1-\frac{1}{|P|(|P|+1)}\right)\] while \[\text{Res}_{v=1}\left(\frac{\mathcal{G}_{s}(v)}{(1-v)v^{A+1}}\right)=-\zeta_{ q}(3s)\prod_{P}\left(1-\frac{1}{|P|^{3s}(|P|+1)}\right)=-M(s)\] which converges because \(s>\epsilon\), \(s\neq\frac{1}{3}\). This completes the proof for \(s\neq\frac{1}{3}\). Now, if \(s=\frac{1}{3}\), we get the same residue at \(0\) and the same error terms (replacing \(s\) by \(\frac{1}{3}\)), but we now have a double pole at \(v=1\), and it remains to compute the residue at \(v=1\). Denoting \[\mathcal{K}(v):=\prod_{P}\left(1-\frac{v^{\deg(P)}}{|P|(|P|+1)}\right)\] we obtain that \[\operatorname{Res}_{v=1}\left(\frac{\mathcal{G}_{\frac{1}{3}}(v)}{(1-v)v^{A+1}} \right)=\lim_{v\to 1}\frac{d}{dv}\frac{\mathcal{K}(v)}{v^{A+1}}=-\mathcal{K}(1)(A+1)+ \mathcal{K}^{\prime}(1)\] The result now follows from the fact that \(C_{q}=\mathcal{K}(1)\) and that \[\frac{\mathcal{K}^{\prime}(v)}{\mathcal{K}(v)}=\frac{d}{dv}\log(\mathcal{K}(v ))=-\sum_{P}\frac{\deg(P)v^{\deg(P)-1}}{|P|(|P|+1)-v^{\deg(P)}}.\] Proof of Proposition 3.1.: We can now combine Lemmas 3.3 (with \(d=3g+1\) and replacing \(A\) by \(3A\)) and 3.4 (with \(d=3g+1\)) to prove Proposition 3.1. ## 4. The Dual Term ### Contributions of the Dual Sum This section is devoted to proving contribution of the dual terms. 
**Proposition 4.1**.: _For any \(\epsilon>0\) and \(s<1-\epsilon\), we have that if \(s\neq\frac{1}{3}\), then_ \[\frac{\mathcal{D}_{s}(3g,3A)}{|\mathfrak{H}(3g)|}=-C_{q}\frac{q^{(1-3s)A}}{1-q ^{3s-1}}+O_{\epsilon}\left(\frac{q^{3(1-s+\epsilon)A}}{q^{(2-\epsilon)g}}+ \frac{q^{(\frac{3}{4}+\epsilon)g}}{q^{3(s+\frac{1}{4}+\epsilon)A}}+q^{(\frac{ 1}{3}-s)(3g+1)}E_{s}(3g,3A)\right)\] _where \(C_{q}\) is as in Theorem 1.1 and_ \[E_{s}(3g,3A)=\begin{cases}1&s<2/3\\ (g-A)^{2}&s=2/3\\ (g-A)q^{(3s-2)(g-A)}&s>2/3.\end{cases}\] _If \(s=\frac{1}{3}\), then_ \[\frac{\mathcal{D}_{\frac{1}{3}}(3g,3A)}{|\mathfrak{H}(3g)|}=C_{q}\left(g-A- \frac{1}{3}\frac{q+2}{q-1}+\sum_{P}\frac{\deg(P)}{|P|^{3}+2|P|^{2}-1}\right)+O _{\epsilon}\left(\frac{q^{(2+\epsilon)A}}{q^{(2-\epsilon)g}}+\frac{q^{(\frac{ 3}{4}+\epsilon)g}}{q^{(\frac{3}{4}+\epsilon)A}}+1\right).\] _Remark 4.2_.: As in Theorem 1.1, when \(q\) is fixed and \(g-A\to\infty\), we could write the second result as \[\frac{\mathcal{D}_{\frac{1}{3}}(3g,3A)}{|\mathfrak{H}(3g)|}=C_{q}(g-A)\left(1+ o(1)\right).\] We keep it as is to give evidence of an explicit constant appearing in the shape of a prime sum. ### Applying Results of [16] Let \(\chi_{F}\) be any cubic character as defined by (2.2). This is then a character modulo \(F\), but not necessarily primitive. We define the generalized cubic Gauss sum \[G(V,F):=\sum_{a\bmod F}\chi_{F}(a)e_{q}\left(\frac{aV}{F}\right). \tag{4.1}\] It is important to notice that if \(\chi_{F}\) has conductor \(F^{\prime}\) with \(\deg(F^{\prime})<\deg(F)\) then \(G(1,F)\neq G(\chi_{F})\). Conversely, if \(F\) is, say square-free, then we do have that \(G(1,F)=G(\chi_{F})\). **Lemma 4.3**.: _[_11_, Corollary 2.3, Equation (22)]_ _For any \((f,F)=1\) and \(V\), we have_ \[\overline{\chi}_{F}(f)G(V,F)=G(fV,F).\] Applying Lemmas 2.1 and 4.3, with the observation that \(\chi_{f}(F)=0\) if \((F,f)=1\), we get that \[\mathcal{D}_{s}(3g,3A) =q^{(\frac{1}{2}-s)3g}\sum_{f\in\mathcal{M}_{\leq 3g-3A-1}}\frac{ 1}{|f|^{1-s}}\sum_{\begin{subarray}{c}F\in\mathcal{H}(3g+1)\\ (F,f)=1\end{subarray}}\overline{\chi}_{F}(f)\omega(\chi_{F})\] \[=\frac{\overline{\epsilon(\chi_{3})}}{q^{3gs+\frac{1}{2}}}\sum_{ f\in\mathcal{M}_{\leq 3g-3A-1}}\frac{1}{|f|^{1-s}}\sum_{\begin{subarray}{c}F\in \mathcal{H}(3g+1)\\ (F,f)=1\end{subarray}}G(f,F).\] Finally, we state another result that can be found in Lemma 2.12 of [11]. **Lemma 4.4**.: _If \((F_{1},F_{2})=1\), then_ \[G(V,F_{1}F_{2}) =\chi_{F_{1}}^{2}(F_{2})G(V,F_{1})G(V,F_{2})\] \[=G(VF_{2},F_{1})G(V,F_{2}).\] _Moreover, if there exists a prime \(P\) such that \(P^{2}|F\) and \(P\nmid V\), then_ \[G(V,F)=0.\] The second part of Lemma 4.4 implies that if \(F\in\mathcal{M}_{3g+1}\) is not square-free and \((F,f)=1\) then \(G(f,F)=0\). Therefore, in the formula of \(\mathcal{D}_{s}(3g,3A)\) above, we may remove the condition that \(F\) is square-free, and write \[\mathcal{D}_{s}(3g,3A)=\frac{\overline{\epsilon(\chi_{3})}}{q^{3gs+\frac{1}{2 }}}\sum_{f\in\mathcal{M}_{\leq 3g-3A-1}}\frac{1}{|f|^{1-s}}\sum_{ \begin{subarray}{c}F\in\mathcal{M}_{3g+1}\\ (F,f)=1\end{subarray}}G(f,F). \tag{4.2}\] **Proposition 4.5** (Proposition 3.1 of [11]).: _Let \(f=f_{1}f_{2}^{2}f_{3}^{3}\) with \(f_{1}\) and \(f_{2}\) square-free and coprime. 
We have, for any \(\epsilon>0\)_ \[\sum_{\begin{subarray}{c}F\in\mathcal{M}_{d}\\ (F,f)=1\end{subarray}}G(f,F)=\delta_{f_{2}=1}\rho(d;f)\frac{\overline{G(1,f_{ 1})}}{|f_{1}|^{2/3}}\frac{q^{4d/3}}{\zeta_{q}(2)}\prod_{P\mid f}\left(1+\frac{ 1}{|P|}\right)^{-1}+O_{\epsilon}\left(\delta_{f_{2}=1}\frac{q^{(\frac{1}{3}+ \epsilon)d}}{|f_{1}|^{\frac{1}{6}}}+q^{d}|f|^{\frac{1}{3}+\epsilon}\right)\] _where_ \[\rho(d;f)=\begin{cases}1&d+\deg(f)\equiv 0\mod 3\\ \frac{\tau(\chi_{3})}{q^{1/3}}&d+\deg(f)\equiv 1\mod 3\\ 0&d+\deg(f)\equiv 2\mod 3\end{cases}.\] Proof.: In [11, Proposition 3.1], the second error term is of the form \[\frac{1}{2\pi i}\int_{|u|=q^{-\sigma}}\frac{\widetilde{\psi}(f,u)}{u^{d}} \frac{du}{u}\] for any \(\frac{2}{3}<\sigma<\frac{4}{3}\). In that region, we have the convexity bound \[\widetilde{\psi}(f,u)\ll|f|^{\frac{1}{2}\left(\frac{3}{2}-\sigma\right)+\epsilon}\] from [11, Proposition 3.11]. Taking \(\sigma=1\), we get the result. Using the above proposition in (4.2), we write \[\mathcal{D}_{s}(3g,3A)=MT_{s}(3g,3A)+O_{\epsilon}(ET_{s}(3g,3A))\] where \[MT_{s}(3g,3A):=\frac{\overline{\epsilon(\chi_{3})}q^{(\frac{4}{3}-s)3g+\frac{5} {6}}}{\zeta_{q}(2)}\sum_{f\in\mathcal{M}_{\leq 3g-3A-1}}\frac{\delta_{f_{2}=1} \rho(1;f)\overline{G(1,f_{1})}}{|f|^{1-s}|f_{1}|^{2/3}}\prod_{P|f}\left(1+ \frac{1}{|P|}\right)^{-1}\] and \[ET_{s}(3g,3A):=\frac{1}{q^{3gs+\frac{1}{2}}}\sum_{f\in\mathcal{M}_{\leq 3g-3A-1}} \frac{1}{|f|^{1-s}}\left(\delta_{f_{2}=1}\frac{q^{(\frac{1}{3}+\epsilon)(3g+1) }}{|f_{1}|^{\frac{1}{6}}}+q^{3g+1}|f|^{\frac{1}{4}+\varepsilon}\right),\] where we use the fact that \(|\epsilon(\chi_{3})|=1\). ### Bounding the Error Term **Lemma 4.6**.: _For any \(\epsilon>0\) and \(s\geq 0\), we get that_ \[ET_{s}(3g,3A)\ll q^{(\frac{15}{4}+\epsilon)g-3(s+\frac{1}{4}+\epsilon)A}\] Proof.: For the second sum of \(ET_{s}(3g,3A)\), we have \[\frac{q^{3g+1}}{q^{3gs+\frac{1}{2}}}\sum_{f\in\mathcal{M}_{\leq 3g-3A-1}} \frac{|f|^{\frac{1}{4}+\epsilon}}{|f|^{1-s}}=q^{3g(1-s)+\frac{1}{2}}\sum_{n \leq 3g-3A-1}q^{(s+\frac{1}{4}+\epsilon)n}\ll q^{(\frac{15}{4}+\epsilon)g-3 (s+\frac{1}{4}+\epsilon)A}.\] For the first sum of \(ET_{s}(3g,3A)\), if \(f_{2}=1\), then we can write \(f=f_{1}f_{3}^{3}\) where \(f_{1}\) is square-free and \(f_{3}\) is anything. Hence, setting \(B=3g-3A-1\), we get \[\frac{1}{q^{3gs+\frac{1}{2}}}\sum_{f\in\mathcal{M}_{\leq B}}\frac{ \delta_{f_{2}=1}q^{(\frac{1}{3}+\epsilon)(3g+1)}}{|f|^{1-s}|f_{1}|^{\frac{1}{ 6}}} =\frac{q^{\frac{6}{6}+\epsilon}}{q^{(s-\frac{1}{3}-\epsilon)3g}} \sum_{n_{1}+3n_{3}\leq B}\sum_{\begin{subarray}{c}f_{1}\in\mathcal{H}(n_{1}) \\ f_{3}\in\mathcal{M}_{n_{3}}\end{subarray}}\frac{1}{q^{(7/6-s)n_{1}+3(1-s)n_{3}}}\] \[\ll\frac{q^{\frac{5}{6}+\epsilon}}{q^{(s-\frac{1}{3}-\epsilon)3g} }\sum_{n_{1}\leq B}q^{(s-1/6)n_{1}}\sum_{n_{3}\leq(B-n_{1})/3}q^{(3s-2)n_{3}}\] \[\ll\frac{q^{\frac{5}{6}+\epsilon}}{q^{(s-\frac{1}{3}-\epsilon)3g} }\sum_{n_{1}\leq B}q^{(s-1/6)n_{1}}\begin{cases}1&s<2/3\\ \frac{B-n_{1}}{3}&s=2/3\\ q^{(s-\frac{2}{3})(B-n_{1})}&s>2/3,\end{cases}\] and working case by case with trivial bounds, it is straightforward to see that this error term is also bounded by \(q^{(\frac{15}{4}+\epsilon)g-3(s+\frac{1}{4}+\epsilon)A}\). ### Extending Proposition 4.5 Since there is a factor of \(\delta_{f_{2}=1}\) in the sum over \(f\) in \(MT_{s}(3g,3A)\), we can write \(f=f_{1}f_{3}^{3}\) where \(f_{1}\in\mathcal{H}(n_{1})\) and \(f_{3}\in\mathcal{M}_{n_{3}}\). 
Moreover, since \(G(1,f)=0\) unless \(f\) is square-free, we may extend the sum over \(\mathcal{H}(n_{1})\) to the sum over \(\mathcal{M}_{n_{1}}\). Hence, we can rewrite \[MT_{s}(3g,3A) =\frac{\overline{\epsilon(\chi_{3})}q^{(\frac{4}{3}-s)3g+\frac{5} {6}}}{\zeta_{q}(2)}\sum_{n_{1}+3n_{3}\leq 3g-3A-1}\frac{\rho(n_{1}+1;1)}{q^{(1-s)(n_{1}+3n_{3})}}\] \[\quad\times\sum_{\begin{subarray}{c}f_{1}\in\mathcal{M}_{n_{1}} \\ f_{3}\in\mathcal{M}_{n_{3}}\end{subarray}}\frac{\overline{G(1,f_{1})}}{|f_{1}|^{ 2/3}}\prod_{P|f_{1}f_{3}}\left(1+\frac{1}{|P|}\right)^{-1}\] We now extend Proposition 4.5 to get an estimate when \(G(1,F)\) is multiplied by an Euler product. **Proposition 4.7**.: _For any values \(a_{P}\) such that \(|a_{P}|=O\left(\frac{1}{|P|}\right)\), we have for any \(H\in\mathcal{M}\),_ \[\sum_{F\in\mathcal{M}_{n}}\frac{G(1,F)}{|F|^{2/3}}\prod_{\begin{subarray}{c}P| F\\ P|H\end{subarray}}(1-a_{P})=\frac{\rho(n;1)q^{2n/3}}{\zeta_{q}(2)}\prod_{P|H} \left(1-\frac{a_{P}}{|P|+1}\right)+O\left(q^{n/3}\right).\] Proof.: We first expand the Euler product as \[\prod_{\begin{subarray}{c}P|F\\ P|H\end{subarray}}(1-a_{P})=\sum_{\begin{subarray}{c}D|F\\ (D,H)=1\end{subarray}}\mu(D)a_{D},\qquad\text{ where }\qquad a_{D}:=\prod_{P|D}a_{P}.\] Notice that by hypothesis on \(a_{P}\), have \(a_{D}=O\left(\frac{1}{|D|}\right)\) which we will use often. Expanding the Euler product like this, we obtain \[\sum_{F\in\mathcal{M}_{n}}\frac{G(1,F)}{|F|^{2/3}}\prod_{ \begin{subarray}{c}P|F\\ P|H\end{subarray}}(1-a_{P}) =\frac{1}{q^{2n/3}}\sum_{F\in\mathcal{M}_{n}}G(1,F)\sum_{ \begin{subarray}{c}D|F\\ (D,H)=1\end{subarray}}\mu(D)a_{D} \tag{4.3}\] \[=\frac{1}{q^{2n/3}}\sum_{\begin{subarray}{c}D\in\mathcal{M}_{ \leq n}\\ (D,H)=1\end{subarray}}\mu(D)a_{D}\sum_{F\in\mathcal{M}_{n-\deg(D)}}G(1,FD).\] The idea now is to use the work of [10] (specifically Proposition 4.5 which is [10, Proposition 3.1]) to evaluate this innermost sum. However, we notice that if \(D\in\mathcal{M}_{n}\), then \(F\in\mathcal{M}_{0}\) and hence \(F=1\), and Proposition 4.5 won't apply (the main term and the error term are the same size), so we must treat this case separately. 
That is, we note that \[\sum_{\begin{subarray}{c}D\in\mathcal{M}_{n}\\ (D,H)=1\end{subarray}}\mu(D)a_{D}G(1,D)\ll\sum_{D\in\mathcal{M}_{n}}\frac{|G( 1,D)|}{|D|}=\sum_{D\in\mathcal{M}_{n}}\frac{1}{|D|^{1/2}}=q^{n/2}.\] Now, Lemmas 4.3 and 4.4 shows that \[G(1,FD)=\begin{cases}G(D,F)G(1,D)&(F,D)=1\\ 0&(F,D)\neq 1\end{cases}.\] Hence, if \(\deg(D)<n\) then by applying Proposition 4.5 we get (since \(D\) is always square-free) that \[\sum_{F\in\mathcal{M}_{n-\deg(D)}}G(1,FD)= G(1,D)\sum_{\begin{subarray}{c}F\in\mathcal{M}_{n-\deg(D)}\\ (F,D)=1\end{subarray}}G(D,F)\] \[= G(1,D)\rho(n-\deg(D);D)\frac{\overline{G(1,D)}}{|D|^{2/3}}\frac{q ^{4(n-\deg(D))/3}}{\zeta_{q}(2)}\prod_{P|D}\left(1+\frac{1}{|P|}\right)^{-1}\] \[+O_{\epsilon}\left(G(1,D)\left(\frac{q^{\left(\frac{1}{3}+ \epsilon\right)(n-\deg(D))}}{|D|^{\frac{1}{6}}}+q^{(n-\deg(D))}|D|^{\frac{1}{4 }+\epsilon}\right)\right)\] \[=\frac{\rho(n;1)}{\zeta_{q}(2)}\frac{q^{4n/3}}{|D|}\prod_{P|D} \left(1+\frac{1}{|P|}\right)^{-1}+O_{\epsilon}\left(\frac{q^{(\frac{1}{3}+ \epsilon)n}}{|D|^{\epsilon}}+\frac{q^{n}}{|D|^{\frac{1}{4}-\epsilon}}\right).\] Hence, summing the main term over \(D\), we find that \[\sum_{\begin{subarray}{c}D\in\mathcal{M}_{<n}\\ (D,H)=1\end{subarray}}\frac{\mu(D)a_{D}}{|D|}\prod_{P|D}\left(1+\frac{1}{|P|} \right)^{-1} =\sum_{\begin{subarray}{c}D\in\mathcal{M}\\ (D,H)=1\end{subarray}}\frac{\mu(D)a_{D}}{|D|}\prod_{P|D}\left(1+\frac{1}{|P|} \right)^{-1}+O\left(\frac{1}{q^{n}}\right)\] \[=\prod_{P\nmid H}\left(1-\frac{a_{P}}{|P|+1}\right)+O\left(\frac{ 1}{q^{n}}\right),\] and summing the error term over \(D\), we find that \[\sum_{\begin{subarray}{c}D\in\mathcal{M}_{<n}\\ (D,H)=1\end{subarray}}a_{D}\left(\frac{q^{(\frac{1}{3}+\epsilon)n}}{|D|^{ \epsilon}}+\frac{q^{n}}{|D|^{\frac{1}{4}-\varepsilon}}\right)\ll q^{n}.\] Replacing in (4.3), this finishes the proof. 
Applying the proposition, we obtain \[\sum_{\begin{subarray}{c}f_{1}\in\mathcal{M}_{n_{1}}\\ f_{3}\in\mathcal{M}_{n_{3}}\end{subarray}}\frac{\overline{G(1,f_{1})}}{|f_{1} |^{2/3}}\prod_{P|f_{1}f_{3}}\left(1+\frac{1}{|P|}\right)^{-1}\] \[=\sum_{f_{3}\in\mathcal{M}_{n_{3}}}\prod_{P|f_{3}}\left(1+\frac{1 }{|P|}\right)^{-1}\sum_{f_{1}\in\mathcal{M}_{n_{1}}}\frac{\overline{G(1,f_{1} )}}{|f_{1}|^{2/3}}\prod_{\begin{subarray}{c}P|f_{1}\\ P|f_{3}\end{subarray}}\left(1-\frac{1}{|P|+1}\right)\] \[=\frac{\overline{\rho(n_{1};1)}q^{2n_{1}/3}}{\zeta_{q}(2)}\prod_{ P}\left(1-\frac{1}{(|P|+1)^{2}}\right)\sum_{f_{3}\in\mathcal{M}_{n_{3}}}\prod_{P|f_{3}} \left(1+\frac{1}{|P|+1}\right)^{-1}\] \[\quad+O_{\epsilon}\left(q^{n_{1}/3+n_{3}}\right)\] Now, with the observation that \(\rho(m+3n;1)=\rho(m;1)\) and \[q^{2n_{1}/3}=\sum_{f_{1}\in\mathcal{M}_{n_{1}}}\frac{1}{|f_{1}|^{1/3}}\] we can write \[\sum_{n_{1}+3n_{3}\leq 3g-3A-1}\frac{\rho(n_{1}+1;1)\overline{\rho(n_{1 };1)}q^{2n_{1}/3}}{q^{(1-s)(n_{1}+3n_{3})}}\sum_{f_{3}\in\mathcal{M}_{n_{3}}} \prod_{P|f_{3}}\left(1+\frac{1}{|P|+1}\right)^{-1}\] \[=\sum_{n\leq 3g-3A-1}\frac{\rho(n+1;1)\overline{\rho(n;1)}}{q^{(1-s)n }}\sum_{f_{1}f_{3}^{3}\in\mathcal{M}_{n}}\frac{1}{|f_{1}|^{1/3}}\prod_{P|f_{3}} \left(1+\frac{1}{|P|+1}\right)^{-1}.\] Using the definition of \(\rho(d;f)\), we find that \[\overline{\epsilon(\chi_{3})}q^{5/6}\rho(n+1;1)\overline{\rho(n;1)}=\begin{cases} q&3|n\\ 0&3\nmid n\end{cases}\] Hence we can write \[MT_{s}(3g,3A)=MMT_{s}(3g,3A)+O(MET_{s}(3g,3A))\] where \[MMT_{s}(3g,3A):=\frac{q^{(\frac{4}{3}-s)3g+1}}{\zeta_{q}^{2}(2)} \prod_{P}\left(1-\frac{1}{(|P|+1)^{2}}\right)\sum_{m\leq g-A-1}\frac{1}{q^{(1- s)3m}}\\ \sum_{f_{1}f_{3}^{3}\in\mathcal{M}_{3m}}\frac{1}{|f_{1}|^{1/3}} \prod_{P|f_{3}}\left(1+\frac{1}{|P|+1}\right)^{-1}\] and \[MET_{s}(3g,3A):=q^{(\frac{4}{3}-s)3g+1}\sum_{n_{1}+3n_{3}\leq 3g-3A-1}\frac{q^{n_ {1}/3+n_{3}}}{q^{(1-s)(n_{1}+3n_{3})}}.\] **Lemma 4.8**.: _For any value of \(A\), we get that_ \[MET_{s}(3g,3A)\ll q^{(\frac{4}{3}-s)(3g+1)}E_{s}(3g,3A)\] Proof.: Indeed, we have \[MET_{s}(3g,3A) =q^{(\frac{4}{3}-s)3g+1}\sum_{n\leq 3g-3A-1}q^{(s-2/3)n}\sum_{n_{1 }+3n_{3}=n}1\] \[\ll q^{(\frac{4}{3}-s)3g+1}\begin{cases}1&s<2/3\\ (g-A)^{2}&s=2/3\\ (g-A)q^{(3s-2)(g-A)}&s>2/3\end{cases}\] ### Computing the Main Dual Term **Lemma 4.9**.: _Let \(C_{q}\) is as in Theorem 1.1. For any \(\epsilon>0\) and \(s<1-\epsilon\), if \(s\neq\frac{1}{3}\), we have_ \[MMT_{s}(3g,3A)=-C_{q}\frac{q^{3g+1}}{\zeta_{q}(2)}\frac{q^{(1-3s)A}}{1-q^{3s- 1}}+O_{\epsilon}\left(q^{(1+\epsilon)g+3(1-s+\epsilon)A}+q^{(\frac{4}{3}-s)3g +1}E_{s}(3g,3A)\right)\] _while if \(s=\frac{1}{3}\), we have_ \[MMT_{s}(3g,3A)=\frac{C_{q}q^{3g+1}}{\zeta_{q}(2)}\left(g-A-\frac{1}{3}+\frac{ 1}{q-1}+\sum_{P}\frac{\deg(P)}{|P|^{3}+2|P|^{2}-1}\right)+O_{\epsilon}\left(q^ {(1+\epsilon)g+(2+\epsilon)A}+q^{3g+1}\right)\] Proof.: We first note that we can rewrite the series in \(MMT_{s}(3g,3A)\) as \[\sum_{m\leq g-A-1}\sum_{f_{1}f_{3}^{3}\in\mathcal{M}_{3m}}\frac{1}{|f_{1}|^{\frac {4}{3}-s}|f_{3}|^{3(1-s)}}\prod_{P|f_{3}}\left(1+\frac{1}{|P|+1}\right)^{-1}. 
\tag{4.4}\] We consider the generating series \[\mathcal{D}(v) :=\sum_{f_{1},f_{3}\in\mathcal{M}}\frac{1}{|f_{1}|^{\frac{4}{3}-s }|f_{3}|^{3(1-s)}}\prod_{P|f_{3}}\left(1+\frac{1}{|P|+1}\right)^{-1}v^{\deg(f_{ 1}f_{3}^{3})}\] \[=\left(\sum_{f_{1}\in\mathcal{M}}\frac{v^{\deg(f_{1})}}{|f_{1}|^{ \frac{4}{3}-s}}\right)\left(\sum_{f_{3}\in\mathcal{M}}\prod_{P|f_{3}}\left(1+ \frac{1}{|P|+1}\right)^{-1}\frac{v^{3\deg(f_{3})}}{|f_{3}|^{3(1-s)}}\right),\] and we compute \[\sum_{f_{1}\in\mathcal{M}}\frac{v^{\deg(f_{1})}}{|f_{1}|^{\frac{4}{3}-s}}= \sum_{f_{1}\in\mathcal{M}}\left(\frac{v}{q^{\frac{4}{3}-s}}\right)^{\deg(f_{ 1})}=\zeta_{q}\left(\frac{v}{q^{\frac{4}{3}-s}}\right)=\frac{1}{1-q^{s-\frac{ 3}{3}}v}\] and \[\sum_{f_{3}\in\mathcal{M}}\prod_{P|f_{3}}\left(1+\frac{1}{|P|+1} \right)^{-1}\frac{v^{3\deg(f_{3})}}{|f_{3}|^{3(1-s)}} =\prod_{P}\left(1+\left(1+\frac{1}{|P|+1}\right)^{-1}\frac{\left( \frac{v}{q^{1-s}}\right)^{3\deg(P)}}{1-\left(\frac{v}{q^{1-s}}\right)^{3\deg(P )}}\right)\] \[=\prod_{P}\left(1-\left(\frac{v}{q^{1-s}}\right)^{3\deg(P)}\right) \prod_{P}\left(1-\frac{v^{3\deg(P)}}{|P|^{3(1-s)}|P|+2}\right)\] \[=\zeta_{q}\left(\frac{v^{3}}{q^{3(1-s)}}\right)\prod_{P}\left(1- \frac{v^{3\deg(P)}}{|P|^{3(1-s)}(|P|+2)}\right).\] Let \[\mathcal{K}_{s}(v):=\prod_{P}\left(1-\frac{v^{3\deg(P)}}{|P|^{3(1-s)}(|P|+2)}\right)\] which is an analytic function on the region \(|v|<q^{1-s}\). Then \[\mathcal{D}(v)=\frac{\mathcal{K}_{s}(v)}{(1-q^{3s-2}v^{3})(1-q^{s-\frac{1}{3}} v)}, \tag{4.5}\] can be meromorphically extended to the region \(|v|\leq q^{1-s-\epsilon}\) with poles at \(v=q^{\frac{1}{3}-s}\) and \(v=\xi_{3}^{j}q^{\frac{2}{3}-s}\) for \(j=0,1,2\) where \(\xi_{3}\) is a primitive root of unity. Notice that \(\mathcal{K}_{s}(v)\) is uniformly bounded for \(|v|\leq q^{1-s-\epsilon}\). Therefore, as long as \(s<1-\epsilon\), if we set \(\Gamma_{4}=\{v:|v|=q^{1-s-\epsilon}\}\), we get that if \(s\neq\frac{1}{3},\frac{2}{3}\) then \[\frac{1}{2\pi i}\oint_{\Gamma_{4}}\frac{\mathcal{D}(v)}{1-v^{3}} \frac{dv}{v^{3(g-A-1)+1}}\] \[=\operatorname{Res}_{v=0}\left(\frac{\mathcal{D}(v)}{(1-v^{3})v^{ 3(g-A-1)+1}}\right)+\operatorname{Res}_{v=q^{\frac{1}{3}-s}}\left(\frac{ \mathcal{D}(v)}{(1-v^{3})v^{3(g-A-1)+2}}\right)\] \[+\sum_{j=0}^{2}\left(\operatorname{Res}_{v=\xi_{3}^{j}q^{\frac{2} {3}-s}}\left(\frac{\mathcal{D}(v)}{(1-v^{3})v^{3(g-A-1)+1}}\right)+\operatorname {Res}_{v=\xi_{3}^{j}}\left(\frac{\mathcal{D}(v)}{(1-v^{3})v^{3(g-A-1)+1}} \right)\right)\] By the same computation as in Lemma 3.4 we have that (4.4) is exactly the residue at \(v=0\) and the integral over \(\Gamma_{4}\) is bounded by \(q^{(s-1+\epsilon)(3(g-A-1)+1)}\). Multiplying by \[\frac{q^{(\frac{4}{3}-s)3g+1}}{\zeta_{q}^{2}(2)}\prod_{P}\left(1-\frac{1}{(|P|+ 1)^{2}}\right), \tag{4.6}\] the residue at \(s=0\) gives \(MMT_{s}(3g,3A)\), and the contribution of the integral over \(\Gamma_{4}\) is bounded by \[q^{(\frac{4}{3}-s)3g}q^{3(s-1+\epsilon)(g-A)}=q^{(1+\epsilon)g+3(1-s+\epsilon) A}. \tag{4.7}\] So, it remains to compute the contribution (to \(MMT_{s}(3g,3A)\)) of the other residues. 
For \(v=q^{\frac{1}{3}-s}\), we have \[\operatorname{Res}_{v=q^{\frac{1}{3}-s}}\left(\frac{\mathcal{D}(v )}{(1-v^{3})v^{3(g-A-1)+1}}\right) =\lim_{v=q^{\frac{1}{3}-s}}\left(\frac{-q^{\frac{1}{3}-s}\mathcal{ K}_{s}(v)}{(1-q^{3s-2}v^{3})(1-v^{3})v^{3(g-A-1)+1}}\right)\] \[=-\frac{\mathcal{K}_{s}(q^{\frac{1}{3}-s})}{1-q^{-1}}\frac{q^{(3 s-1)(g-A-1)}}{1-q^{1-3s}}.\] We now observe that \(\zeta_{q}(2)=\frac{1}{1-q^{-1}}\) and \[\prod_{P}\left(1-\frac{1}{(|P|+1)^{2}}\right)\mathcal{K}_{s}(q^{ \frac{1}{3}-s}) =\prod_{P}\left(1-\frac{1}{(|P|+1)^{2}}\right)\left(1-\frac{1}{|P| ^{2}(|P|+2)}\right)\] \[=\prod_{P}\left(1-\frac{1}{|P|(|P|+1)}\right)=C_{q}.\] Therefore, multiplying the residue at \(v=q^{\frac{1}{3}-s}\) by (4.6), the contribution of the residue to \(MMT_{s}(3g,3A)\) is \[C_{q}\frac{q^{(\frac{4}{3}-s)3g+1}}{\zeta_{q}(2)}\frac{q^{(3s-1)(g-A-1)}}{1-q^ {1-3s}}=-C_{q}\frac{q^{3g+1}}{\zeta_{q}(2)}\frac{q^{(1-3s)A}}{1-q^{3s-1}}\] which gives the main term of \(MMT_{s}(3g,3A)\) for \(s\neq\frac{1}{3}\). We now use the fact that \(\mathcal{K}_{s}(v)\) is invariant under multiplication of \(v\) by cube roots of unity to get that the sum the residues at \(v=\xi_{3}^{j}q^{\frac{2}{3}-s}\) will be \[\frac{\mathcal{K}_{s}(q^{\frac{2}{3}-s})}{1-q^{2-3s}}q^{(3s-2)(g- A-1)}\sum_{j=0}^{2}\operatorname{Res}_{v=\xi_{3}^{j}q^{\frac{2}{3}-s}}\left(\frac{ \xi_{3}^{-j}q^{s-\frac{2}{3}}}{(1-\xi_{3}^{j}q^{\frac{1}{3}})(1-q^{3s-2}v^{3} )}\right)\] \[=\frac{\mathcal{K}_{s}(q^{\frac{2}{3}-s})}{1-q^{2-3s}}q^{(3s-2)( g-A-1)}\sum_{j=0}^{2}\lim_{v\to\xi_{3}^{j}q^{\frac{2}{3}-s}}\left(\frac{\xi_{3}^{-j}q ^{s-\frac{2}{3}}(v-\xi_{3}^{j}q^{\frac{2}{3}-s})}{(1-\xi_{3}^{j}q^{\frac{1}{ 3}})(1-q^{3s-2}v^{3})}\right)\] \[=-\frac{\mathcal{K}_{s}(q^{\frac{2}{3}-s})}{1-q^{2-3s}}q^{(3s-2)( g-A-1)}\sum_{j=0}^{2}\lim_{v\to\xi_{3}^{j}q^{\frac{2}{3}-s}}\left(\frac{1}{(1-\xi_{3}^ {j}q^{\frac{1}{3}})\prod_{i\neq-j}(1-q^{s-\frac{2}{3}}\xi_{3}^{i}v)}\right)\] \[=\frac{\mathcal{K}_{s}(q^{\frac{2}{3}-s})}{1-q^{2-3s}}\frac{q^{(3 s-2)(g-A-1)}}{q-1}\] and, similarly, the sum over residues at \(v=\xi_{3}^{j}\) will be \[\frac{\mathcal{K}_{s}(1)}{(1-q^{3s-2})}\sum_{j=0}^{2}\operatorname{ Res}_{v=\xi_{3}^{j}}\left(\frac{\xi_{3}^{-j}}{(1-q^{s-\frac{1}{3}}\xi_{3}^{j})(1-v^{3})}\right)\] \[=-\frac{\mathcal{K}_{s}(1)}{(1-q^{3s-2})}\sum_{j=0}^{2}\frac{1}{(1 -q^{s-\frac{1}{3}}\xi_{3}^{j})\prod_{i\neq-j}(1-\xi_{3}^{i}\xi_{3}^{j})}\] \[=\frac{\mathcal{K}_{s}(1)}{(1-q^{3s-2})(1-q^{3s-1})}.\] Furthermore, we see that \(\mathcal{K}_{s}(q^{\frac{2}{3}-s})\) is independent in \(s\) so that for any \(s\neq\frac{2}{3}\), we have that multiplying the sum of the residues at \(v=\xi_{3}^{j}q^{\frac{2}{3}-s}\) by (4.6), we get a contribution bounded by \[q^{(\frac{4}{3}-s)3g+1}q^{(3s-2)(g-A-1)}\ll q^{(\frac{4}{3}-s)3g+1}\begin{cases} 1&s<2/3\\ q^{(3s-2)(g-A)}&s>2/3\end{cases}.\] Similarly, for \(s<1-\epsilon\), we have that \(\mathcal{K}_{s}(1)\) is bounded and so the contribution from the residues at \(v=\xi_{3}^{j}\) will be bounded, and multiplying by (4.6), this gives a contribution bounded by \(q^{(\frac{4}{3}-s)3g+1}\). This completes the proof for \(s\neq\frac{1}{3},\frac{2}{3}\). For \(s=\frac{2}{3}\), the residue at \(v=q^{s-\frac{1}{3}}\) remain the same but we now get double poles at \(v=\xi_{3}^{j}\). 
In this case we get \[\sum_{j=0}^{2}\operatorname{Res}_{v=\xi_{3}^{j}}\left(\frac{ \mathcal{D}(v)}{(1-v^{3})v^{3(g-A-1)+1}}\right) =\sum_{j=0}^{2}\lim_{v\to\xi_{3}^{j}}\frac{d}{dv}\left(\frac{ \mathcal{K}_{\frac{2}{3}}(v)(v-\xi_{3}^{j})^{2}}{(1-q^{\frac{1}{3}}v)(1-v^{3} )^{2}v^{3(g-A-1)+1}}\right)\] \[\ll g-A.\] For \(s=\frac{1}{3}\), the residues at \(v=\xi_{3}^{j}q^{s-\frac{2}{3}}\), \(j=0,1,2\) and \(\xi_{3}^{j}\), \(j=1,2\) remain the same but we now have a a double pole at \(v=1\) with residue \[\operatorname{Res}_{v=1}\left(\frac{\mathcal{D}(v)}{(1-v^{3})v^{ 3(g-A-1)+1}}\right) =\frac{d}{dv}\left.\left(\frac{\mathcal{K}_{\frac{1}{3}}(v)}{(1-q ^{-1}v^{3})(1+v+v^{2})v^{3(g-A-1)+1}}\right)\right|_{v=1}\] \[=-\frac{\mathcal{K}_{\frac{1}{3}}(1)}{1-q^{-1}}\left(g-A-\frac{1} {3}+\frac{1}{q-1}-\frac{1}{3}\frac{\mathcal{K}_{\frac{1}{3}}^{\prime}(1)}{K_{ \frac{1}{3}}(1)}\right)\] Multiplying by (4.6), we get a contribution of \[-\frac{C_{q}q^{3g+1}}{\zeta_{q}(2)}\left(g-A-\frac{1}{3}+\frac{1}{q-1}-\frac{ 1}{3}\frac{\mathcal{K}_{\frac{1}{3}}^{\prime}(1)}{\mathcal{K}_{\frac{1}{3}}(1 )}\right),\] and we finish the proof with the observation that \[\frac{K_{\frac{1}{3}}^{\prime}(v)}{K_{\frac{1}{3}}(v)}=\frac{d}{dv}\log K_{ \frac{1}{3}}(v)=-3\sum_{P}\frac{\deg(P)v^{3\deg(P)-1}}{|P|^{3}+2|P|^{2}-v^{3 \deg(P)}}\] We can now combine Lemmas 4.6, 4.8 and 4.9 to prove Proposition 4.1. ## 5. Proof of Main Theorems ### Proof of Theorem 1.1 By (2.21) and Propositions 3.1 and 4.1, we have for any \(A=cg\) with \(0<c<1\), and \(\epsilon<s<1-\epsilon\) but \(s\neq\frac{1}{3}\) that \[\frac{1}{|\mathfrak{H}(3g)|}\sum_{\chi\in\mathfrak{H}(3g)}L(s,\chi) =\frac{\mathcal{P}_{s}(3g,3A)+\mathcal{D}_{s}(3g,3A)}{|\mathfrak{H }(3g)|}\] \[=M(s)+C\frac{q^{(1-3s)A}}{1-q^{3s-1}}+O_{\epsilon}\Big{(}q^{( \epsilon-3s)A}+q^{A-(1-\epsilon)(3g+1)}+\frac{q^{3(1-s+\epsilon)A}}{q^{\frac{ 3s}{2}}}\Big{)}\] \[-C_{q}\frac{q^{(1-3s)A}}{1-q^{3s-1}}+O_{\epsilon}\left(\frac{q^{( \frac{3}{4}+\epsilon)g}}{q^{3A(s+\frac{1}{4}+\epsilon)}}+\frac{q^{3(s-1+ \epsilon)A}}{q^{(2+\epsilon)g}}+q^{(1-3s)g}E_{s}(3g,3A)\right)\] \[=M(s)+O_{\epsilon}\Big{(}\frac{q^{3(1-s+\epsilon)A}}{q^{\frac{3s }{2}}}+\frac{q^{(\frac{3}{4}+\epsilon)g}}{q^{3A(s+\frac{1}{4}+\epsilon)}}+q^{ (1-3s)g}E_{s}(3g,3A)\Big{)}\] Optimizing the error term, we chose the cut off that equalizes the first two term error terms. Hence, we chose \(A=\left\lfloor\frac{3}{5}g\right\rfloor\) and conclude that for \(s\neq\frac{1}{3}\) \[\frac{1}{|\mathfrak{H}(3g)|}\sum_{\chi\in\mathfrak{H}(3g)}L(s,\chi)=M(s)+O_{ \epsilon}\left(q^{\frac{3}{10}(1-6s+\epsilon)g}+q^{(1-3s)g}E_{s}(g)\right)\] In the case \(s=\frac{1}{3}\), we apply Propositions 3.1 and 4.1 with \(A=\frac{3}{5}g\) which gives \[\frac{1}{|\mathfrak{H}(3g)|}\sum_{\chi\in\mathfrak{H}(3g)}L(\tfrac{1}{3},\chi )=C_{q}\left(g+1+\sum_{P}\deg(P)\frac{|P|+2}{|P|^{3}+2|P|^{2}-1}-\frac{1}{3} \frac{q}{q-1}\right)+O_{\epsilon}(1)\] ### Proof of Theorem 1.5 We see from the definition of \(C_{q}\) that \(\lim_{q\to\infty}C_{q}=1\) and by the prime polynomial theorem we get \[\sum_{P}\deg(P)\frac{|P|+2}{|P|^{3}+2|P|^{2}-1}\ll\sum_{n=1}^{\infty}\frac{1} {q^{n}}=\frac{1}{q-1}\] which tends to \(0\) as \(q\) tends to \(\infty\). Hence we get from Theorem 1.1 that \[\lim_{q\to\infty}\frac{1}{|\mathfrak{H}(3g)|}\sum_{\chi\in\mathfrak{H}(3g)}L( \tfrac{1}{3},\chi)=g+O(1) \tag{5.1}\] and it remains to compute the matrix integral \[\int_{U(3g)}\deg(1-U)\overline{\det(1-\wedge^{3}U)}dU\] to see that it matches. 
First, we need to write the functions \(\det(1-U)\) and \(\det(1-\wedge^{3}U)\) in a common basis. For any infinite tuple \((\lambda_{j})\) of non-negative integers with only finitely many non-zero entries we may write the partition \[\lambda=\prod_{j=1}^{\infty}j^{\lambda_{j}} \tag{5.2}\] consisting of \(\lambda_{j}\) copies of \(j\). The Newton identities then tell us that for any \(U\in U(N)\), we can write \[\det(1-U)=\sum_{|\lambda|\leq N}\frac{(-1)^{\ell(\lambda)}}{z_{\lambda}}P_{\lambda}(U)\] where \(\ell(\lambda)=\sum\lambda_{j}\) is the length of \(\lambda\) and \[P_{\lambda}(U)=\prod_{j=1}^{\infty}\operatorname{Tr}(U^{j})^{\lambda_{j}}\text{ and }z_{\lambda}=\prod_{j=1}^{\infty}j^{\lambda_{j}}\lambda_{j}!.\] A result of Diaconis and Evans then shows that these \(P_{\lambda}(U)\) form an orthogonal basis. **Theorem 5.1** (Theorem 2.1 from [10]).: _For any partitions \(\lambda,\mu\) with \(\min(|\lambda|,|\mu|)\leq N\), we have_ \[\int_{U(N)}P_{\lambda}(U)\overline{P_{\mu}(U)}dU=\delta_{\lambda\mu}z_{\lambda}\] _where \(\delta_{\lambda\mu}\) is the indicator function of \(\lambda=\mu\)._ So it remains to write \(\det(1-\wedge^{3}U)\) in the basis formed by \(P_{\lambda}(U)\). For this, we will need some new notation. If \(\lambda,\mu\) are partitions, written as in (5.2), then define the product \(\lambda\cdot\mu\) as \[\lambda\cdot\mu=\prod_{j=1}^{\infty}j^{\lambda_{j}+\mu_{j}},\] and by repeating this process, we can define positive integer powers of partitions \[\lambda^{k}=\prod_{j=1}^{\infty}j^{k\lambda_{j}}.\] We denote the zero partition as \[\mathbb{0}=\prod_{j=1}^{\infty}j^{0}\] and so we obtain \(\lambda\cdot\mathbb{0}=\lambda\) and \(\lambda^{0}=\mathbb{0}\). Finally, for any positive integer \(k\), we then define \(k\lambda=\prod_{j=1}^{\infty}(kj)^{\lambda_{j}}\), the partition consisting of \(\lambda_{j}\) copies of \(kj\). Now, let \(P_{3}\) denote the set of partitions of \(3\) and consider a tuple of non-negative integers \((a_{j\mu})_{j,\mu}\) where \(\mu\) runs over the elements of \(P_{3}\), \(j\) runs over the positive integers and only finitely many of the \(a_{j\mu}\) are non-zero.
Then, for any such tuple we define the partition \[\lambda(a_{j\mu})=\prod_{j=1}^{\infty}\prod_{\mu\in P_{3}}(j\mu)^{a_{j\mu}}\] Section 5.2 of [10] shows that we can write \[\det(1-\wedge^{3}U)=\sum_{(a_{j\mu})_{j,\mu}}\left[\prod_{j=1}^{\infty}\prod_{ \mu\in P_{3}}\frac{1}{a_{j\mu}!}\left(\frac{(-1)^{\ell(\mu)}}{jz_{\mu}}\right) ^{a_{j\mu}}\right]P_{\lambda(a_{j\mu})}(U).\] Now, combining these facts and using Theorem 5.1, we obtain \[\int_{U(N)}\det(1-U)\overline{\det(1-\wedge^{3}U)}dU\] \[= \sum_{(a_{j\mu})_{j,\mu}}\left[\prod_{j=1}^{\infty}\prod_{\mu\in P_{ 3}}\frac{1}{a_{j\mu}!}\left(\frac{(-1)^{\ell(\mu)}}{jz_{\mu}}\right)^{a_{j\mu} }\right]\frac{(-1)^{\ell(\lambda)}}{z_{\lambda}}\int_{U(N)}P_{\lambda}(U) \overline{P_{\lambda(a_{j\mu})}(U)}dU\] \[= \sum_{\begin{subarray}{c}(a_{j\mu})_{j,\mu}\\ |\lambda(a_{j\mu})|\leq N\end{subarray}}\prod_{j=1}^{\infty}\prod_{\mu\in P_{ 3}}\frac{1}{a_{j\mu}!}\left(\frac{1}{jz_{\mu}}\right)^{a_{j\mu}}\] where we have used the fact that if \(\lambda=\lambda(a_{j\mu})\) then \[\ell(\lambda)=\sum_{j=1}^{\infty}\sum_{\mu\in P_{3}}a_{j\mu}\ell(\mu).\] To compute the sum over the \((a_{j\mu})_{j,\mu}\), we consider the generating series \[F(x):=\sum_{(a_{j\mu})_{j,\mu}}\prod_{j=1}^{\infty}\prod_{\mu\in P_{3}}\frac{1 }{a_{j\mu}!}\left(\frac{1}{jz_{\mu}}\right)^{a_{j\mu}}x^{|\lambda(a_{j\mu})|}.\] We use the fact that \[|\lambda(a_{j\mu})|=3\sum_{j,\mu}ja_{j\mu}\] to write \[F(x) =\sum_{(a_{j\mu})_{j,\mu}}\prod_{j=1}^{\infty}\prod_{\mu\in P_{3}} \frac{1}{a_{j\mu}!}\left(\frac{x^{3j}}{jz_{\mu}}\right)^{a_{j\mu}}\] \[=\prod_{j=1}^{\infty}\prod_{\mu\in P_{3}}\left[\sum_{a_{j\mu}=0}^ {\infty}\frac{1}{a_{j\mu}!}\left(\frac{x^{3j}}{jz_{\mu}}\right)^{a_{j\mu}}\right]\] \[=\prod_{j=1}^{\infty}\prod_{\mu\in P_{3}}\exp\left[\frac{x^{3j}}{ jz_{\mu}}\right]=\exp\left(\sum_{j=1}^{\infty}\sum_{\mu\in P_{3}}\frac{x^{3j}}{ jz_{\mu}}\right)\] \[=\frac{1}{1-x^{3}}=\sum_{n=0}^{\infty}x^{3n}\] where we used the well-known fact that for any \(r\) \[\sum_{\lambda\in P_{r}}\frac{1}{z_{\lambda}}=1.\] Hence, we may conclude that \[\int_{U(N)}\det(1-U)\overline{\det(1-\wedge^{3}U)}dU=\sum_{3n\leq N}1=\left\lfloor \frac{N}{3}\right\rfloor+1.\] and Theorem 1.5 then follows from this and (5.1), using \(N=3g\).
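As an informal numerical check of the closing identity (our aside, not part of the proof): for \(r=3\) the partitions \(3\), \(2\cdot 1\) and \(1^{3}\) have \(z_{\lambda}=3,2,6\), so \(\sum_{\lambda\in P_{3}}1/z_{\lambda}=\frac{1}{3}+\frac{1}{2}+\frac{1}{6}=1\), in line with the fact used above; and the matrix integral itself can be estimated by averaging over Haar-random unitaries. A minimal Monte Carlo sketch, assuming Python with numpy (the sample size and helper names are ours); the printed average should approach \(\lfloor N/3\rfloor+1\) as the number of samples grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n, rng):
    # Haar-distributed unitary via QR of a complex Ginibre matrix (Mezzadri's recipe).
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q_mat, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q_mat * (d / np.abs(d))

def integrand(u):
    x = np.linalg.eigvals(u)
    n = len(x)
    det1 = np.prod(1 - x)                  # det(1 - U)
    det3 = 1.0 + 0j                        # det(1 - wedge^3 U) = prod_{i<j<k} (1 - x_i x_j x_k)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                det3 *= 1 - x[i] * x[j] * x[k]
    return det1 * np.conjugate(det3)

N, samples = 3, 50000
est = np.mean([integrand(haar_unitary(N, rng)) for _ in range(samples)])
print(round(est.real, 3), "expected:", N // 3 + 1)
```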
2306.03212
StabJGL: a stability approach to sparsity and similarity selection in multiple network reconstruction
In recent years, network models have gained prominence for their ability to capture complex associations. In statistical omics, networks can be used to model and study the functional relationships between genes, proteins, and other types of omics data. If a Gaussian graphical model is assumed, a gene association network can be determined from the non-zero entries of the inverse covariance matrix of the data. Due to the high-dimensional nature of such problems, integrative methods that leverage similarities between multiple graphical structures have become increasingly popular. The joint graphical lasso is a powerful tool for this purpose, however, the current AIC-based selection criterion used to tune the network sparsities and similarities leads to poor performance in high-dimensional settings. We propose stabJGL, which equips the joint graphical lasso with a stable and accurate penalty parameter selection approach that combines the notion of model stability with likelihood-based similarity selection. The resulting method makes the powerful joint graphical lasso available for use in omics settings, and outperforms the standard joint graphical lasso, as well as state-of-the-art joint methods, in terms of all performance measures we consider. Applying stabJGL to proteomic data from a pan-cancer study, we demonstrate the potential for novel discoveries the method brings. A user-friendly R package for stabJGL with tutorials is available on Github at https://github.com/Camiling/stabJGL.
Camilla Lingjærde, Sylvia Richardson
2023-06-05T19:45:06Z
http://arxiv.org/abs/2306.03212v2
StabJGL: a stability approach to sparsity and similarity selection in multiple network reconstruction ###### Abstract In recent years, network models have gained prominence for their ability to capture complex associations. In statistical omics, networks can be used to model and study the functional relationships between genes, proteins, and other types of omics data. If a Gaussian graphical model is assumed, a gene association network can be determined from the non-zero entries of the inverse covariance matrix of the data. Due to the high-dimensional nature of such problems, integrative methods that leverage similarities between multiple graphical structures have become increasingly popular. The joint graphical lasso is a powerful tool for this purpose, however, the current AIC-based selection criterion used to tune the network sparsities and similarities leads to poor performance in high-dimensional settings. We propose stabJGL, which equips the joint graphical lasso with a stable and accurate penalty parameter selection approach that combines the notion of model stability with likelihood-based similarity selection. The resulting method makes the powerful joint graphical lasso available for use in omics settings, and outperforms the standard joint graphical lasso, as well as state-of-the-art joint methods, in terms of all performance measures we consider. Applying stabJGL to proteomic data from a pan-cancer study, we demonstrate the potential for novel discoveries the method brings. A user-friendly R package for stabJGL with tutorials is available on Github: [https://github.com/Camiling/stabJGL](https://github.com/Camiling/stabJGL). High-dimensional inference Network models Joint graphical model Joint graphical lasso Gaussian graphical model Genomics Gene networks Protein-protein interaction networks Integrative analysis ## 1 Introduction Network models have in recent years gained great popularity in many areas. In statistical omics, networks can be used to decode aspects of unknown structures, and hence study the relationships between genes, proteins, and other types of omics data. In health data sciences, rich data sets are more and more frequently encountered, enabling the development of models integrating a variety of biological resources. In the high-dimensional setting commonly found in omics, sharing information between data sources with shared structures - which could be different tissues, conditions, patient subgroups, or different omics types - can give a valuable increase in statistical power while elucidating shared biological function. A key question is how to combine the different data sources into a single model. If a Gaussian graphical model is assumed, a conditional (in)dependence network can be estimated by determining the non-zero entries of the inverse covariance (precision) matrix of the data. With its good performance in numerical studies, the _graphical lasso_ of Friedman et al. (2008) is a state-of-the-art method for precision matrix estimation in the setting of Gaussian graphical models. The method combines \(L_{1}\) regularization with maximum likelihood estimation. Other notable methods include the neighborhood selection approach of Meinshausen and Buhlmann (2006) and the graphical SCAD (Fan et al., 2009). Notable Bayesian methods include the Bayesian graphical lasso (Wang et al., 2012), Bayesian spike-and-slab approaches (Wang, 2015) and the graphical horseshoe (Li et al., 2019). 
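For readers who want a concrete single-network baseline, the graphical lasso described above (L1-penalized maximum likelihood estimation of a sparse precision matrix) is available in standard software. The stabJGL package itself is written in R, so the following scikit-learn sketch in Python is only an illustrative analogue; the data and the penalty value are made up.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
n, p = 200, 30
X = rng.standard_normal((n, p))          # stand-in for an n x p omics data matrix
X[:, 1] += 0.6 * X[:, 0]                 # introduce one real conditional dependence

model = GraphicalLasso(alpha=0.2)        # alpha plays the role of the L1 penalty
model.fit(X)

Theta = model.precision_                 # estimated precision matrix
adjacency = np.abs(Theta) > 1e-8         # non-zero pattern = conditional dependence graph
np.fill_diagonal(adjacency, False)
sparsity = adjacency.sum() / (p * (p - 1))   # equals 2|E| / (p^2 - p)
print(f"estimated sparsity: {sparsity:.3f}")
```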
If multiple related data sets are available, there are several ways to leverage common network structures. If focusing on one data type's network structure, data from other types can enhance inference via weighted graphical lasso methods (Li and Jackson, 2015; Lingjerde et al., 2021). However, to compare network structures across data sets, such as patient subgroups, a joint approach that leverages common information while preserving the differences can increase statistical power and provide interpretable insight. In the area of multiple Gaussian graphical models, existing methods include the group extension of the graphical lasso to multiple networks of Guo et al. (2011), the Bayesian spike-and-slab joint graphical lasso (Li et al., 2019) and the Markov random field approach of Peterson et al. (2015). The widely used joint graphical lasso (JGL) of Danaher et al. (2014) extends the graphical lasso to a multiple network setting and provides a powerful tool for inferring graphs with common traits. It employs two different penalty functions - group (GGL) and fused (FGL) - with the latter recommended for most applications. From this point forward, any mention of the joint graphical lasso will imply the fused version, unless otherwise specified. The method needs tuning of two regularization parameters for controlling (i) the number of non-zero effects, and (ii) the similarity between networks, respectively. However, the default parameter selection routine based on the AIC (Akaike et al., 1973) often results in severe over-selection in high-dimensional data, potentially impacting performance negatively (Liu et al., 2010; Foygel and Drton, 2010). In such settings, selection approaches based on model stability have demonstrated competitive performance (Liu et al., 2010; Angelini et al., 2022). We propose a stable and accurate penalty parameter selection method for the joint graphical lasso, combining the model stability principle of Liu et al. (2010) with likelihood-based selection for high-dimensional data (Foygel and Drton, 2010). The resulting method inherits the powerful traits of the joint graphical lasso while mitigating the risk of severe under- or over-selection of edges in high-dimensional settings. We provide an R package, stabJGL (stable sparsity and similarity selection for the joint graphical lasso), which implements the method. The paper is organized as follows. In Section 2, we first describe the Gaussian graphical model framework and the penalized log-likelihood problem we aim to solve. We then describe our proposed algorithm. In Section 3, we demonstrate the performance of our proposed method on simulated data and apply it to proteomic data from a pan-cancer study of hormonally responsive cancers. Finally, we highlight possible extensions in Section 4. ## 2 Materials and methods ### Gaussian graphical models In a gene network model, genes are represented by _nodes_ and associations between them are represented by _edges_. Given measurable molecular units each corresponding to one gene (e.g., the encoded protein or mRNA), a network, or graph, can be constructed from their observed values. Consider \(n\) observed values of the multivariate random vector \(\mathbf{x}=(X_{1},\ldots,X_{p})^{T}\) of node attributes, with each entry corresponding to one of \(p\) nodes. If we assume multivariate Gaussian node attributes, with an \(n\times p\) observation matrix \(\mathbf{X}\) with i.i.d.
rows \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma})\), a _partial correlation network_ can be determined by estimating the inverse covariance matrix, or precision matrix, \(\mathbf{\Theta}=\mathbf{\Sigma}^{-1}\). Specifically, the partial correlation between nodes \(i\) and \(j\), conditional upon the rest of the graph, is given by \[\rho_{ij|V\setminus\{i,j\}}=-\frac{\theta_{ij}}{\sqrt{\theta_{ii}\theta_{jj}}},\] where the \(\theta_{ij}\)'s are the entries of \(\mathbf{\Theta}\) and \(V\) is the set of all nodes (Lauritzen, 1996). The partial correlations coincide with the conditional correlations in the Gaussian setting. Because correlation (resp. partial correlation) equal to zero is equivalent to independence (resp. conditional independence) for Gaussian variables, a conditional independence graph can thus be constructed by determining the non-zero entries of the precision matrix. To ensure invertibility, the precision matrix is also required to be positive definite, \(\mathbf{\Theta}\succ 0\). In high-dimensional settings, the sample covariance matrix \(\mathbf{S}=\frac{1}{n-1}\mathbf{X}^{T}\mathbf{X}\) is rarely of full rank and thus its inverse cannot be estimated directly. It is common to assume a sparse network, meaning the number of edges in the edge set \(E\) is small relative to the number of potential edges in the graph (i.e., the sparsity measure \(2|E|/(p^{2}-p)\) is small). Penalized methods such as the graphical lasso (Friedman et al., 2008) are well established for sparse Gaussian graphical model estimation. In the case of there being multiple (related) data sets available, such as from different tissue types, rather than estimating each network separately, much statistical power could be gained by sharing information across networks through a joint approach. ### Penalized log-likelihood problem Assume a network inference problem with \(K\) groups. We let \(\{\mathbf{\Theta}\}=(\mathbf{\Theta}^{(1)},\ldots,\mathbf{\Theta}^{(K)})\) be the set of their (unknown) precision matrices, and assume that the \(\sum_{k=1}^{K}n_{k}\) observations are independent. We aim to solve the penalized log-likelihood problem (Danaher et al., 2014) \[\{\widehat{\mathbf{\Theta}}\}=\operatorname*{arg\,max}_{\{\mathbf{\Theta}\succ 0\}}\Big{\{}\sum_{k=1}^{K}n_{k}[\log(\det(\mathbf{\Theta}^{(k)}))-\text{tr}(\mathbf{S}^{(k)}\mathbf{\Theta}^{(k)})]-\text{P}(\{\mathbf{\Theta}\})\Big{\}} \tag{1}\] where \(\mathbf{S}^{(k)}\) is the sample covariance matrix of group \(k\) and \(\text{P}(\cdot)\) is a penalty function. In (1), \(\det(\cdot)\) denotes the determinant and \(\text{tr}(\cdot)\) denotes the trace. The joint graphical lasso employs the fused penalty function \[\text{P}(\{\mathbf{\Theta}\})=\lambda_{1}\sum_{k=1}^{K}\sum_{i\neq j}\text{abs}(\theta_{ij}^{(k)})+\lambda_{2}\sum_{k<k^{\prime}}\|\mathbf{\Theta}^{(k)}-\mathbf{\Theta}^{(k^{\prime})}\|_{1} \tag{2}\] where \(\lambda_{1}\) and \(\lambda_{2}\) are positive penalty parameters, \(\text{abs}(\cdot)\) denotes the absolute value function and \(\|\cdot\|_{1}\) denotes the \(\text{L}_{1}\) penalty. This penalty applies \(\text{L}_{1}\) penalties to each off-diagonal element of the \(K\) precision matrices as well as to the differences between corresponding elements of each pair of precision matrices. As for the graphical lasso, the parameter \(\lambda_{1}\) controls the sparsity.
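To make the objective concrete, the following short Python sketch (ours, not part of the stabJGL package) evaluates the fused penalty (2) and the penalized log-likelihood (1) for a given set of precision matrices; the function and variable names are our own.

```python
import numpy as np

def fused_penalty(Thetas, lam1, lam2):
    """Fused penalty P({Theta}) of Eq. (2): lasso on off-diagonals plus fused differences."""
    pen = lam1 * sum(np.abs(T).sum() - np.abs(np.diag(T)).sum() for T in Thetas)
    for a in range(len(Thetas)):
        for b in range(a + 1, len(Thetas)):
            pen += lam2 * np.abs(Thetas[a] - Thetas[b]).sum()
    return pen

def penalized_loglik(Thetas, S_list, n_list, lam1, lam2):
    """Objective of Eq. (1), to be maximized over positive definite Theta^(k)."""
    ll = sum(n * (np.linalg.slogdet(T)[1] - np.trace(S @ T))
             for n, S, T in zip(n_list, S_list, Thetas))
    return ll - fused_penalty(Thetas, lam1, lam2)

# Example: K = 2 identical 3 x 3 precision matrices, so the fused term vanishes.
T = np.array([[1.0, 0.2, 0.0], [0.2, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(fused_penalty([T, T], lam1=0.1, lam2=0.05))   # only the lasso part contributes
```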
The similarity parameter \(\lambda_{2}\) controls the degree to which the \(K\) precision matrices are forced towards each other, encouraging not only similar network structures but also similar precision matrix entries. The current penalty parameter selection approach for \(\lambda_{1}\) and \(\lambda_{2}\) is based on the AIC (Danaher et al., 2014). While suitable for determining network similarities, likelihood-based selection criteria can lead to severe under- or over-selection and thus poor performance in high-dimensional settings (Liu et al., 2010). ### The stabJGL algorithm To improve the performance of the joint graphical lasso with the fused penalty for omics applications and other high-dimensional problems, we propose the stabJGL algorithm for stable sparsity and similarity selection in multiple network reconstruction. Below we outline the algorithm, where we first select the sparsity parameter \(\lambda_{1}\) in the fused penalty (2) based on the notion of model stability, and then the similarity parameter \(\lambda_{2}\) based on model likelihood. StabJGL jointly estimates multiple networks by leveraging their common information, and gives a basis for deeper exploration of their differences, as shown in Figure 1. Figure 1: The workflow of stabJGL, where the network structures of different data types or conditions are jointly estimated and can then be compared. #### 2.3.1 Selecting \(\lambda_{1}\) We select \(\lambda_{1}\) by extending the framework introduced by Liu et al. (2010) in their Stability Approach to Regularization Selection (StARS) to a multiple network setting. The aim is to select the least amount of penalization that makes graphs sparse as well as reproducible under random subsampling. This is done by drawing many random subsamples from each of the \(K\) data types and using them to construct joint graphical lasso graphs over a range of \(\lambda_{1}\) values. The smallest parameter value for which a given graph estimation variability measure does not surpass a specified threshold is then selected. We use a measure of edge assignment instability across subsamples to quantify the variability. Specifically, we consider a grid of \(\lambda_{1}\) values in a suitable interval, i.e., \((0,1]\), and keep the similarity parameter \(\lambda_{2}\) fixed to some small value such as \(0.01\) in the first instance. For \(\eta=1,\ldots,N_{\text{sample}}\), we draw a random subsample from each group \(k\)'s set of \(n_{k}\) observations without replacement, each of size \(b_{k}<n_{k}\). Liu et al. (2010) show that in a single network setting, \(b_{k}=\lfloor 10\sqrt{n_{k}}\rfloor\) maintains theoretical properties for containing the true graph with high probability as well as high empirical performance, and this is the value we use. For each value of \(\lambda_{1}\) to consider, we next construct the corresponding set of joint graphical lasso graphs \(\{G_{(k)}^{\eta}(\lambda_{1})\}_{k=1}^{K}\) from these \(K\) sets of subsamples, using the fused penalty (2). The following is then done for each value of \(\lambda_{1}\) we consider. For each group \(k=1,\ldots,K\) and all possible node pairs \((i,j)\) we estimate the probability of an edge between the nodes over the \(N_{\text{sample}}\) inferred sets of graphs \[\widehat{\psi}_{ij}^{(k)}(\lambda_{1})=\frac{1}{N_{\text{sample}}}\sum_{\eta=1}^{N_{\text{sample}}}\mathbb{1}\left[(i,j)\in G_{(k)}^{\eta}(\lambda_{1})\right], \tag{3}\] where \(\mathbb{1}\left[\cdot\right]\) is the indicator function.
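A sketch of this subsampling step in Python is given below. The joint graphical lasso solver itself is not reimplemented here: `fused_jgl` is a placeholder for any routine that maps a list of data matrices and the pair \((\lambda_{1},\lambda_{2})\) to a list of estimated precision matrices, so this is an outline of the described procedure rather than the package code.

```python
import numpy as np

def edge_frequencies(X_list, lambda1, lambda2, fused_jgl, n_subsamples=20, seed=0):
    """Estimate psi_hat_ij^(k)(lambda1) of Eq. (3): the fraction of subsample fits
    in which edge (i, j) is selected in group k.  `fused_jgl` is an assumed helper
    standing in for the joint graphical lasso fit."""
    rng = np.random.default_rng(seed)
    p = X_list[0].shape[1]
    counts = [np.zeros((p, p)) for _ in X_list]
    for _ in range(n_subsamples):
        subsamples = []
        for X in X_list:
            n_k = X.shape[0]
            b_k = min(int(np.floor(10 * np.sqrt(n_k))), n_k)  # subsample size of Liu et al. (2010)
            subsamples.append(X[rng.choice(n_k, size=b_k, replace=False)])
        for k, Theta in enumerate(fused_jgl(subsamples, lambda1, lambda2)):
            counts[k] += (np.abs(Theta) > 1e-8)               # edge indicator for this subsample
    return [c / n_subsamples for c in counts]
```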
Using this estimated probability, we find \[\widehat{\xi}_{ij}^{(k)}(\lambda_{1})=2\widehat{\psi}_{ij}^{(k)}(\lambda_{1})(1-\widehat{\psi}_{ij}^{(k)}(\lambda_{1})), \tag{4}\] which is an estimate of two times the variance of the Bernoulli indicator of the edge \((i,j)\) in group \(k\). It lies in \([0,0.5]\) and can be regarded as an estimate of the fraction of times two inferred graphs for group \(k\) found with the joint graphical lasso with the given \(\lambda_{1}\) value will disagree on the presence of the edge \((i,j)\). Due to the \(L_{1}\) penalty in (2), the number of inferred edges will decrease as \(\lambda_{1}\) is increased. For a given \(\lambda_{1}\), \(\widehat{\xi}_{ij}^{(k)}(\lambda_{1})\) can be regarded as a measure of the variability of the edge \((i,j)\) in group \(k\) across subsamples, and the total variability of graph \(k\) can be measured by averaging over all edges, yielding the estimate \[\widehat{D}_{(k)}(\lambda_{1})=\frac{1}{\binom{p}{2}}\sum_{i<j}\widehat{\xi}_{ij}^{(k)}(\lambda_{1}). \tag{5}\] For each value of \(\lambda_{1}\), the total variability of the whole set of graphs found by the joint graphical lasso is then found by averaging the variability over all \(K\) networks \[\widehat{D}(\lambda_{1})=\frac{1}{K}\sum_{k=1}^{K}\widehat{D}_{(k)}(\lambda_{1}). \tag{6}\] For sufficiently large \(\lambda_{1}\), all edges are excluded from the model and so the variability \(\widehat{D}(\lambda_{1})\) will be \(0\). The variability will in general increase as the penalty \(\lambda_{1}\) decreases; however, for small enough \(\lambda_{1}\) the graphs will become so dense that the variability starts to decrease again. As sparse network inference is the aim, we therefore monotonize the variability function by letting \(\bar{D}(\lambda_{1})=\sup_{0\leq t\leq\lambda_{1}}\widehat{D}(t)\). Finally, for a given variability threshold \(\beta_{1}\), the optimal penalty is chosen to be \(\widehat{\lambda}_{1}=\sup\{\lambda_{1}:\bar{D}(\lambda_{1})\leq\beta_{1}\}\). As opposed to \(\lambda_{1}\), \(\beta_{1}\) is an interpretable quantity and we propose a default threshold of \(\beta_{1}=0.1\) as suggested by Liu et al. for the original StARS algorithm, which reflects an acceptance of \(10\%\) variability in the edge assignments. #### 2.3.2 Selecting \(\lambda_{2}\) After \(\lambda_{1}\) has been selected, we select \(\lambda_{2}\) with a multiple-network version of the extended BIC (eBIC or BIC\({}_{\gamma}\)) of Foygel and Drton (2010). The eBIC is an extension of the Bayesian Information Criterion of Schwarz (1978), where the prior is reformulated to account for high-dimensional graphical settings. We propose an adaptation of the eBIC to a multiple-network setting, \[\text{BIC}_{\gamma}(\lambda_{1},\lambda_{2})=\sum_{k=1}^{K}\Big{[}n_{k}\text{tr}(\mathbf{S}^{(k)}\widehat{\mathbf{\Theta}}_{\lambda_{1},\lambda_{2}}^{(k)})-n_{k}\log(\det(\widehat{\mathbf{\Theta}}_{\lambda_{1},\lambda_{2}}^{(k)}))+|E_{k}|\log n_{k}+4|E_{k}|\gamma\log p\Big{]}, \tag{7}\] where \(\widehat{\mathbf{\Theta}}_{\lambda_{1},\lambda_{2}}^{(k)}\) is the estimated precision matrix of network \(k\) obtained with the penalty parameters \(\lambda_{1}\) and \(\lambda_{2}\), and \(|E_{k}|\) is the size of the corresponding edge set. A grid of \(\lambda_{2}\) values is considered, with \(\lambda_{1}\) fixed to the value selected in the previous step. The value of \(\lambda_{2}\) that minimizes (7) is selected.
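Putting the two selection steps together, a compact Python outline might look as follows. Here `fused_jgl` and `edge_frequencies` are assumed helpers (the latter as sketched above), and the monotonization is implemented by scanning the grid from the most to the least penalized value, following the verbal description of the selection rule; it is an illustration of the procedure, not the R implementation.

```python
import numpy as np

def select_lambda1(X_list, lambda1_grid, fused_jgl, edge_frequencies,
                   lambda2_init=0.01, beta1=0.1, n_subsamples=20):
    """Stability-based choice of lambda1: the least penalization whose monotonized
    average edge instability (Eqs. 4-6) stays below the threshold beta1."""
    p = X_list[0].shape[1]
    n_pairs = p * (p - 1) / 2.0
    chosen, running_max = None, 0.0
    for lam1 in sorted(lambda1_grid, reverse=True):            # sparsest to densest
        psi = edge_frequencies(X_list, lam1, lambda2_init, fused_jgl, n_subsamples)
        xi = [2.0 * P * (1.0 - P) for P in psi]                 # Eq. (4)
        D_k = [np.triu(x, k=1).sum() / n_pairs for x in xi]     # Eq. (5)
        running_max = max(running_max, float(np.mean(D_k)))     # monotonized Eq. (6)
        if running_max > beta1:
            break
        chosen = lam1
    return chosen

def select_lambda2(X_list, lambda1, lambda2_grid, fused_jgl, gamma=0.0):
    """Likelihood-based choice of lambda2 via the multiple-network eBIC of Eq. (7)."""
    scores = []
    for lam2 in lambda2_grid:
        Thetas = fused_jgl(X_list, lambda1, lam2)
        score = 0.0
        for X, Theta in zip(X_list, Thetas):
            n_k, p = X.shape
            S = np.cov(X, rowvar=False)
            E_k = int((np.abs(np.triu(Theta, k=1)) > 1e-8).sum())
            score += (n_k * np.trace(S @ Theta)
                      - n_k * np.linalg.slogdet(Theta)[1]
                      + E_k * np.log(n_k)
                      + 4.0 * E_k * gamma * np.log(p))
        scores.append(score)
    return lambda2_grid[int(np.argmin(scores))]
```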
Like for the standard eBIC, the additional edge penalty parameter \(\gamma\in[0,1]\) must be chosen. However, since we are using the eBIC for similarity selection rather than sparsity selection, the choice of \(\gamma\) is not as important because we are comparing graphs with the same value of \(\lambda_{1}\) and hence similar levels of sparsity. We typically use \(\gamma=0\), which corresponds to the ordinary BIC, for most applications. Our implementation includes the eBIC generalization to give the user the option of additional penalization in extremely high-dimensional cases. ``` 0:\(n_{k}\times p\) data matrix \(\mathbf{X}^{(k)}\) for \(k=1,\dots,K\) 1:\(\Lambda_{1}\leftarrow\{0.01,0.02,\dots,1\}\), \(\Lambda_{2}\leftarrow\{0,0.01,\dots,0.1\}\) 2:\(\lambda_{2}^{(\text{init})}\gets 0.01\) 3:\(\beta_{1}\gets 0.1\) 4:\(N_{\text{sample}}\gets 20\) 5:\(\gamma\gets 0\) 6:\(b_{k}\leftarrow\lfloor 10\sqrt{n_{k}}\rfloor\) for \(k=1,\dots,K\) 7:\(\mathbf{S}^{(k)}\leftarrow\frac{1}{n_{k}-1}{\mathbf{X}^{(k)}}^{T}\mathbf{X}^{(k)}\) for \(k=1,\dots,K\) 8:for\(\lambda_{1}\) in \(\Lambda_{1}\)do 9:for\(\eta=1\) to \(N_{\text{sample}}\)do 10:for\(k=1\) to \(K\)do 11: Sample \(b_{k}\) indices \(I_{k}\subset\{1,\dots,n_{k}\}\) 12:\(\mathbf{X}_{\text{sample}}^{(k)}\leftarrow\mathbf{X}^{(k)}[I_{k},]\) 13:endfor 14:\(\{G_{(b)}^{\eta}(\lambda_{1})\}_{k=1}^{K}\leftarrow\text{JGL}\left(\{\mathbf{X}_{ \text{sample}}^{(k)}\}_{k=1}^{K}\mid\lambda_{1},\lambda_{2}^{(\text{init})}\right)\) 15:endfor 16:for\(k=1\) to \(K\)do 17:for\(j=1\) to \(p\)do 18:for\(i=1\) to \(j-1\)do 19:\(\widehat{\psi}_{ij}^{(k)}(\lambda_{1})\leftarrow\frac{1}{N_{\text{sample}}} \sum_{\eta=1}^{N_{\text{sample}}}1\left[(i,j)\in G_{(k)}^{\eta}(\lambda_{1})\right]\) 20:\(\widehat{\xi}_{ij}^{(k)}(\lambda_{1})\gets 2\widehat{\psi}_{ij}^{(k)}( \lambda_{1})(1-\widehat{\psi}_{ij}^{(k)}(\lambda_{1}))\) 21:endfor 22:endfor 23:\(\widetilde{D}_{(k)}(\lambda_{1})\leftarrow\frac{1}{(\widehat{\xi})}\sum_{i<j }\widehat{\xi}_{ij}^{(k)}(\lambda_{1})\) 24:endfor 25:\(\widetilde{D}(\lambda_{1})\leftarrow\frac{1}{K}\sum_{k=1}^{K}\widehat{D}_{(k )}(\lambda_{1})\) 26:\(\widetilde{D}(\lambda_{1})\leftarrow\sup_{0\leq t\leq\lambda_{1}}\widetilde{D }(t)\) 27:endfor 28:\(\widehat{\lambda}_{1}\leftarrow\sup\{\lambda_{1}\in\Lambda_{1}:\widetilde{D}( \lambda_{1})\leq\beta_{1}\}\) 29:for\(\lambda_{2}\) in \(\Lambda_{2}\)do 30:\(\{\widehat{\mathbf{\Theta}}_{\widehat{\lambda}_{1}\lambda_{2}}^{(k)},E_{k}\}_{k =1}^{K}\leftarrow\text{JGL}\left(\{\mathbf{X}^{(k)}\}_{k=1}^{K}\mid\widehat{ \lambda}_{1},\lambda_{2}\right)\) 31:\(\text{BIC}_{\gamma}(\widehat{\lambda}_{1},\lambda_{2})\leftarrow\sum_{k=1}^{K} \left[n_{k}\text{tr}(\mathbf{S}^{(k)}\widehat{\mathbf{\Theta}}_{\widehat{\lambda}_{1 },\lambda_{2}}^{(k)})-n_{k}\log(\det(\widehat{\mathbf{\Theta}}_{\widehat{\lambda}_{ 1},\lambda_{2}}^{(k)}))+|E_{k}|\log n_{k}+4|E_{k}|\gamma\log p\right]\) 32:endfor 33:\(\widehat{\lambda}_{2}\leftarrow\arg\min_{\lambda_{2}\in\Lambda_{2}}\text{BIC}_ {\gamma}(\widehat{\lambda}_{1},\lambda_{2})\) 34:\(\{\widehat{\mathbf{\Theta}}_{\text{subGLGL}}^{(k)}\}_{k=1}^{K}\leftarrow\text{JGL} \left(\{\mathbf{X}^{(k)}\}_{k=1}^{K}\mid\widehat{\lambda}_{1},\widehat{\lambda}_{2}\right)\) ``` **Algorithm 1** The stabJGL algorithm #### 2.3.3 Algorithm The full stabJGL algorithm is given in Algorithm 1. \(\text{JGL}(\cdot)\) indicates that the joint graphical lasso function with the fused penalty is applied. 
The output of the JGL function can either be a set of graphs, a set of precision matrices or an edge set, depending on what is required in Algorithm 1. #### 2.3.4 Implementation details StabJGL is implemented in R, and available as an R package at [https://github.com/Camiling/stabJGL](https://github.com/Camiling/stabJGL). The subsampling routine is implemented so it can be done in parallel. The joint graphical lasso fittings are done as in Danaher et al. (2014), using an ADMM (Alternating Direction Method of Multipliers) algorithm (Boyd et al., 2011) for general penalty functions to solve the penalized log-likelihood problem (1). By default, \(20\) subsamples are used and we evaluate \(20\) values each of \(\lambda_{1}\in[0.01,1]\) and \(\lambda_{2}\in[0,0.1]\). As in StARS, we use a subsample size of \(\left\lfloor 10\sqrt{n_{k}}\right\rfloor\) for group \(k=1,\ldots,K\) (Liu et al., 2010). The additional penalty parameter \(\gamma\) in the eBIC for similarity selection is set to \(0\) by default, corresponding to the standard BIC. We found this value to be suitable in most applications but leave the option to increase the penalization. We employ a default variability threshold of \(\beta_{1}=0.1\). ## 3 Results ### Simulated data We first assess the performance of stabJGL on simulated data. We compare the network reconstruction ability of stabJGL to that of state-of-the-art methods, including the joint graphical lasso with the fused penalty (FGL) and group penalty (GGL) with penalty parameters selected with the default AIC-based criterion (Danaher et al., 2014). To assess the performance of another selection criterion specifically designed for high-dimensional graph selection, we also consider FGL with penalty parameters tuned by the extended BIC for multiple graphs (7) with a moderate value of \(\gamma=0.2\) (Foygel and Drton, 2010). We further include the Bayesian spike-and-slab joint graphical lasso (SSJGL) of Li et al. (2019), as well as the graphical lasso (Glasso) of Friedman et al. (2008) tuned by StARS (Liu et al., 2010). The latter estimates each network separately. We generate data that closely resembles our omics application of interest, featuring partial correlations between \(0.1\) and \(0.2\) in absolute value, while also exhibiting the _scale-free_ property - a typical assumption for omics data where the _degree distribution_ (i.e., the distribution of the number of edges that are connected to the nodes) adheres to a power-law distribution (Chen and Sharp, 2004). In the main simulation scenario, we simulate \(K=3\) networks with \(p=100\) nodes, manipulating the degree of similarity in their "true" graphical structures to assess the performance of the method over a wide range of scenarios. We maintain a sparsity of \(0.02\) across all networks and generate data sets from the corresponding multivariate Gaussian distributions with \(n_{1}=150\), \(n_{2}=200\) and \(n_{3}=300\) observations. We then apply different network reconstruction techniques to determine the networks from the data. For FGL and GGL, the two penalty parameters are chosen in a sequential fashion with the default AIC-based criterion proposed by Danaher et al. (2014), with \(20\) values of \(\lambda_{1}\in[0.01,1]\) and \(\lambda_{2}\in[0,0.1]\) respectively being evaluated. We consider the eBIC criterion on the same grid of values for FGL. We consider the same set of \(\lambda_{1}\) and \(\lambda_{2}\) values in the stabJGL algorithm and let \(\gamma=0\) in the eBIC criterion for similarity selection.
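The data-generating mechanism described above can be mimicked with a small generator along the following lines. This is our own Python sketch (the code actually used for the study is in the stabJGL_simulations repository linked below); it uses a Barabási-Albert graph for the scale-free structure and a diagonally dominant precision matrix, so the resulting partial correlations are only roughly in the stated range.

```python
import numpy as np
import networkx as nx

def scale_free_precision(p, rng, weight=0.15):
    """Precision matrix whose non-zero pattern is a scale-free (Barabasi-Albert) graph,
    made positive definite by strict diagonal dominance; sparsity is roughly 2/p."""
    G = nx.barabasi_albert_graph(p, 1, seed=int(rng.integers(1_000_000)))
    Theta = np.eye(p)
    for i, j in G.edges():
        Theta[i, j] = Theta[j, i] = rng.choice([-weight, weight])
    Theta += np.diag(np.abs(Theta).sum(axis=1) - 1.0 + 0.1)    # dominant diagonal
    return Theta

def sample_gaussian_data(Theta, n, rng):
    """Draw n observations from N(0, Theta^{-1})."""
    return rng.multivariate_normal(np.zeros(len(Theta)), np.linalg.inv(Theta), size=n)

rng = np.random.default_rng(42)
Theta_true = scale_free_precision(p=100, rng=rng)
# Three related groups; here the true graph is shared (the study also varies the overlap).
X_list = [sample_gaussian_data(Theta_true, n, rng) for n in (150, 200, 300)]
```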
For stabJGL and the graphical lasso tuned by StARS, we use a variability threshold of \(0.1\) and use \(20\) subsamples. For the Bayesian spike-and-slab joint graphical lasso, all parameter specifications are as suggested by Li et al. (2019). In addition to the above setup, we consider additional settings with \(K\in\{2,4\}\) graphs and \(p=100\) nodes. We only show a summarizing plot of these additional results, but the full tables for these simulations, as well as from additional scenarios with other values of \(K\) and \(p\), are given in the Supplement. We also investigate the effect of the variability threshold \(\beta_{1}\) in stabJGL on the results in a setting with \(p=100\) nodes and \(K=2\) networks. Finally, to compare the scalability of the respective methods we consider the time needed to infer networks for various \(p\) and \(K\). Further details and code for the simulation study can be found at [https://github.com/Camiling/stabJGL_simulations](https://github.com/Camiling/stabJGL_simulations). Estimation accuracy is assessed with the _precision_ (positive predictive value) and the _recall_ (sensitivity). The precision gives the fraction of predicted edges that were correct, while the recall is the fraction of edges in the true graph that were identified by the inference. Because the sparsity of estimated networks will vary between methods, the precision-recall trade-off should be taken into consideration. In general, the recall will increase with the number of selected edges while the precision will decrease. Since sparsity selection is a main feature of our proposed method, we do not consider threshold-free comparison metrics such as the AUC. We therefore put emphasis on the following characteristics in our comparative simulation study: (i) suitable sparsity level selection, (ii) utilization of common information at any level of network similarity, i.e., inference improves with increased network similarity, and (iii) a suitable precision-recall trade-off that does not overly favour either measure. ### Simulation results The results are summarized in Table 1. First, we observe that the fused and group joint graphical lasso with the default AIC-based penalty parameter selection strongly over-select edges in all cases. This leads to high recall, but very low precision. Second, they do not appear to sufficiently utilize network similarities; the performance of the two methods, particularly GGL, differs little between completely unrelated and identical networks. Notably, in all cases the selected value of \(\lambda_{2}\) is smaller for FGL and GGL tuned by AIC than it is for stabJGL. Consequently, similarity is not sufficiently encouraged even in settings where the networks are identical. The AIC criterion does not seem to provide sufficient penalization to encourage suitable sparsity and similarity. On the other hand, we observe that the alternative eBIC criterion gives extremely sparse FGL estimates, resulting in high precision but very low recall. In half of the cases, it selects an empty graph, i.e., no edges. Although the extended BIC is developed specifically for graphical model selection, likelihood-based criteria for sparsity selection tend to perform poorly in high-dimensional settings and risk both severe under- and over-selection (Foygel and Drton, 2010). This issue is avoided in the stabJGL algorithm as the eBIC is only used to select similarity and not sparsity.
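For reference, the sparsity, precision and recall referred to throughout these comparisons can be computed from an estimated and a true adjacency matrix as in the following short sketch (ours; adjacency matrices are assumed boolean and symmetric).

```python
import numpy as np

def network_metrics(A_hat, A_true):
    """Sparsity, precision and recall of an estimated adjacency matrix,
    evaluated on the upper triangle only."""
    iu = np.triu_indices_from(A_hat, k=1)
    est, true = A_hat[iu].astype(bool), A_true[iu].astype(bool)
    tp = np.sum(est & true)
    sparsity = est.sum() / est.size
    precision = tp / est.sum() if est.sum() > 0 else np.nan
    recall = tp / true.sum() if true.sum() > 0 else np.nan
    return sparsity, precision, recall
```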
The Bayesian spike-and-slab joint graphical lasso tends to select very few edges, leading to high precision but low recall. Its performance deteriorates drastically as the network differences increase, leading to extremely low recall. This implies a lack of flexibility to adapt to varying network similarity levels, as has previously been observed (Lingjerde et al., 2022). Out of all the joint methods, stabJGL gives the most accurate sparsity estimate. This ensures that we neither get very low precision like FGL and GGL tuned by AIC, nor very low recall like SSJGL and FGL tuned by eBIC. StabJGL also appears to adapt well to the similarity between networks, with the prediction accuracy increasing with the number of shared edges. As a result, the method either outperforms the graphical lasso tuned by StARS for highly similar networks or performs comparably to it for unrelated networks. The similar performance for unrelated networks can be explained by the fact that the sparsity-controlling penalty parameters of both methods are tuned with a stability-based approach. The results suggest that stabJGL can be used agnostically in settings where there is no prior knowledge about the level of network similarity and does not run any risk of decreased accuracy should the networks have nothing in common. The results for \(K=2\) and \(K=4\) networks are summarized in Figure 2. The results for FGL tuned with eBIC are not shown as it did not select any edges in any of the settings. The findings from the \(K=3\) case are echoed here, with FGL and GGL having high recall but very low precision and particularly GGL exhibiting a lack of adaptation to increased network similarity. On the contrary, SSJGL selects very few edges and thus has high precision but very low recall, with its performance quickly deteriorating for less similar networks. StabJGL achieves a balanced precision-recall trade-off and adapts well to the level of network similarity. Consequently, stabJGL performs comparably to or better than the graphical lasso depending on the degree of similarity between the networks. A key question is whether stabJGL can achieve as high precision as the methods that give sparser networks (i.e., SSJGL) by using a lower variability threshold. Similarly, we want to see if stabJGL can achieve as high recall as the methods that infer more edges (i.e., FGL and GGL). To investigate this, we consider the same setting as in Figure 2 with \(K=2\) networks, focusing specifically on the case where the two networks have \(20\%\) edge agreement. Table 2 compares the performance of stabJGL for different values of the variability threshold \(\beta_{1}\) to the other methods. For \(\beta_{1}=0.01\), stabJGL gives very sparse estimates and obtains comparable precision and recall to SSJGL. For the higher threshold \(\beta_{1}=0.2\), stabJGL selects a large number of edges and obtains comparable recall to FGL and GGL while retaining a higher precision level. A complete comparison for all levels of edge agreement is given in the Supplement (Figure S3), where we similarly find that by varying the variability threshold \(\beta_{1}\) we can obtain at least as high precision and/or recall as the other methods at any level of similarity. The fact that stabJGL allows the user to obtain higher or lower sparsity by changing the variability threshold means that the method can be adapted to reflect the priorities of the user (i.e., concern for false positives versus false negatives).
For most applications, a middle-ground value such as \(0.1\) yields a good balance between false positives and false negatives, as demonstrated in the simulations. ### Runtime profiling Figure 3 shows the CPU time used to jointly infer networks for \(K\in\{2,3,4\}\) networks and various numbers of nodes \(p\), with \(n\in\{100,150\}\) observations, for the joint graphical lasso with the fused penalty (FGL) with penalty parameters tuned with the AIC and stabJGL with the same parameter specifications as in the previously described simulations. Due to an efficient parallelized implementation, stabJGL has an almost identical run time to FGL when the same number of \(\lambda_{1}\) and \(\lambda_{2}\) values is considered. Thus, the increased estimation accuracy of stabJGL does not come at a computational cost. It is important to note that due to the generalized fused lasso problem having a closed-form solution in the special case of \(K=2\) (Danaher et al., 2014), stabJGL is substantially faster for only two networks than for \(K>2\). As stabJGL uses the fused penalty this comparison is the most relevant, but a run time comparison of all methods considered in our simulation study can be found in the Supplement (Figure S1). In the Supplement, we also demonstrate that stabJGL can be applied to problems with \(p>1,000\) nodes and \(K>2\) networks within reasonable time (Figure S2). ### Pan-cancer data We perform a proteomic network analysis of Reverse Phase Protein Array (RPPA) data from The Cancer Genome Atlas (TCGA) across different pan-Cancer tumor types (Cancer Genome Atlas Network and others, 2012). In a large proteomic pan-Cancer study of 11 TCGA tumor types, Akbani et al. (2014) identified a major tumor super cluster consisting of hormonally responsive "women's cancers" (Luminal breast cancer, ovarian cystadenocarcinoma, and uterine corpus endometrial carcinoma). Our objective is to map the proteomic network structure of the respective tumor types, so that we can get a better grasp of the common mechanisms at play in the hormonally responsive tumors. We are also interested in highlighting the differences.
\begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline \multirow{3}{*}{Similarity} & \multirow{3}{*}{Method} & \multirow{3}{*}{\(\lambda_{1}\)} & \multirow{3}{*}{\(\lambda_{2}\)} & \multirow{3}{*}{Sparsity} & \multicolumn{3}{c}{\(n_{1}=150\)} & \multicolumn{3}{c}{\(n_{2}=200\)} & \multicolumn{3}{c}{\(n_{3}=300\)} \\ \cline{6-13} & & & & & & & & & & & & & \\ \hline 100 \(\%\) & Glasso & 0.208 & - & 0.026 (0.007) & 0.41 (0.08) & 0.51 (0.06) & 0.018 (0.004) & 0.57 (0.08) & 0.51 (0.05) & 0.014 (0.002) & 0.76 (0.07) & 0.54 (0.05) \\ & FGL & 0.114 & 0.021 & 0.087 (0.028) & 0.22 (0.08) & 0.88 (0.04) & 0.064 (0.020) & 0.31 (0.09) & 0.90 (0.03) & 0.041 (0.010) & 0.46 (0.09) & 0.91 (0.03) \\ & FGL (eBIC) & 0.365 & 0.021 & 0.004 (0.004) & 0.97 (0.06) & 0.20 (0.20) & 0.004 (0.004) & 0.99 (0.03) & 0.20 (0.20) & 0.004 (0.004) & 0.99 (0.01) & 0.20 (0.20) \\ & GGL & 0.114 & 0.007 & 0.152 (0.024) & 0.11 (0.03) & 0.81 (0.04) & 0.114 (0.019) & 0.15 (0.04) & 0.84 (0.04) & 0.069 (0.013) & 0.27 (0.06) & 0.87 (0.03) \\ & SSIGL & - & - & 0.011 (0.001) & 1.00 (0.00) & 0.54 (0.04) & 0.011 (0.001) & 1.00 (0.00) & 0.54 (0.04) & 0.011 (0.001) & 1.00 (0.00) & 0.54 (0.04) \\ & stabIGL & 0.166 & 0.067 & 0.015 (0.001) & 0.88 (0.05) & 0.66 (0.04) & 0.015 (0.001) & 0.90 (0.04) & 0.66 (0.04) & 0.015 (0.001) & 0.91 (0.03) & 0.66 (0.04) \\ \hline 80 \(\%\) & Glasso & 0.202 & - & 0.026 (0.007) & 0.40 (0.08) & 0.50 (0.06) & 0.018 (0.005) & 0.57 (0.12) & 0.48 (0.07) & 0.015 (0.002) & 0.75 (0.07) & 0.55 (0.06) \\ & FGL & 0.114 & 0.015 & 0.105 (0.030) & 0.18 (0.06) & 0.84 (0.04) & 0.072 (0.023) & 0.25 (0.08) & 0.82 (0.04) & 0.045 (0.012) & 0.41 (0.10) & 0.86 (0.03) \\ & FGL (eBIC) & 0.480 & 0.001 & 0.000 (0.000) & - & - & - & - & 0.000 (0.000) & - & - \\ & GGL & 0.115 & 0.008 & 0.149 (0.028) & 0.11 (0.03) & 0.80 (0.05) & 0.107 (0.023) & 0.16 (0.05) & 0.80 (0.05) & 0.065 (0.015) & 0.28 (0.10) & 0.85 (0.05) \\ & SSIGL & - & - & 0.008 (0.001) & 1.00 (0.01) & 0.40 (0.04) & 0.008 (0.001) & 0.97 (0.03) & 0.39 (0.04) & 0.008 (0.001) & 0.99 (0.01) & 0.40 (0.04) \\ & stabIGL & 0.166 & 0.053 & 0.014 (0.002) & 0.84 (0.08) & 0.59 (0.04) & 0.012 (0.001) & 0.90 (0.04) & 0.54 (0.04) & 0.012 (0.001) & 0.93 (0.03) & 0.56 (0.04) \\ \hline 60 \(\%\) & Glasso & 0.206 & - & 0.026 (0.007) & 0.41 (0.08) & 0.51 (0.06) & 0.017 (0.005) & 0.59 (0.11) & 0.48 (0.08) & 0.015 (0.002) & 0.75 (0.06) & 0.55 (0.05) \\ & FGL & 0.114 & 0.010 & 0.124 (0.029) & 0.14 (0.04) & 0.81 (0.03) & 0.086 (0.022) & 0.20 (0.05) & 0.81 (0.04) & 0.054 (0.014) & 0.33 (0.08) & 0.84 (0.04) \\ & FGL (eBIC) & 0.462 & 0.003 & 0.001 (0.003) & 0.99 (0.05) & 0.04 (0.12) & 0.001 (0.002) & 0.99 (0.02) & 0.03 (0.10) & 0.001 (0.002) & 1.00 (0.01) & 0.04 (0.11) \\ & GGL & 0.114 & 0.006 & 0.156 (0.024) & 0.11 (0.02) & 0.80 (0.03) & 0.112 (0.019) & 0.15 (0.03) & 0.81 (0.04) & 0.070 (0.013) & 0.25 (0.06) & 0.85 (0.04) \\ & SSIGL & - & - & 0.006 (0.001) & 0.99 (0.02) & 0.31 (0.03) & 0.006 (0.001) & 0.95 (0.03) & 0.30 (0.03) & 0.006 (0.001) & 0.97 (0.03) & 0.31 (0.03) \\ & stabIGL & 0.166 & 0.044 & 0.015 (0.003) & 0.75 (0.09) & 0.55 (0.05) & 0.012 (0.001) & 0.87 (0.05) & 0.50 (0.04) & 0.011 (0.001) & 0.92 (0.04) & 0.52 (0.04) \\ \hline 40 \(\%\) & Glasso & 0.202 & - & 0.027 (0.008) & 0.39 (0.07) & 0.51 (0.06) & 0.018 (0.005) & 0.57 (0.10) & 0.49 (0.07) & 0.015 (0.003) & 0.77 (0.09) & 0.55 (0.07) \\ & FGL & 0.114 & 0.007 & 0.137 (0.024) & 0.12 (0.03) & 0.80 (0.04) & 0.097 (0.021) & 0.17 (0.04) & 0.80 (0.04) & 0.055 (0.013) & 0.32 (0.07) & 0.84 (0.05) \\ & FGL (eBIC) & 0.485 & 0.001 & 
0.000 (0.000) & - & - & - & 0.000 (0.000) & - & - & - & 0.000 (0.000) & - & - \\ & GGL & We consider mature RPPA data from Luminal breast cancer (BRCA, \(n=273\)), high-grade serous ovarian cystadenocarcinoma (OVCA, \(n=412\)), and uterine corpus endometrial carcinoma (UCEC, \(n=404\)). All data is downloaded from the UCSC Xena Browser (Goldman et al., 2020). The data is measured with \(p=131\) high-quality antibodies that target (phospho)-proteins. To alleviate batch effects, the RPPA data is normalized with replicate-base normalization (Akbani et al., 2014). We use stabJGL to jointly estimate the proteomic networks of the respective tumor types and interpret the results and their implications. We compare the output with that obtained with the fused joint graphical lasso (FGL) of Danaher et al. (2014) with the default penalty parameter tuning with AIC as described in Subsection 3.1. Further details and code for the analysis is given at [https://github.com/Camiling/stabJGL_analysis](https://github.com/Camiling/stabJGL_analysis). ### Pan-cancer analysis results #### 3.5.1 Estimated proteomic networks The resulting stabJGL proteomic networks of the three tumor types are shown in Figure 4, where we observe plenty of common edges as well as network-specific ones. The sparsity as well as the selected penalty parameter values in the resulting stabJGL and FGL networks is shown in Table 3. The tendency as observed in the simulations of FGL tuned by the AIC to over-select edges appears to be consistent with the findings in this context. With more than two thirds of all potential edges being determined as present by FGL, the results are challenging to interpret and derive meaningful conclusions from. From a biological standpoint, we would not expect a proteomic network to be this saturated in terms of associations due to the expected scale-free property of the degree distribution (Barabasi and Oltvai, 2004). While the degree distributions of the sparse stabJGL networks all follow a power-law with many low-degree nodes and fewer high-degree ones (hubs), an expected trait for omics data (Chen and Sharp, 2004), the degree distributions of the FGL networks do not. The full degree distributions are shown in the Supplement (Figure S4). Figure 2: Performance of the Glasso, FGL and GGL tuned by AIC, SSJGL and stabJGL, reconstructing \(K\in\{2,4\}\) graphs with \(p=100\) nodes and various similarity of the true graph structures. The similarity between the graphs is shown as the percentage of edges they have in common. The results are averaged over \(N=100\) replicates and show the precision and recall for the first estimated graph in each setting, reconstructed from \(n\in\{100,150\}\) observations and \(n\in\{150,200,250,300\}\) observations for \(K=2\) and \(K=4\) respectively. Standard deviation bars are shown for all methods. All graphs have true sparsity \(0.02\). In terms of penalty parameters, we see that just like for the simulated data the AIC selects very small penalty parameters for FGL, resulting in little sparsity and similarity encouragement. Given the findings of Akbani et al. (2014) about the presence of a super cluster consisting of the three hormonally responsive cancer types, it is not unreasonable to expect at least some proteomic network similarity to be encouraged by a joint method. This is achieved by stabJGL, which selects a large enough value of \(\lambda_{2}\) to encourage similarity. 
A comparison of the pairwise similarities of the proteomic networks is given in Figure 5, where similarity is measured by Matthew's Correlation Coefficient (MCC), a discretized Pearson correlation coefficient that can be used to quantify pairwise network similarities (Matthews (1975)). StabJGL finds the networks of the three tumor types to be more similar than FGL, in accordance with the findings of Akbani et al. (2014). \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & & & & \multicolumn{3}{c}{\(n_{1}=100\)} & \multicolumn{3}{c}{\(n_{2}=150\)} \\ \cline{4-9} Method & \(\beta_{1}\) & \(\lambda_{1}\) & \(\lambda_{2}\) & Sparsity & Precision & Recall & Sparsity & Precision & Recall \\ \hline Glasso & - & 0.239 & - & 0.021 (0.009) & 0.38 (0.11) & 0.36 (0.08) & 0.025 (0.007) & 0.41 (0.09) & 0.49 (0.06) \\ FGL & - & 0.168 & 0.009 & 0.094 (0.016) & 0.13 (0.03) & 0.60 (0.06) & 0.051 (0.010) & 0.25 (0.05) & 0.62 (0.06) \\ GGL & - & 0.167 & 0.012 & 0.091 (0.018) & 0.14 (0.03) & 0.59 (0.06) & 0.049 (0.012) & 0.26 (0.06) & 0.61 (0.06) \\ SSJGL & - & - & - & 0.003 (0.001) & 0.88 (0.09) & 0.12 (0.02) & 0.003 (0.001) & 0.93 (0.07) & 0.12 (0.02) \\ stabJGL & 0.01 & 0.335 & 0.029 & 0.003 (0.002) & 0.86 (0.13) & 0.14 (0.07) & 0.002 (0.001) & 0.98 (0.04) & 0.12 (0.06) \\ stabJGL & 0.05 & 0.271 & 0.064 & 0.006 (0.003) & 0.81 (0.13) & 0.24 (0.07) & 0.004 (0.002) & 0.96 (0.05) & 0.20 (0.08) \\ stabJGL & 0.10 & 0.218 & 0.090 & 0.010 (0.002) & 0.64 (0.07) & 0.33 (0.04) & 0.007 (0.001) & 0.80 (0.06) & 0.28 (0.05) \\ stabJGL & 0.20 & 0.166 & 0.093 & 0.030 (0.003) & 0.36 (0.03) & 0.55 (0.05) & 0.023 (0.002) & 0.46 (0.03) & 0.52 (0.05) \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of stabJGL for different values of the variability threshold \(\beta_{1}\) on simulated data, compared to other graph reconstruction methods. The methods are used to estimate graphs with \(p=100\) nodes from \(K=2\) networks, both of sparsity \(0.02\), of which \(20\%\) of their edges are in common. The performance of stabJGL is compared to that of Glasso, FGL, GGL and SSJGL. The results are averaged over \(N=100\) simulations and shows the sparsity, precision, and recall of each of the \(K=2\) estimated graphs. The corresponding standard deviations are shown in parentheses. The graphs are reconstructed from \(n_{1}=100\) and \(n_{2}=150\) observations. The average selected values of the penalty parameters \(\lambda_{1}\) and \(\lambda_{2}\) for the relevant methods is shown as well. Figure 3: CPU time in seconds on a logarithmic scale used to jointly infer networks for \(K\in\{2,3,4\}\) networks and various numbers of nodes \(p\), with \(n\in\{100,150\}\) observations, for FGL tuned with AIC and stabJGL. The computations were performed on a 16-core Intel Xeon CPU, 2.60 GHz. #### 3.5.2 Edge validation in STRING To compare the level of evidence supporting the edges detected by stabJGL and FGL tuned by the AIC in the literature, we conduct edge validation using the STRING database of known and predicted protein-protein interactions [21]. To ensure the reliability of the validation process, we only consider the experimentally validated interactions in STRING as evidence, with default confidence score threshold \(\geq 0.4\). The fraction of edges with supporting evidence in the STRING database is computed for the respective stabJGL and FGL networks and shown in Table 4. 
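The pairwise similarity measure used for Figure 5 above is the Matthews correlation coefficient between the edge indicator vectors of two estimated networks; a minimal sketch (ours) of that computation is given below.

```python
import numpy as np

def network_mcc(A1, A2):
    """Matthews correlation coefficient between the edge sets of two graphs,
    treating one adjacency matrix as the 'prediction' of the other."""
    iu = np.triu_indices_from(A1, k=1)
    x, y = A1[iu].astype(bool), A2[iu].astype(bool)
    tp = np.sum(x & y)
    tn = np.sum(~x & ~y)
    fp = np.sum(x & ~y)
    fn = np.sum(~x & y)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom > 0 else 0.0
```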
The analysis reveals that for all three tumor types investigated, a higher proportion of the edges detected by stabJGL had supporting evidence in the STRING database compared to those identified by FGL. \begin{table} \begin{tabular}{l c c} & \multicolumn{2}{c}{Edge evidence \(\%\)} \\ \cline{2-3} Data set & FGL & stabJGL \\ \hline BRCA & \(5.4\%\) & \(\mathbf{12.3}\%\) \\ UCEC & \(5.6\%\) & \(\mathbf{10.0}\%\) \\ OVCA & \(5.7\%\) & \(\mathbf{12.4}\%\) \\ \hline \end{tabular} \end{table} Table 4: Comparison of evidence for edges in the respective FGL tuned by AIC and stabJGL proteomic networks of breast cancer (BRCA), ovarian cystadenocarcinoma (OVCA) and uterine corpus endometrial carcinoma (UCEC) tumors, considering experimentally determined protein-protein interactions documented in the STRING database. The highest percentage of edges with evidence is in bold. Figure 4: Proteomic network structure identified by stabJGL for the breast cancer (BRCA), ovarian cystadenocarcinoma (OVCA) and uterine corpus endometrial carcinoma (UCEC) tumors. The blue nodes represent proteins, and edges common to all three networks are marked in red, otherwise they are grey. \begin{table} \begin{tabular}{l c c c c c} & & & \multicolumn{3}{c}{Sparsity} \\ \cline{4-6} & \(\lambda_{1}\) & \(\lambda_{2}\) & BRCA & UCEC & OVCA \\ \hline FGL & 0.010 & 0.000 & 0.689 & 0.709 & 0.679 \\ stabJGL & 0.323 & 0.008 & 0.049 & 0.036 & 0.039 \\ \hline \end{tabular} \end{table} Table 3: Network analysis results for stabJGL and FGL tuned by the AIC, applied to data from breast cancer (BRCA), ovarian cystadenocarcinoma (OVCA) and uterine corpus endometrial carcinoma (UCEC) tumors. #### 3.5.3 Findings consistent with literature StabJGL successfully identifies protein-protein interactions known from literature. To highlight the findings of the proposed methodology, we only discuss edges and central proteins identified by stabJGL but not FGL. One example is the edge between activated (S345-phosphorylated) Checkpoint kinase 1 (Chk1) and DNA repair protein RAD51 homolog 1 (Rad51) in ovarian and breast cancer. The complex between the tumor suppressor BRCA2, which manifests predominantly in ovarian and breast cancer, and Rad51 is mediated by the DNA damage checkpoint Chk1 through Rad51 phosphorylation [Nair et al., 2020, Bahassi et al., 2008]. It is also reassuring that stabJGL identifies many relevant tumor type-specific proteins as hubs in the relevant tumor type only, such as mammalian target of rapamycin (mTOR), Tuberous Sclerosis Complex 2 (Tuberin) and Ribosomal protein S6 in BRCA, all of which are involved in or up/downstream of the PI3K/AKT/mTOR pathway known to frequently be deregulated in Luminal breast cancer [Miricescu et al., 2020]. Lists of the top hubs in the respective stabJGL and FGL networks of the different tumor types, and their node degree, are given in the Supplement (Tables S5 and S6). StabJGL also captures edges that we expect to be present in all three tumor types, such as the known interaction between the transcription factor Forkhead box O3 (FOXO3a) and 14-3-3-epsilon, which facilitates cancer cell proliferation [Tzivion et al., 2011, Nielsen et al., 2008]. This common interaction is documented in the STRING database. Figure 6 shows the network structure identified by stabJGL that is common to all three tumor types.
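The hub lists and the common structure shown in Figure 6 follow directly from the estimated adjacency matrices; a minimal sketch (ours, with hypothetical variable names) of both operations:

```python
import numpy as np

def top_hubs(A, node_names, n_top=10):
    """Nodes ranked by degree in a single estimated network."""
    degrees = A.sum(axis=0)
    order = np.argsort(degrees)[::-1][:n_top]
    return [(node_names[i], int(degrees[i])) for i in order]

def common_network(adjacencies):
    """Edges present in every one of the K estimated networks (cf. Figure 6)."""
    common = adjacencies[0].copy().astype(bool)
    for A in adjacencies[1:]:
        common &= A.astype(bool)
    return common

# Hypothetical usage with three estimated tumor-type networks (boolean p x p arrays):
# hubs_brca = top_hubs(A_brca, protein_names)
# A_common = common_network([A_brca, A_ucec, A_ovca])
```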
Central proteins in this common network structure include Oncoprotein 18 (Stathmin), which is known to be relevant in all three hormonally responsive cancers due to its role in the regulation of cell growth and motility [Bieche et al., 1998, Trovik et al., 2011, Belletti and Baldassarre, 2011]. #### 3.5.4 Potential candidate hubs The recovery of documented links in the protein networks estimated by stabJGL highlights its capability to detect numerous relevant proteins and interactions. The potential for new discoveries is however an important aspect of stabJGL, as suggested by its good performance on simulated data. For example, stabJGL identifies phosphorylated epidermal growth factor receptor (EGFR) as a central hub protein in all three tumor types. While known to be relevant in ovarian cancer [Zhang et al., 2016, Yang et al., 2013], the role of activated EGFR in uterine corpus endometrial carcinoma and Luminal breast cancer is not yet clarified. Our findings suggest it could be relevant in all three hormonally responsive tumor types. Further, Platelet endothelial cell adhesion molecule (CD31) is found to be a central protein in UCEC only. The protein is important for angiogenesis and has been implicated in other tumor types such as haemangioma [Bergom et al., 2005]. Its prominence in the proteomic UCEC network suggests it may play a crucial role in this tumor type as well. Overall, these results showcase how stabJGL can aid in generating hypotheses by identifying central proteins and associations. Figure 5: Pairwise Matthew's Correlation Coefficient between the proteomic network structures of the breast cancer (BRCA), ovarian cystadenocarcinoma (OVCA) and uterine corpus endometrial carcinoma (UCEC) tumors, identified by FGL tuned by the AIC and stabJGL respectively. ## 4 Discussion Suitable sparsity and similarity selection is key for capturing and studying multiple related biological networks. We have proposed the stabJGL algorithm, which determines the penalty parameters in the fused graphical lasso for multiple networks based on the principle of model stability. StabJGL demonstrably avoids the under- or over-selection of edges observed in state-of-the-art selection methods based on information criteria, and succeeds at leveraging network similarities to a suitable degree. Consequently, the method can be employed in situations where the actual degree of similarity is uncertain, resulting in marked benefits with minimal risks associated with its use. StabJGL offers a fast parallelized implementation, particularly for \(K=2\) networks as a closed-form solution exists. We successfully apply the method to problems with \(p>1,000\) nodes and \(K>2\) networks. With our novel approach, we can identify both common and distinct mechanisms in the proteomic networks of different types of hormonally responsive women's cancers. The results obtained with stabJGL are in line with known biology and complement those of Akbani et al. (2014) by offering additional understanding of the underlying mechanisms in action. By recognizing various proteins as highly critical in the proteomic networks, stabJGL suggests their possible involvement in driving the diseases. The method both identifies proteins that are central in all three hormonally responsive cancers (e.g., phosphorylated EGFR) and proteins of tumor-specific relevance (e.g., CD31 in UCEC). Future extensions of the method can include alternative measures of variability, such as the entropy (see, e.g., Lartigue et al. (2020)).
Further, while the method is formulated specifically for the joint graphical lasso with the fused penalty, it can in principle be used for any joint network approach requiring the tuning of sparsity- and similarity-controlling parameters. One potential method of application is JCGL (Huang et al., 2017), which is based on a group lasso penalty and currently fixes the penalty parameters according to theoretical results. To conclude, stabJGL provides a reliable approach to joint network inference of omics data. The output can provide a better understanding of both common and data type-specific mechanisms, which can be used for hypothesis generation regarding potential therapeutic targets. Figure 6: The proteomic network structure identified by stabJGL common to all three tumor types (BRCA, UCEC and OVCA). The node size indicates the degree in the common network structure, with proteins with more edges being represented by larger nodes. ## Software A user-friendly R package for stabJGL with tutorials is available on Github: [https://github.com/Camiling/stabJGL](https://github.com/Camiling/stabJGL). R code for the simulations and data analyses in this paper is available at [https://github.com/Camiling/stabJGL_simulations](https://github.com/Camiling/stabJGL_simulations) and [https://github.com/Camiling/stabJGL_analysis](https://github.com/Camiling/stabJGL_analysis). ## Funding This research is funded by the UK Medical Research Council programme MRC MC UU 00002/10 (C.L. and S.R.) and the Aker Scholarship (C.L.).
2309.02951
Interface disorder as a cause for kinetic Rashba-Edelstein effect and interface Spin-Hall effect at the metal-insulator boundary
The spin phenomena observed at a clean metal-insulator interface are typically reduced to the Rashba-Edelstein effect, which leads to spin accumulation over a few monolayers. We demonstrate that the presence of interface disorder significantly expands the range of potential phenomena. Specifically, the skew scattering at the metal - insulator boundary gives rise to the "kinetic Rashba-Edelstein effect", where spin accumulation occurs on a much larger length scale comparable to the mean free path. Moreover, at higher orders of spin-orbit interaction, skew scattering is accompanied by spin relaxation, resulting in the interface spin-Hall effect - a conversion of electrical current to spin current at the metal surface. Unlike the conventional spin-Hall effect, this phenomenon persists even within the Born approximation. These two predicted phenomena can dominate the spin density and spin current in devices of intermediate thickness.
A. V. Shumilin, V. V. Kabanov
2023-09-06T12:39:32Z
http://arxiv.org/abs/2309.02951v2
Interface disorder as a cause for kinetic Rashba-Edelstein effect and interface Spin-Hall effect at the metal - insulator boundary. ###### Abstract The spin phenomena observed at a clean metal-insulator interface are typically reduced to the Rashba-Edelstein effect, which leads to spin accumulation over a few monolayers. We demonstrate that the presence of interface disorder significantly expands the range of potential phenomena. Specifically, the skew scattering at the metal - insulator boundary gives rise to the "kinetic Rashba-Edelstein effect", where spin accumulation occurs on a much larger length scale comparable to the mean free path. Moreover, at higher orders of spin-orbit interaction, skew scattering is accompanied by spin relaxation, resulting in the interface spin-Hall effect - a conversion of electrical current to spin current at the metal surface. Unlike the conventional spin-Hall effect, this phenomenon persists even within the Born approximation. These two predicted phenomena can dominate the spin density and spin current in devices of intermediate thickness. ## I Introduction Spin-orbit interaction is a fundamental phenomenon that enables the exchange of angular momentum between orbital and spin degrees of freedom. One of the promising applications of this interaction is in the development of spin-orbit torque devices, which offer a scalable and field-free solution for magnetic memory that can be controlled by electrical currents [1; 2; 3]. Experimental studies have already demonstrated the magnetization switching [4; 5; 6] and domain wall motion [7] in such devices, with other potential applications including the control of magnetic skyrmions [8] and spin-waves [9]. The basic setup for a spin-orbit torque device involves a magnetic bilayer composed of a heavy metal layer with strong spin-orbit coupling and a ferromagnetic layer that acts as the detector for spin polarization and spin current from the heavy metal. Although this simple bilayer composition is already functional [10], additional layers are often added to enhance or modify the device properties. These additional layers can be added from the side of the ferromagnet [11], from the side of the heavy metal [12], or between them [13]. They are usually composed of materials possessing specific spin properties, such as antiferromagnets [14] and topological insulators [15; 16]. Surprisingly, recent research has shown that even insulating non-magnetic molecules can significantly enhance the spin torque when added from the side of a heavy metal, for heavy metal thicknesses up to 5 nm [17]. These results have drawn our attention to the spin phenomena occurring at the interface between heavy metals and insulators. Spin generation in magnetic bilayers is typically attributed to either the Rashba-Edelstein surface effect or the Spin-Hall effect (see Fig. 1). Although it can be difficult to distinguish between these effects in a given experiment [4], they differ significantly in terms of the device engineering. The Rashba-Edelstein effect results in spin polarization at the surface of the heavy metal where inversion symmetry is broken. However, this polarization is confined to only a few monolayers near the interface [18; 19; 20; 21], meaning that the spin generated at the heavy metal interface with a third material can affect the ferromagnetic layer only in very thin devices.
When the heavy metal layer is thick, the spin-torque in the ferromagnetic layer is usually attributed to the Spin-Hall effect, which is the conversion of electrical to spin current in the heavy metal layer [22; 23]. However, the Spin-Hall effect is related to the bulk properties of the heavy metal [24; 25; 26] and is not expected to be significantly influenced by the heavy metal - insulator interface. In relatively clean samples, the Spin-Hall effect is dominated by skew scattering at impurities in the bulk, which is absent in the Born approximation, resulting in a suppression factor of \(V_{0}/\varepsilon_{F}\), where \(V_{0}\) is the potential of a single impurity and \(\varepsilon_{F}\) is the Fermi energy. The recently predicted interface Spin-Hall effect combines the properties of Rashba-Edelstein and conventional Spin-Hall effects. It refers to the electrical to spin current conversion at the interface. Its phenomenological possibility is demonstrated in [27; 28; 29]. However, it has only been studied for the interface of two metals and was attributed to spin filtering, which involves different probabilities for spin-up and spin-down electrons to traverse the interface between metals [30; 31; 2]. Similar spin-filtering was also predicted for tunneling through semiconductor barriers [32; 33; 34]. Here we investigate the spin kinetics near a disordered heavy metal-insulator interface. We demonstrate that the disorder significantly increases the variety of interface spin phenomena. Skew scattering at the interface impurities causes spin accumulation over a distance comparable to the mean free path from the interface. This phenomenologically corresponds to the Rashba-Edelstein effect; however, the thickness of the spin accumulation layer is significantly larger than that predicted in [19; 20]. Combined with spin relaxation, this leads to the interface Spin-Hall effect, which is absent at a clean metal-insulator interface. Both of these phenomena are sensitive to the materials that make up the interface and their properties, as well as their disorder. Figure 1: The three phenomenological effects leading to the spin polarization in the heavy metal layer. (A) Rashba-Edelstein effect, (B) Spin-Hall effect, (C) Interface spin-Hall effect. The blue arrow stands for the direction of electrical current. Grey arrows show the direction of spin flow. The red arrows correspond to the direction of spin polarization. Green spheres depict the impurities. ## II Model of the heavy metal surface We consider a clean heavy metal interface with an insulator described by the model Hamiltonian \[\widehat{H}=\frac{\widehat{\mathbf{p}}^{2}}{2m}+U(z)-\gamma\frac{\partial U}{\partial z}\left(\sigma_{x}\widehat{p}_{y}-\sigma_{y}\widehat{p}_{x}\right) \tag{1}\] Here \(\widehat{\mathbf{p}}=-i\hbar\nabla\) is the momentum operator, \(m\) is the effective mass, and \(U(z)=U_{0}\theta(z)\) is the potential energy describing the abrupt barrier at \(z=0\) with the height \(U_{0}\). \(\gamma\) is the effective spin-orbit interaction inside the heavy metal. The solution of this Hamiltonian, presented in Appendix A, leads to the following electron wavefunction on the metal side of the interface \[\Psi_{\alpha}(\mathbf{k})=\frac{e^{i\mathbf{k}_{\perp}\mathbf{r}_{\perp}}}{\sqrt{2V}}\left(e^{i|k_{z}|z}+\hat{r}(\mathbf{k})e^{-i|k_{z}|z}\right)u_{\alpha}. \tag{2}\] Here \(\mathbf{k}\) is the electron wavevector, \(k_{z}\) is its component along \(z\), \(\mathbf{k}_{\perp}\) is its \(xy\)-component and \(u_{\alpha}\) is an arbitrary spinor. Eq.
(2) includes the spin-dependent reflection amplitude \(\hat{r}(\mathbf{k})\) \[\hat{r}(\mathbf{k})=-e^{2i\phi_{0}}\left[\cos\left(\Delta\phi\right)\hat{1}+i \sin\left(\Delta\phi\right)\hat{\sigma}_{\mathbf{k}}\right], \tag{3}\] where \(\phi_{0}=(\phi_{+}+\phi_{-})/2\) is the average phase change during the reflection and \(\Delta\phi=\phi_{+}-\phi_{-}\) shows its spin dependence. \(\phi_{\pm}=\arctan[k_{z}/(\kappa\mp 2m\gamma U_{0}k_{\perp}/\hbar)]\), \(\kappa=\sqrt{2mU_{0}/\hbar^{2}-k_{z}^{2}}\). \(\hat{\sigma}_{\mathbf{k}}\) is the combination of Pauli matrices. \[\hat{\sigma}_{\mathbf{k}}=\frac{k_{y}}{|k_{\perp}|}\hat{\sigma}_{x}-\frac{k_{ x}}{|k_{\perp}|}\hat{\sigma}_{y} \tag{4}\] It is demonstrated in Appendix A that in agreement with previous studies [19; 20] the clean interface does not exhibit an interface spin Hall effect and the spin polarization is confined to several monolayers. The primary objective of this study is to incorporate interface disorder into the theory. The most common approach to the electron kinetics near disordered interface involves finite probabilities of specular and non-specular reflection [35]. However, to address the new spin phenomena it is crucial to consider a microscopic mechanism responsible for non-specular reflection. Two conventional approaches exist for modeling reflection from disordered interfaces: roughness of the interface [36; 37] and interface impurities [38; 39; 40]. Both approaches allow for the possibility of non-specular reflection, and we consider them to be interchangeable. In this work we adopt the latter one. A single impurity can be described with potential energy \(V_{0}(\mathbf{r})\). With respect to the spin-orbit interaction it leads to the additional term in the electron Hamiltonian \[V(\mathbf{r})=V_{0}(\mathbf{r})+\gamma\mathbf{\sigma}\left[\frac{\partial V_{0}}{ \partial\mathbf{r}}\times\mathbf{p}\right] \tag{5}\] In this work we consider small impurities with \(V_{0}(\mathbf{r})=V_{I}\delta(\mathbf{r})\) where \(V_{I}\) stands for the magnitude of impurity potential. The scattering from the impurities is characterized by the matrix elements \(\hat{V}(\mathbf{k}_{1},\mathbf{k}_{2})=V_{\alpha\beta}(\mathbf{k}_{1}, \mathbf{k}_{2})=\langle\Psi_{\alpha}(\mathbf{k}_{1})|V(\mathbf{r})|\Psi_{ \beta}(\mathbf{k}_{2})\rangle\), where \(\Psi_{\alpha}(\mathbf{k}_{1})\) and \(\Psi_{\beta}(\mathbf{k}_{2})\) correspond to the electron states at the clean interface, as described by Eq. (2). In order to analyze the scattering process, it is useful to decompose the scattering elements into the two terms. \[\hat{V}(\mathbf{k}_{1},\mathbf{k}_{2})=\hat{V}^{(N)}(\mathbf{k}_{1},\mathbf{k }_{2})+\hat{V}^{(SO)}(\mathbf{k}_{1},\mathbf{k}_{2}) \tag{6a}\] \[V^{(N)}_{\alpha\beta}(\mathbf{k}_{1},\mathbf{k}_{2})=\frac{1}{2}\frac{V_{I}}{V} (1+\hat{r}^{+}(\mathbf{k}_{1}))(1+\hat{r}(\mathbf{k}_{2})) \tag{6b}\] Figure 2: (a) The four possibilities for the impurity scattering from state \(\mathbf{k}\) to the state \(\mathbf{k}^{\prime}\) in the presence of the interface. (b) the spin dependence of scattering amplitude. (c) asymmetric spin rotation angle. The results presented in panels (b) and (c) correspond to \(U_{0}/\varepsilon_{F}=4\), \(\gamma p_{F}^{2}/\hbar=0.2\) and to the geometry shown in panel (d). 
\[\hat{V}^{(SO)}(\mathbf{k}_{1},\mathbf{k}_{2})=\frac{iV_{I}\gamma}{2\hbar V}\left(\mathbf{\sigma}[\mathbf{p}_{1}^{(inc)}\times\mathbf{p}_{2}^{(inc)}]+\widehat{r}^{+}(\mathbf{k}_{1})\mathbf{\sigma}[\mathbf{p}_{1}^{(ref)}\times\mathbf{p}_{2}^{(inc)}]+\mathbf{\sigma}[\mathbf{p}_{1}^{(inc)}\times\mathbf{p}_{2}^{(ref)}]\widehat{r}(\mathbf{k}_{2})+\widehat{r}^{+}(\mathbf{k}_{1})\mathbf{\sigma}[\mathbf{p}_{1}^{(ref)}\times\mathbf{p}_{2}^{(ref)}]\widehat{r}(\mathbf{k}_{2})\right) \tag{6c}\] Here \(V_{\alpha\beta}^{(N)}(\mathbf{k}_{1},\mathbf{k}_{2})\) corresponds to the first term on the r.h.s. of Eq. (5) and \(V_{\alpha\beta}^{(SO)}(\mathbf{k}_{1},\mathbf{k}_{2})\) is related to the spin-orbit correction and to the second term on the r.h.s. of Eq. (5). \(\mathbf{p}^{(inc)}=\hbar(k_{x},k_{y},|k_{z}|)\) is the momentum of the incident electron and \(\mathbf{p}^{(ref)}=\hbar(k_{x},k_{y},-|k_{z}|)\) is the momentum of the reflected electron. \(\widehat{r}^{+}(\mathbf{k})\) is the Hermitian conjugate of the reflection amplitude described with Eq. (3). Eqs. (6b,6c) can be represented as the four different reflection possibilities shown in Fig. 2(a). The electron can undergo scattering from the impurity without interaction with the surface, or can be specularly reflected before or after the scattering, or both. The probability of scattering depends on the quantum interference between the different reflection possibilities, which can be constructive or destructive depending on the phase \(\phi_{0}\pm\Delta\phi\) and the relation between \(\hat{V}^{(N)}\) and \(\hat{V}^{(SO)}\). This probability varies for different spin projections, leading to skew scattering and spin separation at the surface. Fig. 2(b) shows the probability for electrons with an incident angle of \(\pi/4\) in the xz-plane to scatter into the yz-plane. The scattering rate depends on the spin projection onto the y-axis. The calculation details are presented in Appendix B. When spin-orbit interaction is present, electron scattering also results in spin rotation, leading to spin relaxation after multiple random scattering events [37]. Fig. 2(c) demonstrates that in our case this rotation becomes asymmetric, causing spin-up to rotate differently than spin-down. This asymmetric spin rotation produces a spin polarization of reflected electrons, even when the incident electrons are not spin-polarized, resulting in the interface spin-Hall effect. The geometry corresponding to Figs. 2(b,c) is shown in Fig. 2(d). ## III Spin current and spin polarization To understand the impact of the skew scattering and asymmetric spin rotation on the electrons in the bulk of the heavy metal, we introduce the Boltzmann equation \[\frac{\partial\hat{f}(\mathbf{r},\mathbf{p})}{\partial t}+\mathbf{v}\frac{\partial\hat{f}(\mathbf{r},\mathbf{p})}{\partial\mathbf{r}}+\mathbf{F}\frac{\partial\hat{f}(\mathbf{r},\mathbf{p})}{\partial\mathbf{p}}=I(\hat{f}(\mathbf{r},\mathbf{p})) \tag{7}\] Here the distribution function \(\hat{f}(\mathbf{r},\mathbf{p})\) is a \(2\times 2\) matrix in spin space that depends on the coordinate and momentum as usual. \(I(\hat{f}(\mathbf{r},\mathbf{p}))\) is the scattering operator. We consider the following ansatz for the distribution function: \(\hat{f}=\hat{f}_{0}+\hat{f}_{1}+\hat{f}_{2}\). Here \(\hat{f}_{0}\) is the equilibrium electron distribution. It is proportional to the unit matrix \(\hat{1}\) in the spin space.
\[\hat{f}_{1}=-\frac{j}{e}\frac{p_{x}}{n}\frac{\partial\hat{f}_{0}}{\partial\varepsilon} \tag{8}\] represents the electric current density \(j\), which is assumed to flow along the \(x\)-axis. \(\hat{f}_{2}\) describes the spin polarization. We assume \(\hat{f}_{2}\ll\hat{f}_{1}\ll\hat{f}_{0}\), which corresponds to a relatively small spin polarization due to the skew scattering. In this case it is possible to neglect \(f_{2}\) for the incident electrons, because it would lead only to a small correction for the \(f_{2}\) of scattered electrons (which is responsible for the kinetic Rashba-Edelstein and interface spin-Hall effects). The skew scattering should be introduced into the Boltzmann equation as a boundary condition. It is derived in Appendix C and reads \[\hat{f}_{2}(\mathbf{p},z=0)=\frac{mV}{S|p_{z}|}\hat{r}(\mathbf{p})\left(\int\frac{Vd\mathbf{p}^{\prime}}{(2\pi\hbar)^{3}}\widehat{\mathcal{W}}\left(\frac{\mathbf{p}}{\hbar},\frac{\mathbf{p}^{\prime}}{\hbar}\right)\left(f_{1}(\mathbf{p}^{\prime})-f_{1}(\mathbf{p})\right)\right)\hat{r}^{+}(\mathbf{p}) \tag{9}\] Here \[\widehat{\mathcal{W}}(\mathbf{k},\mathbf{k}^{\prime})=\frac{2\pi}{\hbar}N_{I}\widehat{V}(\mathbf{k},\mathbf{k}^{\prime})\widehat{V}(\mathbf{k}^{\prime},\mathbf{k})\delta(\varepsilon_{k}-\varepsilon_{k^{\prime}}) \tag{10}\] \(N_{I}\) is the total number of impurities. Eqs. (C3,C4) show that \(f_{2}\) is proportional to the two-dimensional impurity concentration \(N_{I}/S\), which controls the probability \(P_{nsp}\) that an incident electron with Fermi energy is reflected non-specularly. This probability, averaged over incident electron momenta, can be expressed as follows: \[P_{nsp}=\frac{4\pi^{2}\hbar^{3}}{Sp_{F}^{2}}\mathcal{I}_{N} \tag{11a}\] \[\mathcal{I}_{N}=\int\int\frac{V^{2}d\mathbf{p}d\mathbf{p}^{\prime}}{(2\pi\hbar)^{6}}\mathrm{Tr}\widehat{\mathcal{W}}\left(\frac{\mathbf{p}}{\hbar},\frac{\mathbf{p}^{\prime}}{\hbar}\right)\delta(\varepsilon-\varepsilon_{F}) \tag{11b}\] In our model the macroscopic symmetry in the \(xy\)-plane is not broken, and the only possible spin current density \(j_{s}\) in the \(z\) direction describes the flow of y-polarized spins. It is conventionally expressed with the interface spin-Hall angle \[\tan\theta_{sh}=\frac{j_{s}}{j/e}=\frac{3}{4}P_{nsp}\frac{\mathcal{I}_{sh}}{\mathcal{I}_{N}} \tag{12a}\] \[\mathcal{I}_{sh}=\int\frac{V^{2}d\mathbf{p}d\mathbf{p}^{\prime}}{(2\pi\hbar)^{6}}\left(\frac{p_{x}^{\prime}}{p_{F}}-\frac{p_{x}}{p_{F}}\right)\delta(\varepsilon-\varepsilon_{F})\,\mathrm{Tr}\,\hat{\sigma}_{y}\left[\hat{r}(\mathbf{p})\widehat{\mathcal{W}}\left(\frac{\mathbf{p}}{\hbar},\frac{\mathbf{p}^{\prime}}{\hbar}\right)\hat{r}^{+}(\mathbf{p})\right] \tag{12b}\] The interface spin-Hall angle calculated with Eq. (12) is shown in Fig. 3 (a) as a function of spin-orbit interaction for different barrier heights \(U_{0}\). Interestingly, it is not an odd function of \(\gamma\). This is related to the electron momentum \(\mathbf{p}\) being an operator that does not commute with the impurity potential \(V_{0}(\mathbf{r})\). Fig. 3 (b) shows the \(\theta_{sh}(\gamma)\) dependence on a double logarithmic scale for \(U_{0}=10\varepsilon_{F}\) and small positive \(\gamma\). The spin-Hall angle is a high-order function of \(\gamma\), with \(\theta_{sh}\propto\gamma^{3}\) at very small \(\gamma\), becoming steeper with the increase of \(\gamma\), up to \(\theta_{sh}\propto\gamma^{6}\). As shown in Fig.
3 (a) there is a maximum in this dependence corresponding to \(\gamma p_{F}^{2}/\hbar\sim\sqrt{\varepsilon_{F}/U_{0}}\). The physical mechanism of the interface spin-Hall effect consists of two parts. The first one is the skew scattering shown in Fig. 2(b). However, the skew scattering alone is insufficient to generate the spin current if spin is conserved during the scattering. Therefore, the interface spin-Hall effect also requires the asymmetric spin rotation shown in Fig. 2(c), which leads to a "spin relaxation" that depends on the spin projection. The spin relaxation is a second-order-in-\(\gamma\) phenomenon [37], meaning that the interface spin-Hall effect is absent in the first-order approximation in \(\gamma\). To show that the spin separation itself is a first-order effect, we examine the spin accumulation near the interface. We solve the Boltzmann equation using the minimal model, which assumes that the scattering operator in the bulk can be described by a single relaxation time \(\tau\): \(I(\hat{f})=(\hat{f}_{0}-\hat{f})/\tau\). By applying this assumption to Eq. (7), we obtain the solution \[\hat{f}_{2}(\mathbf{p},z)=\hat{f}_{2}(\mathbf{p},0)\exp\left(-\frac{zp_{F}}{p_{z}l_{free}}\right) \tag{13}\] Here \(l_{free}=v_{F}\tau\) is the mean free path. The spin polarization calculated from Eq. (13) reads \[m_{y}(z)=\frac{3}{4}\frac{j}{ev_{F}}P_{nsp}\frac{\mathcal{I}_{s}(z)}{\mathcal{I}_{N}} \tag{14a}\] \[\mathcal{I}_{s}(z)=\int\frac{V^{2}d\mathbf{p}d\mathbf{p}^{\prime}}{(2\pi\hbar)^{6}}\,\text{Tr}\,\sigma_{y}\hat{r}(\mathbf{p})\widehat{\mathcal{W}}\left(\frac{\mathbf{p}}{\hbar},\frac{\mathbf{p}^{\prime}}{\hbar}\right)\hat{r}^{+}(\mathbf{p})\left(\frac{p_{x}^{\prime}}{|p_{z}|}-\frac{p_{x}}{|p_{z}|}\right)\exp\left(-\frac{zp_{F}}{l_{free}|p_{z}|}\right)\delta(\varepsilon-\varepsilon_{F}) \tag{14b}\] Fig. 4 shows the spin polarization \(m_{y}\) calculated for \(U_{0}=10\varepsilon_{F}\). Similarly to the interface spin-Hall effect, the polarization obtained from the Boltzmann equation appears only due to the possibility of non-specular reflection and is normalized by \(P_{nsp}\). Fig. 4(a) and (b) show the \(m_{y}\) distribution over \(z\) for \(\gamma p_{F}^{2}/\hbar=\pm 0.05\) and \(\gamma p_{F}^{2}/\hbar=\pm 0.2\), respectively. For small values of \(\gamma\), \(m_{y}\) changes its sign as a function of \(z\), indicating the difference in the average reflection angle for different spin projections. This change of the sign disappears at larger values of \(\gamma\), when the interface spin-Hall effect becomes sufficiently strong to dominate the spin accumulation. Phenomenologically, the calculated spin accumulation is the Rashba-Edelstein effect because it is confined near the interface. Figure 3: (a) Interface Spin-Hall angle as a function of spin-orbit interaction for various ratios of the interface potential to the Fermi energy, \(U_{0}/\varepsilon_{F}\), as indicated in the legend. (b) The \(\theta_{sh}(\gamma)\) dependence for \(U_{0}/\varepsilon_{F}=10\) on a double logarithmic scale. The blue dashed line and the black dashed-dotted line correspond to \(\theta_{sh}\propto\gamma^{6}\) and \(\theta_{sh}\propto\gamma^{3}\), respectively. Figure 4: Spin accumulation \(m_{y}\) due to the kinetic Rashba-Edelstein effect calculated for \(U_{0}=10\varepsilon_{F}\). Panels (a) and (b) show the dependence \(m_{y}(z)\) for different values of \(\gamma\). Panel (c) shows the dependence of the accumulated spin on the spin-orbit interaction parameter \(\gamma\) for three different values of \(z/l_{free}\).
However, unlike the conventional Rashba-Edelstein effect at clean interfaces studied in [19; 20], the polarization spans up to the mean free path \(l_{free}\). In the case where the sample only contains impurities at the surface and the bulk is clean, \(l_{free}\) can be arbitrarily large. Thus, the resulting spin polarization can be referred to as the kinetic Rashba-Edelstein effect. ## IV Discussion The interface spin-Hall and kinetic Rashba-Edelstein effects are closely related to the interference of different reflected waves shown in Fig. 2 (a). Both effects should disappear if this interference is somehow suppressed. For example, one can consider bulk impurities near the interface as a possible source of the interface spin-Hall effect. However, in this case the phase of the reflection amplitude \(\hat{r}\) would be modified by the random distance between the impurity and the surface. If this distance exceeds \(\hbar/p_{F}\), the interference and the spin generation would be suppressed. Nevertheless, if the impurities are located precisely at the interface and are small compared to \(\hbar/p_{F}\), their potential \(V_{I}\) is not significant: it is absorbed into the non-specular reflection probability \(P_{nsp}\) in our final expressions. This is a consequence of the calculations performed within the Born approximation. We presume that the introduction of complex impurities with large size and high potential energy can significantly modify the spin accumulation and spin current. The existence of the interface spin-Hall effect in the Born approximation makes it fundamentally different from the ordinary spin-Hall effect, which appears only as a higher-order correction in the impurity potential \(V_{I}\) and is suppressed if the disorder is Gaussian. Although there are some predicted mechanisms for its existence in Gaussian disorder, namely the combined scattering from impurities and phonons [41] and the scattering at close impurity complexes [41; 42], all these mechanisms require high orders of perturbation theory. The interface spin-Hall effect is possible in the Born approximation because of the large interface potential energy, which is present already at zeroth order. The unusual properties of the kinetic Rashba-Edelstein effect and interface spin-Hall effect suggest that in some cases the predicted interface phenomena should dominate the spin accumulation and spin current. This happens when the thickness \(d\) of the sample is intermediate, \(\hbar/p_{F}\ll d\lesssim l_{free}\), and the bulk impurities that control the conductivity have a small potential energy \(V_{0}\ll\varepsilon_{F}\). If these conditions are met, the control of interface properties is important to optimize the spin torque. The effect of interface roughness is sometimes reported in experiments [17; 43]; however, to the best of our knowledge, it has not been studied systematically. We predict that the perfect interface is not necessarily the clean one. Controlled disorder can serve as a source of both spin polarization and spin current. It allows one to control the functionality of spin-torque devices by surface engineering, i.e., by manipulating the surface impurity concentration and type. Furthermore, the sensitivity of the interface spin-Hall effect to the interface potential \(U_{0}\) provides a means of controlling the spin torque with gating.
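As a rough orientation for the intermediate-thickness condition above, the window can be quantified with assumed, typical heavy-metal numbers (a Fermi wavevector of order one inverse angstrom and a low-temperature mean free path of order ten nanometers); these values are illustrative only and are not taken from this work.

```python
# Rough estimate of the window hbar/p_F << d <~ l_free where the predicted
# interface effects can dominate; material numbers below are assumed/illustrative.
k_F = 1.0e10      # 1/m, assumed Fermi wavevector (~1 per angstrom)
l_free = 10e-9    # m, assumed bulk mean free path at low temperature

lower_bound = 1.0 / k_F   # hbar/p_F = 1/k_F
print(f"{lower_bound*1e9:.1f} nm  <<  d  <~  {l_free*1e9:.0f} nm")
```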
In conclusion, our study demonstrates that the impurities at the metal-insulator interface significantly increase the variety of surface spin effects. The skew scattering from these impurities induces spin accumulation that extends up to the mean free path. When combined with spin relaxation, it gives rise to the interface spin Hall effect, which converts charge currents to spin currents at the metal-insulator boundary. ## V Acknowledgements We have received funding from the European Community's H2020 Programme under Grant Agreement INTERFAST (H2020-FET-OPEN-965046), and from the Slovenian Research Agency Program No.P1-0040. ## Appendix A spin polarization near the clean interface Our starting point is the Hamiltonian described by Eq. (1). We are interested in its solutions when electron energy \(\varepsilon\) is below the barrier \(U_{0}\). We start with rotation in the spin space with the matrix: \[U_{\sigma}=\begin{pmatrix}\tilde{k}_{\perp}&-\tilde{k}_{\perp}\\ 1&1\end{pmatrix} \tag{10}\] here \(\tilde{k}_{\perp}=(ik_{x}+k_{y})/k_{\perp}\), \(k_{\perp}=\sqrt{k_{x}^{2}+k_{y}^{2}}\). After canonical transformation \(U_{\sigma}^{-1}HU_{\sigma}\) the Hamiltonian becomes diagonal: \[H=-\frac{\hbar^{2}}{2m}\frac{d^{2}}{dz^{2}}+\frac{\hbar^{2}k_{\perp}^{2}}{2m} +U_{0}\theta(z)\mp\gamma U_{0}\delta(z)k_{\perp} \tag{11}\] Using Eq.(11) we find the eigenfunctions: \[\psi_{\pm}(\mathbf{r}_{\perp},z)=\begin{cases}A_{\pm}\begin{pmatrix}\pm\tilde {k}_{\perp}\\ 1\end{pmatrix}\sin\left(k_{\pm}z-\phi_{\pm}\right)\exp\left(i\mathbf{k}_{\perp }\mathbf{r}_{\perp}\right),z<0\\ -A_{\pm}\begin{pmatrix}\pm\tilde{k}_{\perp}\\ 1\end{pmatrix}\sin\left(\phi_{\pm}\right)\exp\left(i\mathbf{k}_{\perp}\mathbf{ r}_{\perp}\right)\exp\left(-\kappa z\right),z>0\end{cases} \tag{12}\] Here \(A_{\pm}=\sqrt{\frac{\kappa_{k}}{\kappa_{\perp}L+1}}\), \(\kappa=\sqrt{\frac{2m(U_{0}-\epsilon_{\pm})}{\hbar^{2}}}\), \(\kappa_{\pm}=\kappa\mp\frac{2m\gamma U_{0}k_{\perp}}{\hbar^{2}}\), \(\epsilon_{\pm}=\frac{\hbar^{2}k_{\perp}^{2}}{2m}\), \(L\) is the size of the sample in z direction that corresponds to the thickness of heavy metal layer. \(k_{\pm}\) is electron wavevector in \(z\)-direction that will become spin-dependent after effects of finite \(L\) will be taken into account. The phase shift \(\phi_{\pm}\) is determined by the equation \(\tan\phi_{\pm}=k_{\pm}/\kappa_{\pm}\). The energy is \[\varepsilon_{\pm}=\frac{\hbar^{2}k_{\pm}^{2}}{2m}+\frac{\hbar^{2}k_{\perp}^{2}} {2m} \tag{10}\] To describe the boundary conditions for Boltzmann equation it is enough to consider the infinite sample \(L\rightarrow\infty\). In this case \(k_{+}=k_{-}=k_{z}\), \(\varepsilon_{+}=\varepsilon_{-}=\varepsilon_{k}\) and the solution of Schrodinger equation exists for an arbitrary incident wave. \[\begin{pmatrix}a\\ b\end{pmatrix}\exp\left(ik_{z}z\right)\exp\left(i\mathbf{k}_{\perp}\mathbf{r}_ {\perp}\right) \tag{11}\] Here \((a,b)\) is an arbitrary spinor. According to Eq. (10) its reflected wave is described as follows \[\hat{r}(k)\begin{pmatrix}a\\ b\end{pmatrix}\exp\left(-ik_{z}z\right)\exp\left(i\mathbf{k}_{\perp}\mathbf{r}_ {\perp}\right) \tag{12}\] Here \[\hat{r}(k)=\exp\left(i\pi+2i\phi_{0}\right)\times\\ (\cos\left(\Delta\phi\right)\hat{\sigma}_{0}+i\sin\left(\Delta \phi\right)\hat{\sigma}_{k}) \tag{13}\] \[\hat{\sigma}_{k}=\frac{k_{y}}{|k_{\perp}|}\hat{\sigma}_{x}-\frac{k_{x}}{|k_{ \perp}|}\hat{\sigma}_{y} \tag{14}\] Interestingly, to account for Rashba-Edelstein effect at the clean interface, it is important to consider \(L\) to be finite. 
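Before turning to the finite-\(L\) geometry below, the spin-dependent reflection amplitude \(\hat{r}(\mathbf{k})\) derived here (Eq. (3) of the main text) can be checked numerically. The following minimal sketch evaluates it in dimensionless units (\(\hbar=m=k_{F}=1\), so \(\varepsilon_{F}=1/2\)) with \(U_{0}=4\varepsilon_{F}\) and \(\gamma p_{F}^{2}/\hbar=0.2\), mirroring the parameters of Fig. 2; the particular wavevector is an arbitrary illustrative choice, and the check only verifies unitarity and spin mixing, not any quantity reported in the figures.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def reflection_matrix(kx, ky, kz, U0, gamma):
    """Spin-dependent reflection amplitude of Eq. (3), in units with hbar = m = 1."""
    kperp = np.hypot(kx, ky)
    kappa = np.sqrt(2.0 * U0 - kz**2)                        # decay constant under the barrier
    phi_p = np.arctan(kz / (kappa - 2.0 * gamma * U0 * kperp))
    phi_m = np.arctan(kz / (kappa + 2.0 * gamma * U0 * kperp))
    phi0, dphi = 0.5 * (phi_p + phi_m), phi_p - phi_m
    sigma_k = (ky * sx - kx * sy) / kperp                    # Eq. (4)
    return -np.exp(2j * phi0) * (np.cos(dphi) * I2 + 1j * np.sin(dphi) * sigma_k)

# U0 = 4*eps_F = 2 and gamma*p_F^2 = 0.2 with hbar = m = k_F = 1 (illustrative wavevector)
r = reflection_matrix(kx=0.5, ky=0.3, kz=0.7, U0=2.0, gamma=0.2)
print("unitary:", np.allclose(r.conj().T @ r, I2))   # expect True
print("spin-mixing amplitude |r_12| =", abs(r[0, 1]))  # nonzero only when gamma != 0
```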
Here we assume the boundary conditions at \(z=-L\): \(d\psi(\mathbf{r}_{\perp},z)/dz|_{z=-L}=0\). This leads to the following equations for \(k_{\pm}\) and \(\phi_{\pm}\): \[k_{\pm}=\frac{\pi}{2L}(2n+1)-\frac{\phi_{\pm}}{L},\\ \phi_{\pm}=\arctan\bigg{[}\frac{(n+1/2)\pi-\phi_{\pm}}{L\kappa_{\pm}}\bigg{]}, \tag{15}\] where \(n\) is an arbitrary integer. Eqs. (10,15) show that when \(L\) is finite the discrete levels for spin up and down are different, which will later lead to the current-induced spin polarization. To account for an in-plane electric current we introduce the distribution function of the electrons \(f(\mathbf{k})=f_{0}(\varepsilon-v\hbar k_{x})\). Here \(f_{0}\) is the Fermi function and \(v\) is the drift velocity, which is assumed to be along the \(x\)-direction. The current density \(j_{x}(z)\) is defined as follows: \[j_{x}(z)=\frac{e}{L}\sum_{n}\int\frac{d\mathbf{k}_{\perp}}{(2\pi)^{2}}\frac{2\hbar k_{x}}{m}\times\\ \big{(}\sin\left(k_{+}z-\phi_{+}\right)^{2}f_{0}(\varepsilon_{+}-\hbar vk_{x})\\ +\sin\left(k_{-}z-\phi_{-}\right)^{2}f_{0}(\varepsilon_{-}-\hbar vk_{x})\big{)} \tag{16}\] Note that although the distribution \(f(\mathbf{k})\) does not depend on spin, the energies \(\varepsilon_{\pm}\) do. Eq. (16) can be simplified by expanding the Fermi functions in small \(vk_{x}\) and integrating over the angle of \(\mathbf{k}_{\perp}\). \[j_{x}(z)=\frac{ev\hbar^{2}}{8\pi mT}\sum_{n}\int dk_{\perp}k_{\perp}^{3}\times\\ \left[\frac{\sin^{2}(k_{+}z-\phi_{+})}{\cosh^{2}\left(\frac{\varepsilon_{+}-\mu}{2T}\right)}+\frac{\sin^{2}(k_{-}z-\phi_{-})}{\cosh^{2}\left(\frac{\varepsilon_{-}-\mu}{2T}\right)}\right] \tag{17}\] Here \(T\) is the temperature and \(\mu\) is the chemical potential. Eq. (17) describes the distribution of current density near the interface. To describe the spin polarization we take into account that \[(\pm\tilde{k}_{\perp}^{*},1)\sigma_{y}\begin{pmatrix}\pm\tilde{k}_{\perp}\\ 1\end{pmatrix}=\mp 2k_{x}/k_{\perp} \tag{18}\] This allows us to derive the expression for the distribution of spin polarization \(m_{y}(z)\) \[m_{y}(z)=\frac{1}{L}\sum_{n}\int\frac{d\mathbf{k}_{\perp}}{(2\pi)^{2}}\frac{2k_{x}}{k_{\perp}}\times\\ \Big{[}\sin\left(k_{+}z-\phi_{+}\right)^{2}f_{0}(\varepsilon_{+}-\hbar vk_{x})\\ -\sin\left(k_{-}z-\phi_{-}\right)^{2}f_{0}(\varepsilon_{-}-\hbar vk_{x})\Big{]} \tag{19}\] After expanding it over small \(vk_{x}\) and performing the angle integration we obtain: \[m_{y}(z)=\frac{v\hbar}{8\pi TL}\sum_{n}\int dk_{\perp}k_{\perp}^{2}\times\\ \left[\frac{\sin^{2}\left(k_{+}z-\phi_{+}\right)}{\cosh^{2}\left(\frac{\varepsilon_{+}-\mu}{2T}\right)}-\frac{\sin^{2}\left(k_{-}z-\phi_{-}\right)}{\cosh^{2}\left(\frac{\varepsilon_{-}-\mu}{2T}\right)}\right] \tag{20}\] This formula describes the spin accumulation near the interface when a current flows parallel to the surface. Fig. 5 shows the calculated current density and the accumulated spin density as a function of \(z\) (Rashba-Edelstein effect). The current density is normalized to \(j_{x,0}=evk_{F}^{3}/3\pi^{2}\), which is the current density far from the interface. Fig. 5 shows that in a clean sample the spin polarization exists only at distances \(\sim\hbar/p_{F}\) from the surface, where the current is modified by quantum effects. Note also that the wavefunctions (10) do not include a density flux, \(\text{Im}\psi^{*}_{\pm}\nabla\psi_{\pm}=0\), and the spin density (12) is not accompanied by spin current. Figure 5: The distribution of the current density near the surface (a) and the spin density as a function of \(z\) (b).
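The spin splitting of the discrete levels implied by the transcendental equations for \(k_{\pm}\) and \(\phi_{\pm}\) above can be made concrete with a small fixed-point iteration. In the sketch below the values of \(L\) and \(\kappa_{\pm}\) are arbitrary illustrative inputs in dimensionless units (\(\hbar=m=1\)), not parameters of this work; the point is only that unequal \(\kappa_{+}\) and \(\kappa_{-}\) produce spin-split quantized wavevectors.

```python
import numpy as np

def level(n, L, kappa, n_iter=100):
    """Fixed-point solution of phi = arctan(((n+1/2)*pi - phi)/(L*kappa)),
    followed by k = ((2n+1)*pi/2 - phi)/L."""
    phi = 0.0
    for _ in range(n_iter):
        phi = np.arctan(((n + 0.5) * np.pi - phi) / (L * kappa))
    return ((2 * n + 1) * np.pi / 2 - phi) / L

# Illustrative inputs: kappa_+ and kappa_- differ through the spin-orbit term,
# so the quantized k_+ and k_- (and hence the discrete level energies) are spin split.
L, kappa_plus, kappa_minus = 30.0, 1.9, 2.1
for n in range(3):
    k_up, k_dn = level(n, L, kappa_plus), level(n, L, kappa_minus)
    print(f"n={n}: k+ = {k_up:.6f}, k- = {k_dn:.6f}, splitting = {k_up - k_dn:+.2e}")
```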
## Appendix B spin-dependent scattering at the interface impurities In the general case, impurity scattering requires density matrix formalism for its description. The usual procedure for such a description starts with the density matrix \(\hat{\rho}(\mathbf{k})\) diagonal in the space of wavefunctions \(\Psi_{\mathbf{k}}\) and considers non-diagonal terms \(\hat{\rho}(\mathbf{k},\mathbf{k}^{\prime})\) as a small perturbation. However, the terms \(\hat{\rho}(\mathbf{k})\) diagonal in the space of \(\Psi_{\mathbf{k}}\) are still \(2\times 2\) matrices in spin space. The master equation for the density matrix reads \[\frac{\partial\hat{\rho}}{\partial t}=\frac{i}{\hbar}(\hat{\rho}\hat{V}-\hat{V} \hat{\rho}). \tag{13}\] Here \(V\) is the impurity energy that includes both potential energy and spin-orbital correction (6). We assume the slow variance of diagonal matrix elements compared to the oscillation frequencies \((\varepsilon_{\mathbf{k}}-\varepsilon_{\mathbf{k}^{\prime}})/\hbar\). It allows us to derive expression for the perturbation \(\hat{\rho}(\mathbf{k},\mathbf{k}^{\prime})\) from Eq. (13) \[\hat{\rho}(\mathbf{k},\mathbf{k}^{\prime})=\frac{\exp\left(i\frac{\varepsilon _{\mathbf{k}}-\varepsilon_{\mathbf{k}^{\prime}}}{\hbar}t\right)}{\varepsilon _{\mathbf{k}}-\varepsilon_{\mathbf{k}^{\prime}}-i\delta}\times\left(\hat{\rho }(\mathbf{k})\hat{V}(\mathbf{kk}^{\prime})-\hat{V}(\mathbf{kk}^{\prime})\hat {\rho}(\mathbf{k}^{\prime})\right) \tag{14}\] This equation should be substituted back to Eq. (13) where we now keep only the non-oscillating terms \[\frac{\partial\hat{\rho}(\mathbf{k})}{\partial t}=\sum_{\mathbf{ k}^{\prime}}\frac{i/\hbar}{\varepsilon_{\mathbf{k}^{\prime}}-\varepsilon_{ \mathbf{k}}-i\delta}\times\left(\hat{V}(\mathbf{kk}^{\prime})\hat{V}(\mathbf{ k}^{\prime}\mathbf{k})\hat{\rho}(\mathbf{k})-\hat{V}(\mathbf{kk}^{\prime})\hat{ \rho}(\mathbf{k}^{\prime})\hat{V}(\mathbf{k}^{\prime}\mathbf{k})\right)-\\ \frac{i/\hbar}{\varepsilon_{\mathbf{k}}-\varepsilon_{\mathbf{k}^ {\prime}}-i\delta}\times\left(\hat{V}(\mathbf{kk}^{\prime})\hat{\rho}( \mathbf{k}^{\prime})\hat{V}(\mathbf{k}^{\prime}\mathbf{k})-\hat{\rho}( \mathbf{k})\hat{V}(\mathbf{kk}^{\prime})\hat{V}(\mathbf{k}^{\prime}\mathbf{k} )\right) \tag{15}\] Here the real part of \(1/(\varepsilon_{\mathbf{k}^{\prime}}-\varepsilon_{\mathbf{k}}-i\delta)\) corresponds to the small modification of electron states due to impurity potential that can be neglected. Imaginary part leads to actual transitions between states. 
They can be described with the equations \[\frac{\partial\rho_{ij}(\mathbf{k})}{\partial t}=\int\frac{Vd\mathbf{k}^{\prime}}{(2\pi)^{3}}\left(W^{(in)}_{ij,lm}(\mathbf{k},\mathbf{k}^{\prime})\rho_{lm}(\mathbf{k}^{\prime})-W^{(out)}_{ij,lm}(\mathbf{k},\mathbf{k}^{\prime})\rho_{lm}(\mathbf{k})\right) \tag{16}\] \[W^{(in)}_{ij,lm}(\mathbf{k},\mathbf{k}^{\prime})\rho_{lm}(\mathbf{k}^{\prime})=\frac{2\pi}{\hbar}N_{I}\times V_{il}(\mathbf{k},\mathbf{k}^{\prime})\rho_{lm}(\mathbf{k}^{\prime})V_{mj}(\mathbf{k}^{\prime},\mathbf{k})\delta(\varepsilon_{k}-\varepsilon_{k^{\prime}}) \tag{17}\] \[W^{(out)}_{ij,lm}(\mathbf{k},\mathbf{k}^{\prime})\rho_{lm}(\mathbf{k})=\frac{\pi}{\hbar}N_{I}\times\Big{[}V_{in}(\mathbf{k},\mathbf{k}^{\prime})V_{nl}(\mathbf{k}^{\prime},\mathbf{k})\rho_{lm}(\mathbf{k})\delta_{jm}+\rho_{lm}(\mathbf{k})V_{mn}(\mathbf{k},\mathbf{k}^{\prime})V_{nj}(\mathbf{k}^{\prime},\mathbf{k})\delta_{il}\Big{]}\delta(\varepsilon_{k}-\varepsilon_{k^{\prime}}) \tag{18}\] Here \(N_{I}\) is the total number of impurities. \(W^{(in)}_{ij,lm}(\mathbf{k},\mathbf{k}^{\prime})\) describes electron scattering from the state \(\mathbf{k}^{\prime}\) to the state \(\mathbf{k}\). Because both the initial and the final states of the electrons are described with a \(2\times 2\) density matrix, \(W^{(in)}_{ij,lm}(\mathbf{k},\mathbf{k}^{\prime})\) is a four-dimensional \(2\times 2\times 2\times 2\) matrix. \(W^{(out)}_{ij,lm}(\mathbf{k},\mathbf{k}^{\prime})\) is the out-scattering term that stands for the backward transition from \(\mathbf{k}\) to \(\mathbf{k}^{\prime}\). It is different from \(W^{(in)}_{ij,lm}(\mathbf{k},\mathbf{k}^{\prime})\) because the spin is not conserved during the scattering. For a given density matrix \(\hat{\rho}(\mathbf{k}^{\prime})\) of incident electrons, \[W^{(in)}_{ij,lm}(\mathbf{k},\mathbf{k}^{\prime})\rho_{lm}(\mathbf{k}^{\prime}) \tag{10}\] represents the properties of the scattered ones. In particular, the results shown in Fig. 2(b,c) are calculated with Eq. (10), where \(\hat{\rho}(\mathbf{k}^{\prime})=(\hat{1}\pm\sigma_{y})/2\) and \(\mathbf{k}^{\prime}=(p_{F}/\sqrt{2}\hbar)(1,0,1)\). The scattering probabilities correspond to \(\mathrm{Tr}\widehat{W}^{(in)}(\mathbf{k},\mathbf{k}^{\prime})\hat{\rho}(\mathbf{k}^{\prime})=W^{(in)}_{ii,lm}(\mathbf{k},\mathbf{k}^{\prime})\rho_{lm}(\mathbf{k}^{\prime})\) and the spin polarization vector \(\mathbf{s}\) to \(\mathrm{Tr}\boldsymbol{\sigma}r(\mathbf{k})\widehat{W}^{(in)}(\mathbf{k},\mathbf{k}^{\prime})\hat{\rho}(\mathbf{k}^{\prime})r^{+}(\mathbf{k})=r^{+}_{ii_{1}}(\mathbf{k})\sigma_{i_{1}i_{2}}r_{i_{2}j}(\mathbf{k})W^{(in)}_{ji,lm}(\mathbf{k},\mathbf{k}^{\prime})\rho_{lm}(\mathbf{k}^{\prime})\). ## Appendix C Boundary conditions for the Boltzmann equation The approach (11) is based on the wavefunctions \(\Psi_{\alpha}(\mathbf{k})\) defined in the main text. They correspond to the coherence of incident and specularly reflected electrons. However, in the bulk of the film this coherence is lost due to the scattering at the bulk impurities. The Boltzmann equation approach is based on the distribution function \(\hat{f}(\mathbf{p})\) that neglects such a coherence. To relate the two approaches we consider the spin polarization generated per unit time.
Because the kinetic equation is linear we can decouple it into \(G_{\alpha}(\mathbf{p})d\mathbf{p}\) - the spin polarization in the \(\alpha\)-direction related to the reflected electrons with the momentum \(\mathbf{p}\). In terms of Boltzmann equation it is expressed as follows \[G_{\alpha}(\mathbf{p})d\mathbf{p}=\frac{d\mathbf{p}}{(2\pi\hbar)^{3}}\mathrm{ Tr}\hat{\sigma}_{\alpha}\hat{f}_{2}(\mathbf{p})\frac{|p_{z}|}{m}S \tag{12}\] Here \(\hat{f}_{2}(\mathbf{p})\) is taken at the interface and is assumed not to depend on the exact point of the interface. \(S\) is the interface area. Note that \(p_{z}\) is negative for the reflected electrons. In terms of Eq. (11) \(G_{\alpha}(\mathbf{p})d\mathbf{p}\) is equal to \[G_{\alpha}(\mathbf{k})d\mathbf{k}=\frac{Vd\mathbf{k}}{(2\pi)^{3}}\mathrm{Tr} \hat{r}^{+}_{\mathbf{k}}\hat{\sigma}_{\alpha}\hat{r}_{\mathbf{k}}\times\int \frac{Vd\mathbf{k}^{\prime}}{(2\pi)^{3}}\left(W^{(in)}_{ij,lm}(\mathbf{k}, \mathbf{k}^{\prime})\rho_{lm}(\mathbf{k}^{\prime})-\right.\left.W^{(out)}_{ij,lm}(\mathbf{k},\mathbf{k}^{\prime})\rho_{lm}(\mathbf{k})\right) \tag{13}\] When \(\hat{f}_{2}\ll\hat{f}_{1}\ll\hat{f}_{0}\) one can substitute \(\hat{\rho}(\mathbf{k})\) with \(\hat{f}_{0}(\hbar k_{x},\hbar k_{y},\hbar|k_{z}|)+\hat{f}_{1}(\hbar k_{x}, \hbar k_{y},\hbar|k_{z}|)\) in the r.h.s. of Eq. (13). However, \(\hat{f}_{0}\) does not lead to spin polarization and can be dropped. It allows us to derive the following boundary condition for \(\hat{f}_{2}\) at \(z=0\). \[\hat{f}_{2}(\mathbf{p})=\frac{mV}{S|p_{z}|}\hat{r}(\mathbf{p})\left(\int\frac{ Vd\mathbf{p}^{\prime}}{(2\pi\hbar)^{3}}\times\right.\left.\widehat{\mathcal{W}} \left(\frac{\mathbf{p}}{\hbar},\frac{\mathbf{p}^{\prime}}{\hbar}\right)\left(f_ {1}(\mathbf{p}^{\prime})-f_{1}(\mathbf{p})\right)\right)\hat{r}(\mathbf{p})^ {+} \tag{14}\] Here \[\mathcal{W}_{ij}=W^{(in)}_{ij,ll}=W^{(out)}_{ij,ll}, \tag{15}\] and we took into account that \(\hat{f}_{1}=f_{1}\hat{1}\).
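As a closing consistency check on the in- and out-scattering terms of Appendix B: for any \(2\times 2\) matrix element \(\hat{V}(\mathbf{k},\mathbf{k}^{\prime})\) with \(\hat{V}(\mathbf{k}^{\prime},\mathbf{k})=\hat{V}^{+}(\mathbf{k},\mathbf{k}^{\prime})\), the trace gained at \(\mathbf{k}\) equals the trace lost at \(\mathbf{k}^{\prime}\), so probability is conserved even though spin is not. The sketch below demonstrates this algebraic structure with a random matrix, purely for illustration; it is not tied to the specific impurity potential of this work.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_density_matrix():
    a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

V = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # stands for V(k, k'); V(k', k) = V^+
rho_kp = random_density_matrix()                              # incident-electron density matrix at k'

gain_at_k = V @ rho_kp @ V.conj().T                                       # in-scattering structure
loss_at_kp = 0.5 * (V.conj().T @ V @ rho_kp + rho_kp @ V.conj().T @ V)    # out-scattering structure

print("trace conserved:", np.isclose(np.trace(gain_at_k).real,
                                     np.trace(loss_at_kp).real))          # expect True
print("spin generally rotated:", not np.allclose(gain_at_k / np.trace(gain_at_k), rho_kp))
```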
2307.14447
Tunable Magnon-Photon Coupling by Magnon Band Gap in a Layered Hybrid Perovskite Antiferromagnet
Tunability of coherent coupling between fundamental excitations is an important prerequisite for expanding their functionality in hybrid quantum systems. In hybrid magnonics, the dipolar interaction between magnon and photon usually persists and cannot be switched off. Here, we demonstrate this capability by coupling a superconducting resonator to a layered hybrid perovskite antiferromagnet, which exhibits a magnon band gap due to its intrinsic Dzyaloshinskii-Moriya interaction. The pronounced temperature sensitivity of the magnon band gap location allows us to set the photon mode within the gap and to disable magnon-photon hybridization. When the resonator mode falls into the magnon band gap, the resonator damping rate increases due to the nonzero coupling to the detuned magnon mode. This phenomenon can be used to quantify the magnon band gap using an analytical model. Our work brings new opportunities in controlling coherent information processing with quantum properties in complex magnetic materials.
Yi Li, Timothy Draher, Andrew H. Comstock, Yuzan Xiong, Md Azimul Haque, Elham Easy, Jiang-Chao Qian, Tomas Polakovic, John E. Pearson, Ralu Divan, Jian-Min Zuo, Xian Zhang, Ulrich Welp, Wai-Kwong Kwok, Axel Hoffmann, Joseph M. Luther, Matthew C. Beard, Dali Sun, Wei Zhang, Valentine Novosad
2023-07-26T18:37:58Z
http://arxiv.org/abs/2307.14447v1
# Tunable Magnon-Photon Coupling by Magnon Band Gap in a Layered Hybrid Perovskite Antiferromagnet ###### Abstract Tunability of coherent coupling between fundamental excitations is an important prerequisite for expanding their functionality in hybrid quantum systems. In hybrid magnonics, the dipolar interaction between magnon and photon usually persists and cannot be switched off. Here, we demonstrate this capability by coupling a superconducting resonator to a layered hybrid perovskite antiferromagnet, which exhibits a magnon band gap due to its intrinsic Dzyaloshinskii-Moriya interaction. The pronounced temperature sensitivity of the magnon band gap location allows us to set the photon mode within the gap and to disable magnon-photon hybridization. When the resonator mode falls into the magnon band gap, the resonator damping rate increases due to the nonzero coupling to the detuned magnon mode. This phenomenon can be used to quantify the magnon band gap using an analytical model. Our work brings new opportunities in controlling coherent information processing with quantum properties in complex magnetic materials. Hybrid quantum systems [1; 2; 3] offer an important pathway for harnessing different natural advantages of complementary quantum systems, leveraging the distinct properties of their constituent excitations. The fundamental excitations of magnetically ordered materials, i.e., magnons, provide efficient coupling with other excitations [4; 5], such as microwave photons [6; 7; 8; 9; 10; 11; 12], acoustic phonons [13; 14; 15; 16], and magnons themselves [17; 18; 19; 20; 21; 22; 23; 24], therefore holding promise for future integration with diverse quantum modules [25; 26; 27; 28]. In addition, coherent magnon interactions exhibit great controllability in different aspects, such as polarization [11], mode profile [21], phase [29; 30; 31; 32] and layer structure [20], allowing for the implementation of coherent magnon operations [33]. Recent demonstrations of coherent magnon-magnon coupling with controllable coupling strength by frequency detuning [34; 35] have further expanded the capability of distributed hybrid magnonic networks [4]. Despite this controllability, which is largely based on extrinsic control of magnetic systems, the intrinsic magnetic properties are rarely explored for manipulating coherent magnon interactions. To date, landmark demonstrations of hybrid magnonics have been centered on the ferrimagnetic insulator yttrium iron garnet (YIG) [6; 7; 8; 9; 10; 36] or metallic magnets such as NiFe [11; 12]. Their relatively simple and rigid chemical and magnetic structures limit the potential for developing highly tunable hybrid systems. On the other hand, the recent two-dimensional (2D) organic layered magnets [37] offer distinct advantages in their structure-enabled topological chirality and symmetry breaking [38]. One promising class of materials is 2D magnetic hybrid organic-inorganic perovskites (HOIPs) possessing both superior structural versatility and long-range magnetic order [39; 40; 41; 42]. They usually exhibit an interlayer antiferromagnetic (AFM) coupling [43], inducing the acoustic and optical magnon modes [19; 44] in the gigahertz (GHz) frequency range. In addition, the structural symmetry breaking leads to Dzyaloshinskii-Moriya interaction (DMI) [45], causing a finite spin canting [46; 47] and creating an intrinsic magnon band gap where the acoustic and optical modes intersect [48].
This is fundamentally different from the magnon band gap induced by an external field [49][19; 21; 22; 23; 24; 50] in that the DMI provides an intrinsic effective field for magnon-magnon coupling without the need for an external field. Furthermore, the large sensitivity of the magnon band gap to small temperature changes can lead to new opportunities for modulating coherent magnonic coupling [51]. In this Letter, we report a hybrid magnonic system consisting of a 2D HOIP, (CH\({}_{3}\)CH\({}_{2}\)NH\({}_{3}\))\({}_{2}\)CuCl\({}_{4}\) (Cu-EA) [48], coupled to a superconducting resonator. The high sensitivity of the superconducting resonator enables coherent magnon-photon coupling and avoided crossing with a small Cu-EA flake. By changing the temperature of the sample, the location of the DMI-induced magnon band gap can be adjusted so that the resonator photon mode completely falls into the gap and cancels the mode hybridization. In the non-hybridized state, the magnetic interaction with the resonator causes the resonator linewidth to broaden. Using our developed analytical model, the narrow-band linewidth broadening measurements can be used to extract the magnon band gap, which quantitatively agrees with the broadband FMR measurements. Our results highlight the opportunity of manipulating coherent mode hybridization with new quantum materials and probing their complex magnonic dispersion with narrow-band microwave characterizations. The chemical structure of Cu-EA features corner-sharing halogen (Cl) octahedra with the Cu atom situated at the center, as shown in Fig. 1(a). The canted inorganic CuCl\({}_{4}^{2-}\) octahedral structures allow for intralayer long-range magnetic order with superexchange Cu-Cl-Cu interactions, while the interlayer organic cations modulate the interlayer antiferromagnetic (AFM) coupling [43]. Raman spectroscopy of the Cu-EA [52] confirms the vibration modes of the octahedral structure (at 175, 250, and 280 cm\({}^{-1}\)) and the organic cation (at 100 cm\({}^{-1}\)) [53], as shown in Fig. 1(c). We have also conducted Inductively Coupled Plasma (ICP) spectroscopy on the sample, showing accurate stoichiometry of the elemental weight as compared with the chemical structure [54]. Fig. 1(d) shows the broad-band ferromagnetic resonance of a large Cu-EA crystal at 1.6 K at the parallel pumping condition, i.e., \(\mu_{0}H_{B}\parallel h_{\text{rf}}^{y}\), as illustrated in Fig. 2(b). Both the acoustic and optical modes are measured, which can be formulated as [19]: \[\omega_{a}=\mu_{0}\gamma\sqrt{2H_{E}(2H_{E}+M_{eff})}\frac{H}{2H_{E}} \tag{1}\] \[\omega_{o}=\mu_{0}\gamma\sqrt{2H_{E}M_{eff}\left(1-\frac{H^{2}}{4H_{E}^{2}}\right)} \tag{2}\] where \(H_{E}\) is the interlayer exchange coupling field, \(M_{eff}\) is the effective magnetization which contributes to the perpendicular demagnetization field, and \(\gamma/2\pi=(g_{e}/2)\times 28\) GHz/T is the gyromagnetic ratio, with \(g_{e}\) as the \(g\)-factor of the magnetization. Clear avoided crossing gaps between the two modes show the existence of a magnon band gap around 4 GHz. The coupled magnon spectra can be fitted to the hybrid mode expression \(\omega_{\pm}^{mm}=(\omega_{a}+\omega_{o})/2\pm\sqrt{(\omega_{a}-\omega_{o})^{2}/4+\delta^{2}}\), where \(\delta\) is the magnon-magnon coupling strength. The fitting curves are plotted in Fig. 1(d). The extracted parameters are \(\mu_{0}H_{E}=0.16\) T, \(\mu_{0}M_{eff}=80\) mT, \(g_{e}=2.3\), and \(\delta/2\pi=150\) MHz.
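For readers who want to reproduce the mode structure, the following minimal Python sketch evaluates Eqs. (1)-(2) and the hybridized branches \(\omega_{\pm}^{mm}\) using the fitted parameters quoted above; it illustrates the fit model only, not the raw measured spectra.

```python
import numpy as np

# Fitted parameters quoted in the text (used here only as illustrative inputs)
mu0_HE, mu0_Meff = 0.16, 0.080      # tesla
g_e, delta = 2.3, 0.150             # g-factor; magnon-magnon coupling strength in GHz
gamma = (g_e / 2.0) * 28.0          # GHz/T, gyromagnetic ratio

def f_acoustic(H):                   # Eq. (1); frequencies in GHz, H in tesla
    return gamma * np.sqrt(2 * mu0_HE * (2 * mu0_HE + mu0_Meff)) * H / (2 * mu0_HE)

def f_optical(H):                    # Eq. (2)
    return gamma * np.sqrt(2 * mu0_HE * mu0_Meff * (1 - H**2 / (4 * mu0_HE**2)))

def f_hybrid(H):                     # coupled acoustic-optical branches
    fa, fo = f_acoustic(H), f_optical(H)
    s = np.sqrt((fa - fo)**2 / 4 + delta**2)
    return (fa + fo) / 2 - s, (fa + fo) / 2 + s

H = np.linspace(0.0, 0.30, 601)      # applied field in tesla
f_lo, f_hi = f_hybrid(H)
i = np.argmin(f_hi - f_lo)
print(f"minimum acoustic-optical splitting: {1e3*(f_hi - f_lo)[i]:.0f} MHz at mu0*H = {1e3*H[i]:.0f} mT")
```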
Note that the actual saturation magnetization of Cu-EA can be larger than \(M_{eff}\) because the shape of the sample crystal is not a perfect two-dimensional system and the perpendicular demagnetization factor can be smaller than one. The strong magnon-magnon coupling observed in Cu-EA at the parallel pumping condition, which is absent in other layered [19] or synthetic [21; 22] antiferromagnets at the same pumping condition, is caused by the spontaneous canting of the octahedral CuCl\({}_{4}\)\({}^{2-}\) spin sites from their chiral DMI and the resultant overlap between the acoustic and optical modes [48]. Fig. 1(e) shows the temperature dependence of the extracted \(2H_{E}\) and the magnon band gap \(2\delta\) from Fig. 1(d). With the same \(y\)-axis proportion ratio in Fig. 1(e), a good overlap of \(H_{E}\) and \(\delta\) shows that they are proportional to each other at different temperatures. This suggests that the DMI-induced spin canting shares a similar mechanism with the interlayer exchange coupling in Cu-EA. Figure 1: (a) Lattice structure of layered perovskite antiferromagnet Cu-EA, with Cu filling the octahedral sites of Cl and the antiferromagnetic layers separated by the CH\({}_{3}\)CH\({}_{2}\)NH\({}_{3}\) molecules. (b) Optical microscope image of a small Cu-EA flake mounted onto a CPW superconducting resonator. (c) Raman spectroscopy of Cu-EA showing the high-frequency octahedral modes and the low-frequency organic structure modes. (d) Broad-band ferromagnetic resonance spectra of a large Cu-EA crystal measured at 1.6 K, which is used to extract the magnon band gap \(2\delta\), the interlayer exchange field \(2H_{E}\), and the magnon damping rate \(\kappa_{m}\). The colorbar shows the signals \(\Delta S_{21}\) after background subtraction. (e) Extracted \(2H_{E}\) and \(2\delta\) as a function of \(T\). (f) Mode anticrossing between magnons and photons at 5.5 K, with \(H_{B}\perp h_{\text{rf}}^{y}\). The colorbar shows the signals \(S_{21}\) in absolute values. The red curves are the fits with \(g/2\pi=45\) MHz. The black dashed lines denote the magnon and photon modes without interaction. The magnon damping rates, \(\kappa_{m}\), of the acoustic and optical modes are also extracted and are found to be weakly frequency and temperature dependent. In the range of 2-5 GHz, \(\kappa_{m}/2\pi\sim 50\) MHz for the acoustic mode and \(\sim 80\) MHz for the optical mode; see the Supplemental Materials for details [54]. To take advantage of the sensitivity of the superconducting resonator to small magnetic crystals, we precisely transfer a thin Cu-EA flake with lateral dimensions of 500 \(\mu\)m \(\times\) 200 \(\mu\)m and a thickness of 40 \(\mu\)m onto the center of a half-wavelength NbN coplanar waveguide (CPW) superconducting resonator with a signal line width of 20 \(\mu\)m, as shown in Fig. 1(b). The dimension matching between the signal line and the flake thickness allows for optimal coupling of the magnon excitations to the resonator. The loaded superconducting resonator exhibits a sharp peak at \(\omega_{p}/2\pi=3.5\) GHz and a zero-field half-width at half-maximum linewidth of \(\kappa_{p}/2\pi=0.4\) MHz at 1.6 K, which corresponds to a quality factor of \(\omega_{p}/2\kappa_{p}=4400\). The maximum mode splitting happens at 5.5 K between the acoustic magnon mode of Cu-EA and the resonator photon mode, shown in Fig. 1(f).
The peak positions of the avoided crossing can be fitted to the hybrid modes [6]: \[\omega_{\pm}^{mp}=(\omega_{m}+\omega_{p})/2\pm\sqrt{(\omega_{m}-\omega_{p})^{2}/4+g^{2}} \tag{3}\] where \(\omega_{m}\) is the magnon frequency, \(\omega_{p}\) is the photon frequency, and \(g\) is the magnon-photon coupling strength due to dipolar interaction. The field dependence of \(\omega_{p}\) can be extracted from the linear extrapolation of the background, and the field dependence of \(\omega_{m}\) can be obtained from the broad-band FMR spectrum. Fits to Eq. (3) yield \(g/2\pi=45\) MHz. Using the damping rates of \(\kappa_{p}/2\pi=2.7\) MHz for the superconducting resonator at 5.5 K and \(\kappa_{m}/2\pi=50\) MHz for the Cu-EA acoustic mode, we obtain a cooperativity of \(C=g^{2}/\kappa_{p}\kappa_{m}=15\). We note that even though the cooperativity becomes higher at lower temperature, e.g., 1.6 K, because of the much lower \(\kappa_{p}\), the real bottleneck of strong magnon-photon coupling is the ratio \(g/\kappa_{m}\), which is maximized at 0.95 at 5.5 K. The strong coupling regime requires both \(g/\kappa_{m}\) and \(g/\kappa_{p}\) to be greater than one [9]. See the Supplemental Materials for the temperature dependence of \(\kappa_{p}\) and \(\kappa_{m}\) [54]. Next, we investigate the temperature dependence of the magnon-photon interactions. As shown in Figs. 2(a) and (b), the signal line of the resonator generates both the in-plane and perpendicular Oersted fields, \(h_{\mathrm{rf}}^{y}\) and \(h_{\mathrm{rf}}^{z}\), respectively. In the orthogonal pumping condition (\(\mu_{0}H_{B}\perp h_{\mathrm{rf}}^{y}\)), the Oersted field components \(h_{\mathrm{rf}}^{y}\) and \(h_{\mathrm{rf}}^{z}\) only couple to the acoustic mode. In the parallel pumping condition (\(\mu_{0}H_{B}\parallel h_{\mathrm{rf}}^{y}\)), \(h_{\mathrm{rf}}^{y}\) couples to the acoustic mode and \(h_{\mathrm{rf}}^{z}\) couples to the optical mode. Thus, the field alignment allows for selective excitation of the acoustic mode in Fig. 2(c), or the mutual excitation of both modes in Fig. 2(d). The interaction with the acoustic mode is manifested by an avoided crossing at a constant field of \(\mu_{0}H_{\rm a}=95\) mT for both pumping geometries. The optical mode found in Fig. 2(d) shows a large temperature-dependent drift of its location, as marked by the dashed curves. The reversed anticrossing compared with the acoustic mode shows that the magnon frequency decreases as the field rises, agreeing with the feature of the optical mode as shown in Fig. 1(d). Figure 2: (a-b) Illustration of two different in-plane field alignments and their selective mode excitations. In (a), \(\mu_{0}H_{B}\perp h_{\mathrm{rf}}^{y}\) and only the acoustic mode is excited. In (b), \(\mu_{0}H_{B}\parallel h_{\mathrm{rf}}^{y}\) and both the acoustic and optical modes are excited. (c-d) Temperature dependence of the magnon-photon coupling evolutions from 1.5 to 6 K with two different magnetic field alignments. All the dispersions are centered at the resonator photon mode (\(\omega_{p}/2\pi\approx 3.5\) GHz). Dashed curves are guides to the eye for the acoustic mode crossing the resonator mode at \(\mu_{0}H_{a}=95\) mT and the optical mode crossing the resonator mode at different fields. (e-g) Illustration of the three regimes where the resonator mode is (e) above, (f) within, and (g) below the magnon band gap.
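The quoted numbers can be checked in a few lines of code: the sketch below evaluates the hybridized branches of Eq. (3) and the cooperativity with the values given above. The linear acoustic branch pinned to the resonator frequency at 95 mT is an assumption made only for this illustration, not a statement of the measured dispersion.

```python
import numpy as np

f_p, g = 3.5, 0.045            # GHz: photon mode and magnon-photon coupling (quoted values)
kap_p, kap_m = 0.0027, 0.050   # GHz: photon and acoustic-magnon damping rates at 5.5 K

print("cooperativity C =", round(g**2 / (kap_p * kap_m), 1))      # ~15, as quoted in the text

H = np.linspace(0.085, 0.105, 201)     # tesla, around the crossing field
f_m = f_p * H / 0.095                  # assumed linear acoustic branch crossing f_p at 95 mT

s = np.sqrt((f_m - f_p)**2 / 4 + g**2)
f_low, f_high = (f_m + f_p) / 2 - s, (f_m + f_p) / 2 + s          # Eq. (3)
print("minimum photon-magnon splitting =", round(1e3 * np.min(f_high - f_low)), "MHz")  # 2g = 90 MHz
```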
Due to the tunability of \(H_{E}\), the center frequency of the magnon band gap changes rapidly with temperature in the range from 1.5 to 6.0 K, thereby allowing the resonator mode (which is much less sensitive to temperature) to intersect the magnon band gap while maintaining a nearly constant quality factor. Three regimes of the magnon-photon coupling between the Cu-EA flake and the superconducting resonator are observed, with the relation between the magnon band gap and the resonator mode shown in Figs. 2(e-g). In regime (i) (\(T>3.5\) K), the magnon band gap is below the superconducting resonator frequency [Fig. 2(e)]. The resonator photon mode coherently interacts with the acoustic mode in Fig. 2(c), and both the acoustic and optical modes in Fig. 2(d). In regime (ii) (\(3.5\geq T\geq 3\) K), the acoustic and optical magnon modes cross each other and form the magnon band gap at the superconducting resonator mode frequency. This causes the resonator mode to fall inside the magnon band gap, leading to a moderate change of the peak amplitude and linewidth without a peak frequency shift around 95 mT. In regime (iii) (\(T<3\) K), where the magnon band gap is above the resonator mode, the acoustic mode resumes its anticrossing-like interaction with the resonator mode. In addition, for the \(\mu_{0}H_{B}\parallel h_{\rm rf}^{y}\) geometry where the optical mode also interacts with the resonator mode [Fig. 2(d)], the regime between the optical and acoustic modes is blurred, as shown in regimes (i) and (iii). This indicates that one of the two acoustic-optical hybrid magnonic modes is still near the resonator mode and maintains the magnon-photon interaction. Figure 3: Extracted effective magnon-photon coupling \(g\) as a function of \(T\). The resonator mode is within the magnon band gap between 3 and 3.5 K, yielding \(g=0\). Figure 4: (a) Superconducting resonator linewidth \(\kappa_{c}\) as a function of \(H_{B}\) at \(T=3.3\) K, where the resonator mode is inside the magnon band gap. (b) SC resonator linewidth change \(\Delta\kappa_{c}\) as a function of \(g_{\rm eff}^{2}\) for the CPW and LER resonator designs. Error bars denote the uncertainty of the base resonator linewidth drift under external magnetic fields. The red line is a fit to Eq. (4), with the slope quantifying the magnon band gap \(2\delta\). Figure 3 summarizes the extracted magnon-photon coupling strength, \(g\), as a function of \(T\). To verify these phenomena, we have also coupled another Cu-EA crystal to a lumped-element resonator (LER) [55]. This allows for a larger magnon-photon coupling strength while maintaining the same magnon band gap. For both the CPW resonator and LER, \(g\) quickly decreases in regime (ii) due to mode degeneracy breaking between the magnon mode and the resonator photon mode. The zero coupling strength for the CPW resonator is manifested by the continuous evolution of the resonator peak without mode anticrossing, as shown in Fig. 2(c-d). For the LER, a finite \(g\) can still be extracted in regime (ii), which is due to imperfect centering of the resonator mode in the magnon band gap when the magnon-photon coupling is large. The maximal acoustic mode coupling strengths for the CPW resonator are \(g_{\rm CPW}^{\perp}/2\pi=45\) MHz for \(\mu_{0}H_{B}\perp h_{\rm rf}^{y}\) and \(g_{\rm CPW}^{\parallel}/2\pi=28\) MHz for \(\mu_{0}H_{B}\parallel h_{\rm rf}^{y}\) at 5.5 K. Their difference quantifies the coupling ratio of the acoustic magnon mode between the in-plane (\(h_{rf}^{y}\)) and perpendicular (\(h_{rf}^{z}\)) Oersted fields of
the CPW: at \(\mu_{0}H_{B}\perp h_{\rm rf}\), both \(h_{rf}^{y}\) and \(h_{rf}^{z}\) couple to the acoustic mode, while at \(\mu_{0}H_{B}\parallel h_{\rm rf}\), only \(h_{rf}^{z}\) couples to the acoustic mode. The ratio can be calculated as \(h_{rf}^{y}/h_{rf}^{z}=\sqrt{(g_{\rm CPW}^{\perp})^{2}-(g_{\rm CPW}^{\parallel})^{2}}/g_{\rm CPW}^{\parallel}=1.25\). For the LER, the obtained ratio is 1.23. This suggests that \(h_{rf}^{z}\) plays an important role in magnon-photon coupling. When the magnon band gap is far from the resonator mode (e.g., 1.5 K and 6 K), a reduction of \(g\) from 6 K to 1.5 K reflects the change of coupling efficiency between the Oersted field and the canted magnetization at different biasing field directions. We plot the calculated prediction of the effective magnon-photon coupling, \(g_{\rm eff}\), for the acoustic magnon mode without considering the magnon band gap [54], and the trend nicely captures the experiment at low and high temperatures. We show that the magnon band gap of Cu-EA can be quantitatively extracted from the modulated magnon-photon interaction. When the resonator mode is inside the magnon band gap in regime (ii), the interaction between the magnon and photon modes leads to a linewidth broadening of the resonator photon mode. Such an effect has been previously observed in magnon-magnon coupled bilayers in the Purcell regime [56; 57; 58; 59]. We develop an analytical model for quantifying the change of the photon linewidth by considering two detuned magnon modes coupled to the photon mode. The photon damping rate \(\kappa_{c}\) can be expressed as: \[\kappa_{c}=\kappa_{c0}+(g_{\rm eff})^{2}\frac{\kappa_{m}}{\kappa_{m}^{2}+\delta^{2}}, \tag{4}\] where \(\kappa_{c0}\) is the intrinsic photon damping rate, \(g_{\rm eff}\) is the effective magnon-photon coupling strength as plotted in Fig. 3, \(\kappa_{m}\) is the magnon damping rate, and \(2\delta\) is the magnon-magnon band gap. The detailed derivation of the model is included in the Supplemental Materials [54]. Note that the information on \(g_{\rm eff}\) needs to be obtained from regime (iii), where the mode anticrossing between the magnon and photon modes is resumed. Eq. (4) shows that the change of linewidth \(\Delta\kappa_{c}=\kappa_{c}-\kappa_{c0}\) is proportional to \((g_{\rm eff})^{2}\), with the slope determined by two intrinsic magnon characteristics of the Cu-EA: \(\kappa_{m}\) and \(2\delta\). With two completely different superconducting resonator designs, i.e., the CPW resonator and LER, we find that the extracted \(\Delta\kappa_{c}\) nicely follows the linear dependence on \((g_{\rm eff})^{2}\), with a slope of \((210~{}\rm MHz)^{-1}\). For \(\kappa_{m}\), we take the average of the acoustic and optical modes, \(\kappa_{m}/2\pi=65~{}\rm MHz\). The magnon band gap is calculated to be \(\delta/2\pi=152~{}\rm MHz\), which is close to the value of 140 MHz around 3.5 K in Fig. 1(e). Thus, we confirm the validity of this new technique for quantifying the magnon band gap \(\delta\) of a small magnetic flake with a highly sensitive superconducting microwave resonator, where the linewidth change of the resonator mode acts as a probe to interact with the acoustic-optical hybrid magnon modes.
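A minimal numerical sketch of Eq. (4) is given below; the damping rate and band gap values are those quoted in the text and are used purely as illustrative inputs. The sketch demonstrates the linear dependence of the broadening on \(g_{\rm eff}^{2}\) and how the fitted slope can be inverted for \(\delta\); it is a toy evaluation of the single-detuned-mode formula, not a re-analysis of the measured data.

```python
import numpy as np

kap_c0 = 0.0004    # GHz, intrinsic photon damping rate (illustrative)
kap_m  = 0.065     # GHz, average magnon damping rate
delta  = 0.152     # GHz, half of the magnon band gap

def kappa_c(g_eff):
    """Photon damping rate when the resonator sits inside the magnon band gap, Eq. (4)."""
    return kap_c0 + g_eff**2 * kap_m / (kap_m**2 + delta**2)

g_eff = np.linspace(0.0, 0.045, 10)                        # GHz
slope = np.polyfit(g_eff**2, kappa_c(g_eff) - kap_c0, 1)[0]   # linear in g_eff^2
delta_recovered = np.sqrt(kap_m / slope - kap_m**2)        # invert the slope for delta
print(f"recovered delta/2pi = {1e3*delta_recovered:.0f} MHz")  # returns the input 152 MHz
```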
In summary, we demonstrate tunable magnon-photon coupling by adjusting the intrinsic magnon band gap in a layered perovskite antiferromagnet coupled to a superconducting resonator. The use of a high-quality-factor superconducting resonator allows for coherent interaction with the magnon excitations and the study of the unique magnon band gap in a magnetic material with narrow-band microwave measurements. The magnon-photon coupling strength can be tuned from a few tens of megahertz to zero by modifying the magnon band gap location with temperature. In the zero-coupling-strength state, where the resonator mode falls into the magnon band gap, probing the change of the photon mode linewidth also allows one to extract the value of the magnon band gap using an analytical model. Our results provide a new route to modifying the magnon-photon interaction as well as a new approach to studying the quantum properties of novel layered magnetic materials with cavity magnonics. To overcome the slow temperature tunability of the magnon band gap, we anticipate that other approaches, such as strain or electric fields [60; 61; 62], could control the magnetic properties at high speed and extend the applications in coherent information processing. **Acknowledgement.** D.S. and M.B. acknowledge the primary financial support through the Center for Hybrid Organic Inorganic Semiconductors for Energy (CHOISE), an Energy Frontier Research Center funded by the Office of Basic Energy Sciences, Office of Science within the U.S. Department of Energy (Hybrid perovskite synthesis, crystal preparation, structural characterization, and motivation of this work). This work was authored in part by the National Renewable Energy Laboratory (NREL), operated by Alliance for Sustainable Energy LLC, for the U.S. Department of Energy (DOE) under contract no. DE-AC36-08GO28308. The views expressed in this article do not necessarily represent the views of the DOE or the U.S. Government. Work at Argonne National Laboratory and the University of Illinois at Urbana-Champaign, including the superconducting resonator fabrication, design, ICP chemical analysis, and hybrid magnonics characterization, was supported by the U.S. DOE, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division under contract No. DE-SC0022060. Work at UNC-CH was supported by NSF-ECCS 2246254 for the experimental design, data analysis, theoretical analysis, and manuscript preparation. D.S. acknowledges the partial financial support from the Department of Energy grant DE-SC0020992 and the National Science Foundation grant DMR-2143642 for magnetic properties characterization. X.Z. acknowledges support by the National Science Foundation CAREER Award (Grant CBET-2145417) and LEAPS Award (Grant DMR-2137883) for the Raman characterization. Use of the Center for Nanoscale Materials (CNM), an Office of Science user facility, was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357.
2305.05333
Shocks Power Tidal Disruption Events
Accretion of debris seems to be the natural mechanism to power the radiation emitted during a tidal disruption event (TDE), in which a supermassive black hole tears apart a star. However, this requires the prompt formation of a compact accretion disk. Here, using a fully relativistic global simulation for the long-term evolution of debris in a TDE with realistic initial conditions, we show that at most a tiny fraction of the bound mass enters such a disk on the timescale of observed flares. To "circularize" most of the bound mass entails an increase in the binding energy of that mass by a factor $\sim 30$; we find at most an order unity change. Our simulation suggests it would take a time scale comparable to a few tens of the characteristic mass fallback time to dissipate enough energy for "circularization". Instead, the bound debris forms an extended eccentric accretion flow with eccentricity $\simeq 0.4-0.5$ by $\sim 2$ fallback times. Although the energy dissipated in shocks in this large-scale flow is much smaller than the "circularization" energy, it matches the observed radiated energy very well. Nonetheless, the impact of shocks is not strong enough to unbind initially bound debris into an outflow.
Taeho Ryu, Julian Krolik, Tsvi Piran, Scott Noble, Mark Avara
2023-05-09T10:45:04Z
http://arxiv.org/abs/2305.05333v3
# Shocks Power Tidal Disruption Events

###### Abstract
Accretion of debris seems to be the natural mechanism to power the radiation emitted during a tidal disruption event (TDE), in which a supermassive black hole tears apart a star. However, this requires the prompt formation of a compact accretion disk. Here, using a fully relativistic global simulation for the long-term evolution of debris in a TDE with realistic initial conditions, we show that at most a tiny fraction of the bound mass enters such a disk on the timescale of observed flares. To "circularize" most of the bound mass entails an increase in the binding energy of that mass by a factor \(\sim 30\); we find at most an order unity change. Our simulation suggests it would take a time scale comparable to a few tens of the characteristic mass fallback time to dissipate enough energy for "circularization". Instead, the bound debris forms an extended eccentric accretion flow with eccentricity \(\simeq 0.4-0.5\) by \(\sim 2\) fallback times. Although the energy dissipated in shocks in this large-scale flow is much smaller than the "circularization" energy, it matches the observed radiated energy very well. Nonetheless, the impact of shocks is not strong enough to unbind initially bound debris into an outflow.

black hole physics \(-\) gravitation \(-\) hydrodynamics \(-\) galaxies: nuclei \(-\) stars: stellar dynamics

Taeho Ryu, Julian Krolik, Tsvi Piran, Scott C. Noble, Mark Avara

## 1 Introduction

A tidal disruption event (TDE) takes place when a star that has wandered into the vicinity of a supermassive black hole (SMBH) is torn apart by the SMBH's gravitational field. About half of the stellar mass is left unbound and is ejected to infinity, while the other (bound) half returns to the vicinity of the black hole. The end result is a flare of optical/UV, X-ray, and at times, radio emission. During the last decades, TDEs were transformed from a theoretical prediction (Hills, 1988; Rees, 1988) to an observational reality (Gezari, 2021). With the use of numerous detectors, ranging from ROSAT (and now eROSITA) in X-rays, to GALEX in the UV, and multiple telescopes in the optical band (including systematic surveys like SDSS Blanton et al., 2017, ASAS-SN Shappee et al., 2014, Pan-STARRS Kaiser et al., 2002, and, more recently, ZTF Bellm et al., 2019), more than a hundred TDEs have now been detected. Upcoming observations with the Rubin Observatory will provide an overwhelming amount of data in the near future.

Although TDEs are of great interest in their own right, their properties offer a wide range of opportunities to learn about other astrophysical questions. They can reveal quiescent SMBHs and possibly permit inference of their masses. They present otherwise unobtainable information about non-steady accretion onto SMBHs and the conditions for jet-launching. In addition, understanding the rates of TDEs would reveal valuable information concerning the stellar dynamics in galaxy cores.

Early theoretical predictions pointed out that the bound debris returns to the BH at a rate \(\propto t^{-5/3}\) (Rees, 1988; Phinney, 1989). It was then speculated that, having returned, matter quickly forms a disk whose outer radius is comparable in size to the pericenter of the star's original trajectory.
With such a small scale, the inflow time through the disk should be so short that the light emitted by the disk would follow the matter fallback, i.e., likewise \(\propto t^{-5/3}\). With a characteristic temperature \(\sim 10^{5}-10^{6}\) K, the emitted light would be in the FUV/EUV or soft X-ray band, and the peak luminosity would be much larger than Eddington. However, the observed luminosity rarely reaches the Eddington luminosity for the expected SMBH masses. In addition, when more TDEs were discovered in the optical, it was realized that typical temperatures are a few \(10^{4}\) K (Gezari, 2021), implying that the radiating area is much larger than that of a small disk whose radial scale is similar to the star's pericenter. Moreover, the total energy radiated during the flare period is generally two orders of magnitude smaller than the energy expected if half the stellar mass had been efficiently accreted onto a SMBH. In fact, it is an order of magnitude smaller than the "circularization" energy that would have been emitted during the initial formation of the small disk envisioned. That the radiated energy is so small is sometimes called the "inverse energy crisis" (Piran et al., 2015; Svirski et al., 2017). A possible explanation for the low luminosity and low temperatures observed is that the energy produced by the accretion disk is reprocessed by a radiation-driven wind ejected from the disk itself (Strubbe & Quataert, 2009). If this wind carries a significant kinetic energy, it would also resolve the "inverse energy crisis" (Metzger and Stone, 2016) (see, however Matsumoto and Piran, 2021). An alternative possibility is that matter does not circularize quickly and most of the observed emission arises from self-intersection shocks at the apocenter (Shiokawa et al., 2015; Piran et al., 2015; Krolik et al., 2016). These shocks, which are expected to be strong at and shortly after the time of maximum mass return, take place at \(\simeq O(10^{3})r_{\rm g}\) from the BH (\(r_{\rm g}\equiv GM_{\bullet}/c^{2}\) for BH mass \(M_{\bullet}\)), are consistent with the observed luminosity, temperature, line width and total energy generated (Ryu et al., 2020). The long-term fate of bound debris is less clear. Because it is created on highly eccentric orbits, with only a small further diminution in angular momentum it may be able to fall directly into the black hole, releasing very little energy (Svirski et al., 2017). Alternatively, when there has been time for the magnetorotational instability (MRI) to build strong magnetohydrodynamic (MHD) turbulence and for the gas to lose energy to radiation, the debris may accrete gradually, while radiating efficiently (Shiokawa et al., 2015). Although it may be possible by observational means to determine whether one of these two different scenarios occurs, numerical simulations of the disruption and subsequent accretion process may provide an alternative way to resolve this issue. However, such simulations are hindered by the extreme contrast in length and timescales involved. Adding fully relativistic features that are critical for some of the physics ingredients also poses technical challenges. 
Aiming to clarify the many questions regarding the evolution of the bound debris' energy, angular momentum, and location, in this work we present a fully relativistic numerical simulation of a complete tidal disruption of a realistic \(3M_{\odot}\) star by a \(10^{5}M_{\odot}\) SMBH in which we follow the system long enough to see the majority of the bound mass return to the black hole. Several of these features have never previously appeared in a global TDE simulation: fully relativistic treatment, both in hydrodynamics and stellar self-gravity; a main-sequence internal structure for the star's initial state; and its long duration. Our simulation scheme does not include time-dependent radiation transfer; instead, we make the approximation (for our parameters, well-justified for most of the debris mass) that the radiation pressure is the value achieved in LTE and does not diffuse relative to the gas.

The structure of the work is as follows: we define the physical problem and present our numerical scheme in §2. We discuss the results in §3 and their implications in §4. In this last section we also compare our results to previous work. We conclude in §5, where we also discuss the possible observational implications of this work.

## 2 The Calculation

### Physics Problem

To provide a context for our choice of numerical methods and parameters, we begin with a brief discussion of our problem's overall structure. We begin with a 3 \(M_{\odot}\) star (radius \(R_{\star}=2.4\ R_{\odot}\)) whose internal structure is taken from a MESA (Paxton et al., 2011) evolution to middle-age on the main-sequence, the age at which the hydrogen mass fraction in its core has fallen to 0.5. The star (initially placed at \(r\simeq 900\ r_{\rm g}\) from the black hole) approaches a SMBH of \(M_{\bullet}=10^{5}\ M_{\odot}\) on a parabolic orbit with a pericenter distance of \(r_{\rm p}\simeq 110\ r_{\rm g}\), just close enough to be fully disrupted. Although the nominal tidal radius \(r_{\rm t}=R_{\star}(M_{\bullet}/M_{\star})^{1/3}\) is \(\simeq 370\ r_{\rm g}\) for this black hole mass and stellar mass, the critical distance within which a star is fully disrupted is given by \(\Psi(M_{\bullet},M_{\star})r_{\rm t}\), where the order-unity factor \(\Psi\) encodes general relativistic corrections through its \(M_{\bullet}\) dependence and stellar structure corrections through its \(M_{\star}\) dependence (Ryu et al., 2020).

As the star passes through pericenter, it begins to be torn apart; the process is complete by the time its center-of-mass reaches \(\sim 7500r_{\rm g}\). During the debris' first orbit, its trajectory is ballistic, with specific orbital energy \(E\) and specific angular momentum \(J\) very close to those of the star's center-of-mass. Immediately after the disruption, the distribution of mass with energy \(dM/dE\) is roughly a square-wave and is approximately symmetric with respect to \(E=0\). Although order-of-magnitude arguments suggest that the half-width of the energy square-wave is \(\Delta\varepsilon\simeq GM_{\bullet}R_{\star}/r_{\rm t}^{2}\), Ryu et al. (2020) showed that this estimate should be multiplied by an order-unity correction factor \(\Xi(M_{\bullet},M_{\star})\), which depends on both \(M_{\bullet}\) and \(M_{\star}\) as it accounts for both relativistic and stellar internal structure effects.
For our parameters, \(\Xi\approx 1.64\), so that the energy half-width is \[\Delta E\simeq 1.4\times 10^{-4}c^{2}\left(\frac{\Xi}{1.64} \right)\left(\frac{M_{\bullet}}{10^{5}\;M_{\odot}}\right)^{1/3}\\ \times\left(\frac{M_{\star}}{3\;M_{\odot}}\right)^{2/3}\left( \frac{R_{\star}}{2.4R_{\odot}}\right)^{-1}\;. \tag{1}\] The semimajor axis \(a\) and the eccentricity \(e\) of the bound debris with \(E=-\Delta E\) are \[a=\frac{GM_{\bullet}}{2E}\simeq 3600r_{\rm g}\left(\frac{\Xi}{ 1.64}\right)^{-1}\\ \left(\frac{M_{\bullet}}{10^{5}\;M_{\odot}}\right)^{-1/3}\left( \frac{M_{\star}}{1\;M_{\odot}}\right)^{-2/3}\left(\frac{R_{\star}}{2.4\;R_{ \odot}}\right), \tag{2}\] and \[e\simeq 1-0.07\left(\frac{\Xi}{1.64}\right)\left(\frac{M_{\bullet}}{10^{5}\;M_ {\odot}}\right)^{-1/3}\left(\frac{M_{\star}}{3\;M_{\odot}}\right)^{1/3}, \tag{3}\] respectively. The apocenter distance of the debris with \(E=-\Delta E\) is then \((1+e)a\simeq 7000r_{\rm g}\). The bound debris, i.e., debris with \(E<0\), must return to the vicinity of the SMBH. The maximal mass return rate occurs when the debris with \(E\simeq-\Delta E\) returns. The fallback time scale of the material with \(E\simeq-\Delta E\) is very nearly its orbital period, \[t_{0}=\frac{\pi}{\sqrt{2}}\frac{GM_{\bullet}}{\Delta E^{3/2}} \simeq 7.6\ \mathrm{days}\ \left(\frac{\Xi}{1.64}\right)^{-3/2}\\ \left(\frac{M_{\bullet}}{10^{5}\;M_{\odot}}\right)^{1/2}\left( \frac{M_{\star}}{3\;M_{\odot}}\right)^{-1}\left(\frac{R_{\star}}{2.4\;R_{ \odot}}\right)^{3/2}. \tag{4}\] The maximal fallback rate is then \[\dot{M}_{0}\simeq\frac{M_{\star}}{3t_{0}}\simeq 0.13\;M_{ \odot}\,\mathrm{days}^{-1}\left(\frac{\Xi}{1.64}\right)^{3/2}\\ \times\left(\frac{M_{\bullet}}{10^{5}\;M_{\odot}}\right)^{-1/2} \left(\frac{M_{\star}}{3\;M_{\odot}}\right)^{2}\left(\frac{R_{\star}}{2.4\;R_{ \odot}}\right)^{-3/2}. \tag{5}\] The different scales involved are why the problem is a difficult one for numerical simulation. Fluid travels through regions where the characteristic scale on which gravity changes runs from \(\sim r_{\rm g}\) to \(\sim 10^{4}r_{\rm g}\), while the structure of the star varies on a scale that is a fraction of \(R_{\star}\sim 15r_{\rm g}\). Similarly, the characteristic dynamical timescale ranges from \(\sim r_{\rm g}/c\) to \(\sim 10^{6}r_{\rm g}/c\). ### Code To treat this problem, we perform a fully relativistic global hydrodynamics simulation using a software package comprising three core codes, PatchworkMHD (Avara et al., in preparation), Harm3d(Noble et al., 2009), and a relativistic self-gravity solver (Ryu et al., 2020). We use the intrinsically conservative general relativistic magneto-hydrodynamics (GRMHD) code Harm3d(Noble et al., 2009) to solve the equations of relativistic pure hydrodynamics. This code employs the Lax-Friedrichs numerical flux formula and uses a parabolic interpolation method (Colella & Woodward, 1984) with a monotonized central-differenced slope limiter. Because of its robust algorithm, it has been used for studying a wide variety of SMBH accretion problems (e.g., Noble et al., 2009, 2012; Shiokawa et al., 2015; Ryu et al., 2020, 2020). We take radiation pressure into account by setting the pressure \(p=\rho kT/\bar{m}+aT^{4}/3\) when the internal energy density \(u=(3/2)\rho kT/\bar{m}+aT^{4}\). Here \(\rho\) is the proper rest-mass density, \(\bar{m}\) is the mass per particle, and \(T\) is the temperature. 
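For concreteness, a minimal numerical sketch of this gas-plus-radiation closure (not the Harm3d implementation; the mean molecular weight is an assumed, illustrative value) is:

```python
import numpy as np

# Gas + radiation closure used in the text:
#   p = rho*k_B*T/mbar + a_rad*T**4/3 ,  u = (3/2)*rho*k_B*T/mbar + a_rad*T**4
# The ratio p/u then fixes an index gamma_eff = 1 + p/u, which runs from 5/3
# (gas-pressure dominated) to 4/3 (radiation-pressure dominated).

k_B   = 1.3807e-16    # erg/K
a_rad = 7.5657e-15    # erg cm^-3 K^-4
m_p   = 1.6726e-24    # g

def gas_radiation_eos(rho, T, mu=0.6):   # mu is an assumed, illustrative mean molecular weight
    mbar  = mu * m_p
    p_gas = rho * k_B * T / mbar
    p_rad = a_rad * T**4 / 3.0
    u     = 1.5 * p_gas + 3.0 * p_rad    # internal energy density
    p     = p_gas + p_rad
    return p, u, 1.0 + p / u

for rho in (1e-6, 1e-12):                # gas- vs radiation-dominated examples (cgs)
    p, u, gamma_eff = gas_radiation_eos(rho, T=1e5)
    print(f"rho = {rho:.0e} g/cm^3 : gamma_eff = {gamma_eff:.3f}")
```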
We can then define an equation of state with an "effective adiabatic index" (Shiokawa et al., 2015) that varies between \(\gamma=4/3\) and \(5/3\) depending on the ratio of the gas pressure and radiation pressure.

Our self-gravity solver is described in detail in Ryu et al. (2020). In brief, it constructs a metric valid in the star's center-of-mass frame by superposing the potential found by a Poisson solver operating in a tetrad frame defined at the center-of-mass on top of the metric for the BH's Kerr spacetime described in the center-of-mass frame. Because this is a compact free-fall frame, the spacetime for the star and its close surroundings is always very nearly flat.

The large contrasts in length and time scales noted earlier make such a computation prohibitively expensive if the entire range of scales is resolved in a single domain. To overcome this difficulty, we introduce multiple domains with PatchworkMHD (Avara et al., in preparation). PatchworkMHD is an extension of the original purely hydrodynamic Patchwork code (Shiokawa et al., 2018) that both enables MHD and introduces a number of algorithmic optimizations. Both versions create a multiframe / multiphysics / multiscale infrastructure in which independent programs simultaneously evolve individual patches with their own velocities, internal coordinate systems, grids, and physics repertories. The evolution of each patch is parallelized in terms of subdomains; where different patches overlap, the infrastructure coordinates boundary data exchange between the relevant processors in the different patches. A further extension of PatchworkMHD, specialized to problems involving numerical relativity, is described in Bowen et al. (2020).

### Domain setup

During the first part of a TDE, a star travels through extremely rarefied gas as it traverses a nearly parabolic trajectory around the black hole. This situation is a prime example of the contrasts motivating our use of a multipatch system: the characteristic lengthscales inside the star are much smaller than those in the surrounding gas; in addition, self-gravity is important inside and near the star, but not in the remainder of the volume. Consequently, during this portion of the event we employ PatchworkMHD and apply it to two patches: one covering the star, the other covering the remainder of the SMBH's neighborhood. Once the star is fully disrupted, there is no further need of the star patch, and it is removed, its content interpolated onto a single remaining patch, evolving the entire region around the SMBH.

#### 2.3.1 The star's pericenter passage--two-patch simulation

This first stage begins with the initial approach of the star to pericenter and ends when the star's center-of-mass trajectory reaches a distance from the SMBH \(\simeq 20r_{\rm t}\). It ends here because the star is then fully disrupted. During this stage, a rectangular-solid Cartesian domain denoted _Domain1_ covers the volume around the star; it is completely embedded in a larger spherical-coordinate domain denoted _Domain2_ that ultimately covers a large spherical region centered on the SMBH. Following the methods described in Ryu et al. (2020), in _Domain1_ we follow the hydrodynamics of gas with self-gravity in a frame that follows the star's center of mass.
Figure 1: Successive moments in a full tidal disruption event simulated using two domains: The density distribution of the star before (1) and after (2) the pericenter passage and the debris after the pericenter passage (3, 4) in the equatorial plane. The yellow dot in the big _left_ panel indicates the BH. The cyan box in the four _right_ panels depicts the boundary of _Domain1_. Once the star is disrupted, we increase the size of the box (from 2 to 3) to keep the debris inside _Domain1_, where the self-gravity is calculated, as much as possible. Panels are not drawn to scale.

Initially, the domain is a cube with edge-length \(5R_{\star}\), and the cell size in each dimension is \(\simeq R_{\star}/25\). The orientation of this box relative to the black hole is rotated during the disruption in order to follow the direction of the bulk of the tidal flow, which is easily predicted from Kepler's laws. As the debris expands, the cube is adaptively extended with constant cell-size to keep the debris inside the domain for as long as possible. As the debris expands, a fraction of the debris crosses smoothly from _Domain1_ to _Domain2_, where we continue to evolve it under the gravity of the SMBH without any gas self-gravity.

For _Domain2_, we adopt modified spherical coordinates in Schwarzschild spacetime. That is, if the code's three spatial coordinates are \((x_{1},x_{2},x_{3})\), they can represent spherical coordinates \((r,\theta,\phi)\) through the relations \[r=e^{x_{1}},\] \[\theta=0.5\pi[1.0+h_{1}x_{2}+(1-h_{1}-\frac{2.0\theta_{0}}{\pi}){x_{2}}^{h_{2}}],\] \[\phi=x_{3}, \tag{6}\] where \(h_{1}\) and \(h_{2}\) are tuning parameters that determine the vertical coordinate structure (\(h_{1}\simeq 0.03\) and \(h_{2}\simeq 9\)), and \(\theta_{0}\) is the angle from the polar axis to the \(\theta\)-boundaries. In this coordinate system, the radial grid cells have a constant ratio of cell-dimension to radius, and the \(\theta\) grid cells are concentrated near the mid-plane. This coordinate system is suitable for modelling systems that involve a wide range of radial scales and contain a disk-like structure near the mid-plane. To be computationally efficient, instead of fixing the size and the tuning parameters throughout the simulation, we flexibly adjust the size and the resolution of _Domain2_ so that it is large enough to contain the entire debris, but we do not waste cells on regions where there is no debris. This strategy reduces the computational cost by a large factor. To ensure proper resolution during the period of flexible domain size, we require at least 15-20 cells per scale height in all three dimensions. In addition, at all times during the two-patch evolution, we keep the cell sizes in the overlapping regions of both domains comparable.

At its largest extent, _Domain2_ runs from \(r_{\rm min}=40r_{\rm g}\) to \(r_{\rm max}=18000r_{\rm g}\). The maximum radius is chosen to be greater than the apocenter of debris that would return to the black hole within a time \(4t_{0}\); our simulation ran for \(3t_{0}\). The minimum radius was chosen by balancing two opposing goals: minimizing the mass lost through the inner radial boundary while maximizing the time-step so as to limit computational expense. Similarly, when _Domain2_ has its largest volume, the polar angle extends from \(\theta_{0}=2^{\circ}\) to \(\pi-\theta_{0}=178^{\circ}\). The azimuthal angle \(\phi\) covers a full \(2\pi\) when _Domain2_ is largest. We adopt outflow boundary conditions at all boundaries of _Domain2_.
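As an aside, the mapping of Equation (6) can be sketched in a few lines; taking \(x_{2}\) to run over \([-1,1]\) is an inference from the quoted limits \(\theta_{0}\) and \(\pi-\theta_{0}\), not a statement about the code's internals.

```python
import numpy as np

# Sketch of the Domain2 grid mapping, Eq. (6):
#   r = exp(x1),   theta = 0.5*pi*(1 + h1*x2 + (1 - h1 - 2*theta0/pi)*x2**h2)
# With h2 odd and x2 in [-1, 1], theta runs from theta0 to pi - theta0, radial
# cells keep a fixed dr/r, and theta cells crowd toward the midplane (x2 = 0).

h1, h2 = 0.03, 9
theta0 = np.deg2rad(2.0)

def r_of(x1):
    return np.exp(x1)

def theta_of(x2):
    return 0.5 * np.pi * (1.0 + h1 * x2 + (1.0 - h1 - 2.0 * theta0 / np.pi) * x2**h2)

x2 = np.linspace(-1.0, 1.0, 7)
print(np.rad2deg(theta_of(x2)))   # endpoints ~2 and ~178 deg, nearly flat around 90 deg
print(np.diff(r_of(np.linspace(np.log(40.0), np.log(18000.0), 5))))   # dr grows with r
```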
When it is maximally extended azimuthally, we provide boundary conditions to the processor domains having surfaces at \(\phi=0\) and \(\phi=2\pi\) by matching those with the same radial and polar angle locations. We show in Figure 1 the density distribution of the star before the pericenter passage (_top-left_ of the four small panels), during the passage (_top-right_) and the debris after the passage (_bottom_) in the equatorial plane in the two-domain simulation. The cyan box demarcates the boundary of _Domain1_. These figures demonstrate how, as the star is disrupted, the stellar debris smoothly crosses the inter-patch boundary between _Domain1_ and _Domain2_.

#### 2.3.2 Single-domain simulation

When only a tiny fraction of the star's original mass remains within _Domain1_, self-gravity becomes irrelevant, and gradients in gas properties are no longer connected with \(R_{\star}\). We therefore remove _Domain1_ and continue the simulation using only _Domain2_. In this stage, it takes its maximum extent. Shocks caused by the stream-stream intersection dissipate the orbital energy into thermal energy, which results in the vertical expansion of the streams. The vertical coordinate system used for the early evolution (Equation 6) is not suitable for resolving gas at large heights, as the resolution becomes increasingly crude as \(z\) increases. To better resolve the gas at high \(z\), we reduce the concentration of \(\theta\) cells toward the midplane while leaving the \(r\) and \(\phi\) grids untouched. To do this, we redefine \(\theta(x_{2})\): \[\theta=\alpha(\tanh[b(x_{2}-a)]+\tanh[b(x_{2}+a)])+0.5\pi, \tag{7}\] where \(\alpha=-(0.5\pi-\theta_{0})/[\tanh(b(-0.5-a))+\tanh(b(-0.5+a))]\). Here, \(a\) and \(b\) are a set of tuning parameters that determine the vertical structure. At this stage, we fix the domain extent of \(r\) and \(\phi\), but we keep adjusting the domain extent and the resolution of the vertical structure flexibly, whenever it is necessary, to ensure sufficiently high vertical resolution (at least 20 cells per vertical scale height) by properly choosing the cell number (within \(80-120\)), \(\theta_{0}\) (\(2-15^{\circ}\)) and the tuning parameters (\(a\simeq 0.32-0.34\) and \(b\sim 9.8\)).

## 3 Results

### Overview

The star becomes strongly distorted as it passes through the pericenter (see Figure 1) and then falls apart entirely as it travels farther away from the black hole. When its entire mass has been dispersed, the orbital energy distribution of the debris \(dM/dE\) is not confined within the order-of-magnitude estimate of the energy distribution's width \(\Delta\epsilon\); it is about twice as wide and has extended wings (Figure 2). The mass of bound gas (\(E<0\)) is \(\simeq 0.48M_{\star}\), slightly smaller than that of the unbound gas (\(\simeq 0.52M_{\star}\)). The energy at which the peak mass return rate (Figure 3) occurs is \(E\simeq-1.64\Delta\epsilon\), or \(\Xi=1.64\). If we assume the debris follows ballistic orbits, the energy distribution presented in Figure 2 may be translated into a mass fallback rate as a function of time (Figure 3). The mass return rate peaks earlier by a factor of \(|\Xi|^{-1.5}\simeq 0.5\) than the traditional order-of-magnitude estimate predicts, and the maximal fallback rate is greater by a factor \(\simeq 2.5\).

Figure 2: The energy distribution of debris at \(t\simeq 0.4t_{0}\). The vertical dashed line (\(E/\Delta\epsilon=-\Xi\simeq-1.64\)) shows the characteristic energy at which the fallback rate peaks (see Figure 3).

Figure 3: The mass fallback rate \(\dot{M}\), using the energy distribution shown in Figure 2. The rate and time are normalized by \(\dot{M}_{0}=M_{\star}/3t_{0}\) and \(t_{0}\), respectively. The horizontal dashed line indicates the Eddington limit with a radiative efficiency of 0.01 and the diagonal dashed line a power law of \(t^{-5/3}\).

After the bound debris goes out through its orbital apocenter and returns to the region close to the BH, it undergoes multiple shocks (initially near pericenter and apocenter), and finally forms an eccentric flow, as illustrated in Figure 4 (see Section 3.3 for more details). Near pericenter, strengthening vertical gravity and orbital convergence compress the returning debris, creating a "nozzle" shock (as predicted by Evans & Kochanek 1989) visible at \(t\gtrsim 0.5t_{0}\). Over time, the shocked gas becomes hotter and thicker. On the way to apocenter, the gas cools adiabatically (Footnote 1). Near apocenter, the previously-shocked outgoing debris collides with fresh incoming debris, creating another shock (the "apocenter" shock). Previously-shocked gas close to the orbital plane is deflected inward toward the BH, while the portion farther from the plane is deflected above and below the incoming stream (see Section 3.4). These deflections broaden the angular momentum distribution. A small part of the debris loses enough angular momentum that it acquires a pericenter smaller than the star's pericenter. Other gas gains angular momentum, which results in the nozzle shock-front gradually extending to larger and larger radii.

Footnote 1: We ignore here the effect of recombination as this energy is negligible compared with the orbital energy.

Figure 4: The density distribution around the black hole (yellow dot) at four different times, \(t/t_{0}=0.5\), 1, 2 and 3. The extent of the inset is \(500r_{\rm g}\).

At \(t\simeq(1-2)t_{0}\), the debris in the apocenter region undergoes a dramatic transition in shape, from well-defined incoming and outgoing streams to an extended eccentric accretion flow. By the time the return rate of the newly incoming debris declines, the mass that had arrived earlier becomes large enough to significantly disrupt the newly incoming debris' orbit. The space inside the apocenter region is then quickly filled with gas. The outcome is an extended eccentric accretion flow (\(e\simeq 0.4-0.5\) at \(t\simeq 3t_{0}\)), most of whose mass resides at radii \(\sim 10^{3}-10^{4}r_{\rm g}\) (see top panel of Figure 5). At \(t\simeq 3t_{0}\), only \(2-3\times 10^{-2}\)\(M_{\odot}\) can be found inside \(2r_{\rm p}\). By contrast, as shown in the bottom panel of Figure 5, from \(t\approx t_{0}\) onward, nearly all the thermal energy is found at small radii, a condition that has consequences for the time-dependence of escaping radiation. In other words, _circularization is not prompt_: the flow retains significant eccentricity, and the great majority of the gas remains at a distance \(10^{3}-10^{4}r_{\rm g}\gg r_{\rm p}\) even after several characteristic timescales. That this is so can also be seen from another point of view. At \(t\simeq 3t_{0}\), the total amount of dissipated energy is only 10% of the energy, \(E_{\rm circ}\equiv GM_{\bullet}/4r_{\rm p}\), required for the debris to fully "circularize" into a compact disk on the commonly-expected radial scale of \(2r_{\rm p}\) (see _bottom_ panel of Figure 5).
Extrapolating this slow energy dissipation rate to late times suggests that true "circularization" would take a few tens of \(t_{0}\). The shocked gas expands outward quasi-symmetrically (Figure 6). Because the intrinsic binding energy of the debris is much smaller than \(E_{\rm circ}\), the dissipated energy is large enough to be comparable to the specific orbital energy. As a result, the expanding material is marginally bound. The radial expansion speed of the gas near the photosphere (\(r\simeq 7000-10000r_{\rm g}\), Figure 7) at \(t\simeq 3t_{0}\) is \(0.005-0.01c\simeq 1500-3000\) km/s; the associated specific energy is \(\lesssim 10^{-4}c^{2}\), comparable to the intrinsic energy scale, \(\Delta E\). Although we do not incorporate radiation transfer into the simulation, we estimate the luminosity in post-processing (see § 3.6). The bolometric luminosity rises to \((6-7)\times 10^{43}\) erg/s by \(t\simeq t_{0}\); its effective temperature on the photosphere is generally close to \(\sim 2\times 10^{5}\) K.

Figure 5: Accumulated mass (_top_) normalized by \(M_{\star}\) and thermal energy (_bottom_) normalized by \(M_{\star}E_{\rm circ}\) as a function of the distance from the SMBH at \(t/t_{0}\simeq 0.5\), 1, 2, and 3. The dashed black vertical line indicates the pericenter distance of the original stellar orbit. The mass expelled through the radial inner boundary at \(r=40r_{\rm g}\) and the thermal energy carried by the expelled mass are included as if the accreted mass is confined within \(r=40r_{\rm g}\).

Figure 6: The azimuthally integrated density distribution at \(t/t_{0}=0.5\), 1, 2, and 3.

Figure 7: The location of the thermalization photosphere (magenta curves) as seen by a distant observer plotted over the density distribution at \(\phi=0\) at \(t/t_{0}=1\), 2, 3 (_top_ panels) and at \(\phi=90^{\circ}\), \(180^{\circ}\) and \(270^{\circ}\) at \(t/t_{0}=3\) (_bottom_ panels). We define the thermalization optical depth as \(\sqrt{\tau_{\rm T}\tau_{\rm ff}}\), for \(\tau_{\rm T}\) (\(\tau_{\rm ff}\)) the Thomson (absorption) optical depths integrated radially inwards from the outer boundary.

### Formation of shocks

It is convenient to divide the multiple shocks by approximate location: pericenter or apocenter. The compression and heating of the gas near pericenter are depicted in detail in Figure 8. When the incoming stream is narrow and well-defined (the two _upper_ panels at \(t\lesssim t_{0}\)), the nozzle shock structure can be described in terms of two components. As the different portions of the returning stream converge, adiabatic compression raises the temperature at the center of the stream. The shock itself runs more or less radially across the stream, extending both inward and outward from the stream center. However, at later times (beginning at \(t\gtrsim t_{0}\)), the structure becomes more complex. Matter that has been deflected onto lower angular momentum orbits circulates in the region inside \(r_{\rm p}\) and develops a pair of nearly-stationary spiral shocks. The shock closest to the position of the nozzle shock stretches progressively farther outward, reaching \(\sim 2000r_{\rm g}\) by \(t\simeq 2t_{0}\) (see Figure 4). However, while the shock extends to greater radii, it also loses strength. A similar progressive widening and weakening of the nozzle shock was found by Shiokawa et al. (2015).

Figure 8: The density (_left_ panels) and temperature (_right_ panels) distribution near the black hole (yellow dot) at four different times. Note the stationary spiral shock structure that appears around \(2t_{0}\).

Outgoing previously-returned matter intersects the path of newly-arriving matter in the apocenter region because a combination of apsidal rotation due to the finite duration of the disruption and relativistic apsidal precession causes earlier and later stream orbits to be misaligned (Shiokawa et al., 2015). When the apocenter shock first forms, it is relatively close to the black hole (\(\gtrsim 1000r_{\rm g}\)) because the very first debris to return has an orbital energy more negative than \(-\Delta E\). As the mass-return rate rises, its orbital energy also increases, so the debris apocenter moves outward. However, even at \(t\simeq t_{0}\), when the shock is located at \(r\simeq 6000r_{\rm g}\), it is found closer to the BH than the apocenter distance corresponding to \(E=-\Delta E\) because the outgoing stream has lost orbital energy to dissipation in the nozzle shock. At still later times, the apocenter shock moves further inward as the mean energy of the previously-shocked matter decreases further. Some of the outgoing material, upon collision with the incoming stream, is deflected both horizontally and vertically. In the left panel of Figure 9, we show the temperature distribution in the equatorial plane at \(t\simeq t_{0}\). In the _right_ panel, one can see a clear boundary between the incoming stream (with temperature \(T\simeq 10^{5}\) K) and the outgoing stream (with \(T\gtrsim 3\times 10^{5}\) K). At this boundary, the outgoing stream is deflected towards the SMBH.

The two panels of Figure 9 also reveal that, just as found in Shiokawa et al. (2015), the apocenter shock splits into two (called shocks 2a and 2b by Shiokawa et al.). Shock 2a occurs where the outgoing stream encounters the incoming stream, while shock 2b (visible in these figures as the surface on which the outgoing gas temperature rises from \(\simeq 3\times 10^{5}\) K to \(\simeq 1\times 10^{6}\) K) occurs where one portion of the outgoing gas catches up with another portion that has been decelerated by gravity.

In these shocks, orbital kinetic energy is converted into thermal energy. Evaluating this rate in units of \(E_{\rm circ}\) and \(t_{0}\) gives a measure of the circularization "efficiency": \[\eta\equiv\frac{|dU_{\rm gas+rad}/dt|}{M_{\rm gas}E_{\rm circ}/t_{0}}. \tag{8}\] Here \(U_{\rm gas+rad}\) is the total thermal (gas + radiation) energy; from \(t\simeq 0.5t_{0}\) onward, its rate of change is roughly constant at \(\simeq 1.4\times 10^{44}\) erg s\({}^{-1}\), integrating to a total thermal energy \(\simeq 2.2\times 10^{50}\) erg at the end of the simulation. \(M_{\rm gas}\) is the total gas mass in the domain at \(r<10^{4}r_{\rm g}\). Conveniently, the mass within this region stays very nearly constant over the duration of the simulation (see the top panel of Figure 5), so the total rate of thermal energy creation is very close to directly proportional to \(\eta\). This "circularization efficiency" as a function of time is shown in Figure 11 (see Footnote 2). \(\eta\) rises very rapidly during the time from \(t\simeq 0.5t_{0}\) to \(t\simeq t_{0}\), quickly reaching a maximum \(\sim 0.03\). However, from \(t\simeq t_{0}\) to the end of the simulation, its value remains very nearly flat. If it were to stay at that level until circularization were complete, the process would take \(\simeq 30t_{0}\).
Footnote 2: Note that this definition of \(\eta\) differs from the one used by Steinberg & Stone (2022), who compare the instantaneous energy dissipation rate to the instantaneous fallback rate, \(\eta^{\prime}=|dU_{\rm gas+rad}/dt|(\dot{M}_{\rm fb}E_{\rm circ})^{-1}\).

### Radial motion

The outer bound of the region in which bound debris is found expands quasi-spherically due to the combined effects of the radiation pressure gradient built by shock heating and deflection caused by stream-stream collisions. Figure 6 illustrates the azimuthally integrated density at four different times, \(t/t_{0}=0.5\), \(1\), \(2\) and \(3\). At \(t=0.5t_{0}\), the outgoing gas that had been heated by the nozzle shock forms a vertically thick density structure within \(3000r_{\rm g}\). Later on, at \(t>0.5t_{0}\), the apocenter shock contributes further to the expansion of the gas. At \(t\gtrsim 2t_{0}\), most of the mass that had been pushed outward starts to fall back towards the SMBH. However, in the outermost \(\lesssim 1\%\) of the flow, i.e., radii \(7000-10000r_{\rm g}\), the gas continues moving outward with a speed of \(0.005-0.01c\simeq 1500-3000\) km/s even at late times.

Figure 10: The mass-weighted average of the specific orbital energy of debris (_top-left_), angular momentum (_top-right_), eccentricity (_bottom-left_) and aspect ratio (_bottom-right_) for given \(r\) at \(t/t_{0}=0.5\), 1, 2 and 3. The dashed grey vertical line in all panels indicates the pericenter distance of the original stellar orbit. The dotted grey horizontal lines in some panels show different quantities: \(E=\Delta\epsilon\) (_top-left_), initial angular momentum (_top-right_), and \(R_{\star}/r_{\rm p}\) (_bottom-right_). The diagonal black line in the _top-left_ panel shows a power-law of \(r^{-1}\).

To test whether this is an incipient wind, we define unbound gas by the total energy criterion \(E+E_{\rm therm}>0\), where \(E\) is the orbital (kinetic and gravitational) energy and \(E_{\rm therm}\) is the thermal energy. _We find almost no matter that has been made unbound after the initial disruption at any time during our simulation (i.e., up to \(3t_{0}\))_. Figure 12 depicts \(E+E_{\rm therm}\) at \(\phi=0^{\circ}\) (near the nozzle shock) and \(180^{\circ}\) (near the apocenter shock) at \(t\simeq 3t_{0}\). As already noted, although essentially all the mass is bound, its specific binding energy is small. It is therefore not straightforward to predict the final fate of the expanding envelope based on the energy distribution measured at a specific time: energy is readily transferred from one part of the system to another, or from one form of energy to another. However, because the fallback rate declines beyond \(3t_{0}\), the major energy source at later times would be effectively the interactions of gas in the accretion flow that has formed. Hence, the energy distribution in the outer envelope is unlikely to evolve much over time. In addition, because we allow for no radiative losses, the thermal energy content measured in the simulation data, particularly in the outer layers, is an upper bound to the actual value. This material's most likely long-term evolution is therefore a gradual deceleration followed by an eventual fallback.

Moving gas carries energy, and we quantify its transport with the mechanical luminosity \[L_{\rm mech}(r)=\int_{\Omega}d\Omega\,r^{2}\rho v^{r}(E+E_{\rm therm}). \tag{9}\]
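A discretized version of Eq. (9), evaluated cell-by-cell on a single spherical shell, might look like the following sketch (the array names are placeholders for simulation output, not Harm3d variables):

```python
import numpy as np

def mechanical_luminosity(r, theta, phi, rho, v_r, E_orb, E_therm):
    """Discretized form of Eq. (9) on one spherical shell of radius r:
    L_mech(r) = sum over cells of dOmega * r^2 * rho * v^r * (E + E_therm).
    theta and phi are 1D cell-center grids (radians); rho, v_r, E_orb and
    E_therm are 2D (theta x phi) arrays on that shell, in cgs units."""
    dtheta = np.gradient(theta)
    dphi = np.gradient(phi)
    dOmega = np.outer(np.sin(theta) * dtheta, dphi)   # per-cell solid angle
    return np.sum(dOmega * r**2 * rho * v_r * (E_orb + E_therm))
```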
We find (as shown in Fig. 13) that the net \(L_{\rm mech}\) integrated over spherical shells is nearly always positive for \(t\gtrsim t_{0}\) and is super-Eddington. The predominantly negative slope in \(L_{\rm mech}(r)\) at \(r\lesssim 10^{3}r_{\rm g}\) and \(r\gtrsim 5000r_{\rm g}\) indicates that these regions are gaining energy, while the relatively constant mechanical luminosity at \(10^{3}r_{\rm g}\lesssim r\lesssim 5\times 10^{3}r_{\rm g}\) shows relatively insignificant energy exchange in that range of radii. In interpreting this radial flow of mechanical luminosity, it is important to note that it is due to a mix of outwardly-moving unbound matter and inwardly-moving bound matter. Both signs of radial velocity are represented on almost every spherical shell; in fact, the mass-weighted mean radial velocity is generally _inward_ with magnitude \(\sim 300-1000\) km s\({}^{-1}\). Thus, the regions gaining energy do so in large part, but not exclusively, by losing strongly bound mass.

Figure 12: The distribution of the total energy \(E+E_{\rm therm}\), normalized by the local gravitational potential \(E_{\rm grav}=-GM_{\bullet}/r\), at \(\phi=0^{\circ}\) (_left_, near the nozzle shock) and \(180^{\circ}\) (_right_, near the apocenter shock) at \(t=3t_{0}\).

Figure 13: Radial total energy (\(E+E_{\rm therm}\)) flux integrated over a spherical shell at given \(r\), normalized by the Eddington luminosity \(L_{\rm Edd}\), at four different times. Positive values mean net positive (or negative) energy carried by gas moving radially outward (or inward).

### Matter loss through inner radial boundary

Figure 14 shows the rate at which mass falls through the inner radial boundary at \(40r_{\rm g}\) and leaves the computational domain. This rate rises rapidly from \(t=0\) to \(t=t_{0}\), and from then until \(t=3t_{0}\) fluctuates about a nearly-constant mean value \(\simeq 0.035\dot{M}_{0}\). The total lost mass up to \(t\simeq 3t_{0}\) is \(\lesssim 0.025M_{*}\). This is a factor \(\sim 3\) smaller than the rate, at \(t=3t_{0}\), of mass-loss through a similarly-placed inner cut-out in the simulation of Shiokawa et al. (2015).

Figure 14: The rate at which gas is expelled through the inner radial boundary at \(r=40r_{\rm g}\), divided by \(\dot{M}_{0}\), as a function of \(t/t_{0}\).

Although the simulation provides no information about what happens to this gas after it passes within \(r=40r_{\rm g}\), we can make certain informed speculations. By computing its mass-weighted mean specific energy and angular momentum, we find that its mean eccentricity is \(\simeq 0.7-0.8\) and does not evolve over time. Its mean pericenter and apocenter distances are \(10-15r_{\rm g}\) and \(80-120r_{\rm g}\), respectively. These values imply that the mass lost through the boundary would not accrete onto the SMBH immediately. In fact, if it follows such an orbit, it should re-emerge from the inner cutout. If it did so without suffering any dissipation, it would erase the positive energy flux we find at the inner boundary, which is due to bound matter leaving the computational domain. On the other hand, if it suffered the maximum amount of dissipation consistent with an unchanging angular momentum and settled onto circular orbits inside \(40r_{g}\), it might release as much as \(\simeq 10^{51}\) erg, more than enough to unbind the rest of the bound debris, whose binding energy is only \(\sim 3\times 10^{50}\) erg. However, this estimate of dissipation is an upper bound, and likely a very loose one, with the actual dissipation far smaller. The location of the shocks suffered by this gas would be much nearer its apocenter at \(\sim 100r_{g}\) than its pericenter, reducing the kinetic energy available for dissipation by an order of magnitude or more. Oblique shock geometry, as seen in the directly-simulated shocks, sharply diminishes how much kinetic energy is dissipated per shock passage. In addition, if matter does settle onto weakly eccentric orbits with semimajor axes \(<40r_{\rm g}\), it will block further inflow, thereby decreasing the net flow across the \(r=40r_{\rm g}\) surface. Whatever accumulation of gas occurs within \(\lesssim 100r_{g}\) is also unlikely to have much effect on the bulk of the bound debris because, as visible in Figure 8, the radial range at which the bulk of the returning gas passes through the nozzle shock moves steadily outward over time, reaching \(\gtrsim 500r_{g}\) by \(t=3t_{0}\).

### Radiation

To infer the bolometric luminosity of this event, we post-process the simulation results. We first identify the thermalization photosphere with the surface where \(\sqrt{\tau_{\rm T}\tau_{\rm ff}}\simeq 1\). Here, \(\tau_{\rm T}\) (\(\tau_{\rm ff}\)) is the Thomson (absorption) optical depth integrated radially inward from the outer \(r\) boundary. The absorption cross section is calculated using an OPAL table for solar metallicity. The _upper_ panels of Figure 7 show that the photosphere expands quasi-spherically, which is expected from the radial expansion of the outer debris. At \(t=t_{0}\), the photosphere is quasi-spherical and located at \(r\simeq 4000-5000r_{\rm g}\). It expands to \(9000-10000r_{\rm g}\) at \(t=2t_{0}\) and to \(\simeq 12000r_{\rm g}\) at \(t=3t_{0}\). Given the eccentric orbit of the debris, the radius of the photosphere depends on \(\phi\) at a quantitative level, even while its overall shape is qualitatively round. To demonstrate the \(\phi\)-dependence, we show in the _bottom_ panels of Figure 7 the density distribution and the photosphere at four different azimuthal angles, \(\phi=0\), \(90^{\circ}\), \(180^{\circ}\) and \(270^{\circ}\) at \(t=3t_{0}\). We then estimate the cooling time \(t_{\rm cool}\) at all locations inside the photosphere as \[t_{\rm cool}(r)=\frac{h_{\rho}\tau(r)}{c(1+u_{\rm gas}/u_{\rm rad})}, \tag{10}\] where \(h_{\rho}\) is the first-moment density scale height of the gas along a radial path, \(\tau\) is the optical depth (radially integrated) to \(r\), and \(u_{\rm gas}/u_{\rm rad}\) is the ratio of the local internal energy density to the radiation energy (a ratio that is often only slightly greater than unity). We then estimate the luminosity by integrating the energy escape rate over the volume within the photosphere, but including only those locations for which \(t_{\rm cool}\) is smaller than the elapsed time in the simulation. This condition accounts for the fact that in order to leave the debris by time \(t\), the cooling time from the light's point of origin must be less than \(t\). The resulting expression is \[L=\int_{0}^{2\pi}\int_{\theta_{e}}^{\pi-\theta_{e}}\int_{r(t_{\rm cool}<t)}^{r(\tau=1)}\frac{aT^{4}}{t_{\rm cool}}\,r^{2}\sin\theta\,dr\,d\theta\,d\phi, \tag{11}\] where \(a\) is the radiation constant.
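A rough, one-dimensional sketch of this post-processing is given below; it treats a single radial ray under an assumption of spherical symmetry (whereas Eq. 11 integrates over angle), uses placeholder profiles and opacities, and lets the Thomson depth stand in for the \(\tau\) of Eq. (10).

```python
import numpy as np

a_rad = 7.5657e-15    # erg cm^-3 K^-4, radiation constant
c_light = 2.998e10    # cm/s

def ray_luminosity(r, rho, T, kappa_T, kappa_abs, h_rho, u_gas_over_u_rad, t_elapsed):
    """1D sketch: locate the thermalization photosphere, apply the cooling-time
    cut of Eq. (10), and sum the emissivity a*T^4/t_cool as in Eq. (11).
    All inputs are placeholder radial profiles (cgs), not simulation variables."""
    dr = np.gradient(r)
    # optical depths integrated radially inward from the outer boundary
    tau_T  = np.cumsum((rho * kappa_T   * dr)[::-1])[::-1]
    tau_ff = np.cumsum((rho * kappa_abs * dr)[::-1])[::-1]
    below_photosphere = np.sqrt(tau_T * tau_ff) >= 1.0
    # Eq. (10), with the Thomson depth used here as a stand-in for tau(r)
    t_cool = h_rho * tau_T / (c_light * (1.0 + u_gas_over_u_rad))
    emitting = below_photosphere & (t_cool < t_elapsed)
    # Eq. (11) reduced to 1D: emissivity over shell volumes 4*pi*r^2*dr
    return np.sum(a_rad * T[emitting]**4 / t_cool[emitting]
                  * 4.0 * np.pi * r[emitting]**2 * dr[emitting])
```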
The local effective temperature at each individual cell near the photosphere is then calculated as \(T_{\rm ph}=[(dL/dA)/\sigma]^{1/4}\), where \(A\) is the surface area of the photosphere and \(\sigma\) is the Stefan-Boltzmann constant. We estimate that the peak luminosity is \(\simeq 10^{44}\) erg/s \(\simeq 10L_{\rm Edd}\), which occurs at \(t\simeq t_{0}\). This is roughly the mean rate of thermal energy creation during the simulation. The photospheric temperature distribution at \(t=0.5t_{0}\) can be described as nearly flat within the range \(\sim 10^{4}\) K \(\lesssim T\lesssim 2\times 10^{5}\) K, as shown in the _left_ panel of Figure 15. At \(t\gtrsim t_{0}\), the distribution becomes narrower: the distribution at \(t\gtrsim 2t_{0}\) has a single peak at \(T\simeq(5-6)\times 10^{4}\) K. In the right panel of Figure 15 we show the photospheric temperature distribution as a function of observer direction at \(t\simeq 3t_{0}\). The temperature is \(5-6\times 10^{4}\) K over almost the entire photosphere except for a noticeably low-temperature spot at \(\phi\simeq\pi\) and \(\theta=0.5\pi\), corresponding to the low-\(T\) incoming stream.

## 4 Discussion

### Circularization - fast or slow?

The pace of "circularization" has long played a central role in understanding how TDE flares are powered. If it is rapid, i.e., takes place over a time \(\lesssim t_{0}\), the debris joins a small (\(r\lesssim r_{\rm p}\)) accretion disk as soon as it first returns. In addition, accretion takes place on a timescale short compared to \(t_{0}\) because the orbital period on this scale is shorter than \(t_{0}\) by a factor \((M_{\star}/M_{\bullet})^{1/2}\sim 10^{-3}\). Even after waiting \(\sim 10\) orbital periods for MRI turbulence to saturate and then consuming many more orbital periods to flow inward by magnetic stresses, the total inflow time is still short compared to \(t_{0}\). The dissipation rate at the time of peak mass-return would then be strongly super-Eddington.

The result of our simulation, however, is that "circularization" is actually very slow. We find that the returning debris forms a large cloud that stretches all the way from the pericenter of the original stellar orbit to the apocenter of the most-bound debris, a dynamic range \(\sim 30-100\). Throughout the first \(3t_{0}\) after disruption, only a small fraction of the debris resides within the pericenter. The mass-weighted mean eccentricity falls from \(\simeq 0.9\) to \(\simeq 0.4-0.6\) by \(t\gtrsim 2t_{0}\), but does not decrease further from that time until at least \(3t_{0}\). Thus, by this late time the debris has neither achieved a circular orbit nor been compressed within \(r\sim r_{\rm p}\).

Such slow circularization is consistent with the low energy dissipation rate. The thermal energy within the system is very small compared with \(E_{\rm circ}\), the energy that must be removed from the bound debris' orbital energy in order to fully circularize it. Similarly, the circularization efficiency parameter \(\eta\) suggests that several dozen \(t_{0}\) are required in order to dissipate \(E_{\rm circ}\) of energy (see Figure 11). Thus, we may conclude that little circularization is accomplished during the time in which most of the debris mass returns to the BH. Our conclusions in this matter agree with earlier findings of Shiokawa et al. (2015), who used a somewhat cruder computational scheme and less realistic conditions (these authors considered a disruption of a white dwarf of 0.64 \(M_{\odot}\) by a BH of 500 \(M_{\odot}\)).
On the other hand, they differ with those of Steinberg and Stone (2022), who analyzed the "circularization efficiency" in terms of the heating rate per returning mass rather than our definition, the heating rate per _returned_ mass over a time \(t_{0}\). On the basis of tracking this definition of circularization efficiency up to \(t=t_{0}\), they argued that it was growing exponentially on a timescale \(\sim t_{0}\), so that full circularization might be achieved quickly. Interestingly, our definition of efficiency also grows rapidly with time during the first \(t_{0}\); in this respect we agree with Steinberg and Stone (2022). However, we also find that it flattens out shortly after \(t_{0}\). Thus, one possible explanation of the contrast in our conclusions about the magnitude of energy dissipation is simply that our simulation ran longer than theirs when measured in \(t_{0}\) units. It is also possible that some of the difference in the results could be attributed to differences in our physical assumptions. Steinberg and Stone (2022) used a spherical harmonic oscillator potential at \(r<30~{}r_{\rm g}\) (private conversation with Elad Steinberg) and a Paczynski-Wiita potential at larger radii, whereas we used a Schwarzschild spacetime with a cut-out at \(40r_{\rm g}\); they described radiation transport by a flux-limited diffusion scheme, whereas we included radiation only as a contribution (often the dominant one) to the pressure. On balance, though, because the gravity descriptions used are not very different on the relevant lengthscales and the long cooling times in the system severely limit radiative diffusion, these contrasts are unlikely to explain this disagreement. Lastly, it is possible that the difference in parameters (our \(M_{\rm BH}=10^{5}M_{\odot}\) and \(M_{*}=3M_{\odot}\) vs. their \(M_{\rm BH}=10^{6}M_{\odot}\) and \(M_{*}=1M_{\odot}\)) may also play a role. Further simulations will be necessary in order to test this possibility. ### Energy Dissipation: Shocks vs. Accretion The physical assumptions in our simulation restrict the creation of thermal energy to two mechanisms: shocks and compressive work done within the fluid. There is no energy release due to classical accretion because our equations contain neither MHD turbulence nor phenomenological viscosity. Nonetheless, we have demonstrated that shocks and compression can, without these other processes, generate enough energy during a few \(t_{0}\) to power the observed luminosity of TDEs. We estimate a photon luminosity during this period of \(\sim 10^{44}~{}{\rm erg~{}s^{-1}}\), and all of this energy was generated by shocks and compressive work. As discussed in the previous subsection, we have demonstrated the _absence_ of orbital energy loss that is a prerequisite for forming a classical accretion flow. ### Outflow Figure 15: (_Left_) The photosphere temperature distribution \(dL/dT\) at four different times \(t/t_{0}=0.5\), 1, 2, and 3, and (_right_) the angular distribution of the temperature at \(t=3t_{0}\). A third interesting finding is that we do not find a significant unbound outflow emerging from the bound debris. Very nearly all the bound material that has returned to the vicinity of the SMBH remains bound by the end of our simulations. Although we do see outward motion, its slow speed indicates that the material remains bound (see Figure 12). It should therefore eventually slow down and fall back. 
This result places an even stronger upper bound on the dissipated energy than the earlier result that there was too little dissipation to circularize the matter, as the specific energy needed to unbind the debris is significantly smaller than that needed to circularize it around the original pericenter. Whereas the circularization energy is \(\sim 3\times 10^{51}\) erg, the binding energy is only \(\sim 3\times 10^{50}\) erg. That almost no initially bound debris is rendered unbound is consistent with observational limits on outflows from both radio and optical TDEs (Matsumoto and Piran, 2021). This conclusion, which is contrary to a number of predictions (e.g., Jiang et al., 2016; Bonnerot et al., 2021; Huang et al., 2023), also casts some doubt on the possibility (Metzger and Stone, 2016) that the kinetic energy of an outflow is the solution to the "inverse energy crisis" mentioned earlier. When the source of heating is shocks, we find negligible transport of energy to infinity associated with outflows. Interestingly, although Steinberg and Stone (2022) do find an unbound outflow, its mechanical luminosity is only \(\simeq 1.6\times 10^{42}\) erg s\({}^{-1}\) if the outflow velocity they quote, 7500 km s\({}^{-1}\), is its velocity at infinity. This is such a small fraction of the heating rate that even this sort of wind does not play a significant role in the energy budget. Moreover, even if _all_ the mass lost through our inner boundary were quickly accreted in a radiatively efficient manner, as we have already estimated, the associated heat produced would be only a factor of 4 - 5 greater than the thermal energy generated by shocks in the first \(3t_{0}\) after the disruption. In this sense, we have also placed a strong limit on the ability of a wind dependent upon accretion energy to carry away a large quantity of energy. ### The ultimate fate of the bound debris Our simulation ends at \(3t_{0}\) with nearly all the bound debris \(\sim 10^{3}-10^{4}r_{\rm g}\) from the BH, spread over a large eccentric cloud. The question naturally arises: what happens next? Extrapolating from their qualitatively similar results, Shiokawa et al. (2015) suggested that, after the usual \(\sim 10\) orbital-period time necessary for saturation of MHD turbulence driven by the magnetorotational instability (MRI), the gas would accrete in more or less the fashion of circular accretion disks. Since that work, it has been shown (Chan et al., 2018, 2022) that, indeed, the MRI is a genuine exponentially-growing instability in eccentric disks and, in its non-linear development, creates internal magnetic stresses comparable to those seen in circular disks. However, its outward transport of angular momentum may, in the context of eccentric disks, cause the innermost matter to grow in eccentricity while outer matter, the recipient of the angular momentum removed from the inner matter, becomes more circular (Chan et al., 2022). If this is the generic result of MRI-driven turbulent stresses in an eccentric disk, accretion might be radiatively inefficient, as matter can plunge directly into the BH if it has sufficiently small angular momentum (Svirski et al., 2017). The condition for this to happen is for the angular momentum transport to be accompanied by very little orbital energy loss. It is then possible for fluid elements of very low angular momentum to fall ballistically into the SMBH after having radiated only a small amount of energy. 
In this case, the system will dim rapidly after the thermal energy created by shocks has diffused out in radiation. However, it remains to be determined whether this is, in fact, the situation in TDE eccentric accretion flows. If, instead, the work done by torques associated with angular momentum transport is substantial, a compact, more nearly circular, accretion disk eventually forms. This disk will then behave much more like a conventional accretion flow, radiating soft X-rays until most of the disk mass has been consumed. If the energy lost per unit accreted mass comes anywhere near the \(\sim 0.1c^{2}\) of radiatively-efficient accretion, the total energy radiated over this prolonged accretion phase could be quite large: \(0.1M_{\star}c^{2}\approx 10^{53}\) erg. However, sufficiently long accretion timescales might keep the luminosity relatively low. There is some observational evidence for such radiation on multi-year timescales, both in X-rays (e.g., Jonker et al., 2020; Kajava et al., 2020) and UV (e.g., van Velzen et al., 2021; Hammerstein et al., 2023). In these long-term observations, the luminosity declines gradually enough (\(\propto t^{-1}\)) to make the total energy radiated logarithmically divergent.

A related question is posed by the matter that passed through our inner radial boundary. To the extent that some portion of it does dissipate enough energy to achieve a near-ISCO orbit, there is the possibility of significant energy release in excess of what was seen in our simulation. In fact, in order to generate soft X-ray luminosities comparable to those often seen (\(\sim 10^{44}\) erg s\({}^{-1}\) at peak), all that is required is a mass accretion rate \(\sim 3\times 10^{-3}M_{\star}/t_{0}\). Thus, if \(\sim 0.3\) of the matter passing through our inner boundary were able to accrete onto the BH, it might be able to account for the X-ray luminosity sometimes seen, given an optically thin path to infinity. For the parameters of our simulation, there appears to be little or no solid angle through which such a path exists (see Figure 7), but, as shown by Ryu et al. (2020), the ratio \(t_{\rm cool}/t_{0}\) falls to \(\lesssim O(1)\) when \(M_{\bullet}\gtrsim 10^{6}M_{\odot}\). Consequently, radiative cooling might make the flow geometrically thinner for larger \(M_{\bullet}\) events, permitting X-rays emitted near the center to emerge during the time of the optical/UV flare. Alternatively, for those cases that, like our simulation, have relatively long cooling times, X-ray emission may become visible only after a significant delay relative to the optical/UV light, a delay that has been observed in several TDEs (Gezari et al., 2017; Kajava et al., 2020; Hinkle et al., 2021; Goodwin et al., 2022).

### Comparison with Ryu et al. (2020)

Ryu et al. (2020) introduced a parameter-inference method TDEmass for \(M_{\bullet}\) and \(M_{\star}\) built on the assumption that optical TDEs are powered by the energy dissipated by the apocenter shock. In this method, one assumes that the peak luminosity and temperature occur at \(t\simeq 1.5t_{0}\) when the most-bound debris collides with the incoming stream at the apocenter. Using our numerical results to determine the two parameters of TDEmass (setting \(c_{1}\), the ratio of the photospheric radius to the apocenter distance, to 1.2 and the solid angle of the photosphere to \(4\pi\)), we find that the luminosity and temperature at the peak of the bolometric lightcurve would be \(3\times 10^{44}\) erg/s and 70000 K (see Equations
1, 2, 6 and 9 of Ryu et al., 2020). These values can be compared with the estimates derived from our cooling time method, \(L\approx 10^{44}\) erg s\({}^{-1}\) and \(T\approx 60000\) K, measured at \(t\simeq 1.5t_{0}\). The contrast in luminosity may be a consequence of an assumption made in the method of Ryu et al. (2020): that the heating due to shocks is radiated promptly. Although this is a reasonable approximation for \(M_{\bullet}\gtrsim 10^{6}M_{\odot}\), our simulation has shown that when \(M_{\bullet}\) is as small as \(\sim 10^{5}M_{\odot}\), cooling is significantly retarded (in fact, Ryu et al., 2020) pointed out that \(t_{\rm cool}/t_{0}\propto\Xi^{5/2}M_{\bullet}^{-7/6}M_{\star}^{4/9}\)). Although our simulation suggests that this method may require some refinement in the range of small SMBH masses, overall, whether with or without the corrections suggested by the detailed numerical simulation, the peak luminosity is in the range of optical/UV bright TDEs. The temperature estimated from the simulation is larger by only a factor of 1.2, which is reasonable given the approximate treatment of the radiation in our scheme. ## 5 Conclusions Following the energy often provides a well-marked path toward understanding the major elements of a physical event. It is especially useful for TDEs because one might define their central question as "How does matter whose initial specific orbital energy is \(\sim 10^{-4}c^{2}\) dissipate enough energy to both power the observed radiation and then, in the long-run, fall into the black hole?" This question can be made more specific by pointing out certain milestones in energy. In a typical TDE flare, \(\sim 3\times 10^{50}\) erg is radiated during its brightest period, although in a number of cases an order of magnitude more is radiated over multiple year timescales (e.g., van Velzen et al., 2021; Hammerstein et al., 2023). The immediately post-disruption binding energy of the bound gas in the simulation described here is very similar to this number, \(2\times 10^{50}\) erg. The energy required to circularize all the bound gas is 1.5 dex larger, \(7.5\times 10^{51}\) erg. Lastly, the energy that might be liberated through conventional relativistic accretion of all the bound material is \(\sim 3\times 10^{53}\) erg. Comparing the results of our simulation--\(1.5\times 10^{50}\) erg radiated over a time \(3t_{0}\) long and final gas binding energy less than a factor of 2 greater than in the initial state (\(3\times 10^{50}\) erg)--to these milestones points to a number of strong implications. First, and most importantly, the radiation we estimate as arising from our simulation is very close to the typical radiated energy during the brightest portion of the flare. In other words, the hydrodynamics we have computed, in which shocks dissipate orbital energy into heat, succeed in matching the most important quantity describing TDE flares. Second, over this period the binding energy of the debris does not change appreciably. It immediately follows from the virial theorem that the scale of the region occupied by the debris likewise does not change appreciably. The only modification that might be made to this conclusion is that radiation losses would increase the binding energy by a factor of \(\sim 1.5-2\). The area of the photosphere is determined by the scale of the region containing the bound debris. Third, swift "circularization", that is, confinement of the bound debris to a circular disk with outer radius \(\sim r_{\rm p}\), does not happen. 
This process requires the bulk of the debris to increase its binding energy by a factor \(\sim 30\); this did not happen. Fourth, radiatively efficient accretion of most of the debris mass onto the black hole certainly did not happen. If this had occurred, the mass remaining on the grid would be substantially smaller, and the energy released would have rendered the remaining mass strongly unbound, as it corresponds to a total dissipated energy \(\sim 10^{3}\times\) larger than seen. Lastly, we have also found that, contrary to some expectations, essentially no debris gas that was bound immediately after the disruption was rendered unbound by shock dynamics. ## Acknowledgements We thank Elad Steinberg and Nick Stone for helpful conversations. We also thank Suvi Gezari for informing us about delayed X-ray flares in TDEs. This research project was conducted using computational resources (and/or scientific computing services) at both the Texas Advanced Computing Center and the Max-Planck Computing & Data Facility. At TACC, we used Frontera under allocations PHY-20010 and AST-20021. In Germany, the simulations were performed on the national supercomputer Hawk at the High Performance Computing Center Stuttgart (HLRS) under the grant number 44232. TP is supported by ERC grant MultiJets. JK is partially supported by NSF grants AST-2009260 and PHY-2110339. matplotlib (Hunter, 2007); MESA(Paxton et al., 2011); Harm3d(Noble et al., 2009).
2302.08868
Richardson Approach or Direct Methods? What to Apply in the Ill-Conditioned Least Squares Problem
This report shows on real data that direct methods such as LDL decomposition and Gaussian elimination for solving linear systems with ill-conditioned matrices provide inaccurate results due to divisions by very small numbers, which in turn results in peaking phenomena and large estimation errors. Richardson iteration provides accurate results without peaking phenomena since division by small numbers is absent in the Richardson approach. In addition, two preconditioners are considered and compared in the Richardson iteration: 1) the simplest and robust preconditioner based on the maximum row sum matrix norm and 2) the optimal one based on calculation of the eigenvalues. It is shown that the simplest preconditioner is more robust in the ill-conditioned case and therefore it is recommended for many applications.
Alexander Stotsky
2023-02-17T13:31:22Z
http://arxiv.org/abs/2302.08868v1
# Richardson Approach or Direct Methods? What to Apply in the Ill-Conditioned Least Squares Problem

###### Abstract

This report shows on real data that direct methods such as LDL decomposition and Gaussian elimination for solving linear systems with ill-conditioned matrices provide inaccurate results due to divisions by very small numbers, which in turn results in peaking phenomena and large estimation errors. Richardson iteration provides accurate results without peaking phenomena since division by small numbers is absent in the Richardson approach. In addition, two preconditioners are considered and compared in the Richardson iteration: 1) the simplest and robust preconditioner based on the maximum row sum matrix norm and 2) the optimal one based on calculation of the eigenvalues. It is shown that the simplest preconditioner is more robust in the ill-conditioned case and therefore it is recommended for many applications.

Estimation of Ill-Conditioned System in the Moving Window, Recursive Matrix Inversion Based on Rank Two Update, Divisions by Very Small Numbers and Large Estimation Errors in Ill-Conditioned Case, Wave Form Distortion Monitoring

## 1 Introduction

The solution of a system of linear equations with an _SPD (Symmetric and Positive Definite)_ ill-conditioned matrix is required in many application areas such as control, system identification, signal processing, and statistics, as well as in many big data applications. The matrix \(A\) is SPD and ill-conditioned in, for example: (a) the information matrix for systems with harmonic regressor and short window sizes, Stotsky (2015), Stotsky (2019); (b) the Gram matrix, due to the squaring of the condition number in the least squares method, Bjorck (1996), Ljung (1999); (c) the mass matrix in the finite element method, Sticko (2016), and the mass matrix (lumped mass matrix) for mechanical systems with singular perturbations, Moustafa (1990); (d) the state matrix for systems of linear equations, Hasan et al. (2011); and in many other applications. Moreover, many mechanical and electrical systems are singularly perturbed (have modes with different time-scales), Kokotovic (1984), and are considered stiff and ill-conditioned systems, which potentially extends the application areas of this model. Ill-conditioning implies robustness problems (sensitivity to numerical calculations) and imposes additional requirements on the accuracy of the solution of algebraic equations. These accuracy requirements motivate the application of the Richardson iteration, which is driven by the residual error and has a filtering (averaging) property. The residual error is smoothed and remains bounded for a sufficiently large step number in this iteration, where the bound depends on inaccuracies, providing the best possible solution in finite digit calculations. The performance of Richardson algorithms strongly depends on the preconditioning, and therefore development of new computationally efficient preconditioners is required. To this end, the movement of the window is presented as a rank two update of the information matrix.
Indeed, the least squares estimation of the frequency content of an oscillating signal in a window of size \(w\) that moves in time can be presented in the following form: \[A_{k}\theta_{k}=b_{k},\ \ b_{k}=\sum_{j=k-(w-1)}^{j=k}\varphi_{j}\ y_{j}=b_{k-1}+d_{k} \tag{1}\] \[b_{k-1}=\sum_{j=k-w}^{j=k-1}\varphi_{j}\ y_{j},\ \ d_{k}=\varphi_{k}\ y_{k}-\varphi_{k-w}\ y_{k-w} \tag{2}\] \[A_{k}=\sum_{j=k-(w-1)}^{j=k}\varphi_{j}\ \varphi_{j}^{T}=A_{k-1}+R_{k} \tag{3}\] \[A_{k-1}=\sum_{j=k-w}^{j=k-1}\varphi_{j}\ \varphi_{j}^{T},\ \ R_{k}=\varphi_{k}\ \varphi_{k}^{T}-\varphi_{k-w}\ \varphi_{k-w}^{T} \tag{4}\] \[\varphi_{k}^{T}=[\cos(q_{0}k)\ \sin(q_{0}k)\ ...\ \cos(q_{h}k)\ \sin(q_{h}k)] \tag{5}\] where the oscillating signal \(y_{k}\) is approximated using the model \(\hat{y}_{k}=\varphi_{k}^{T}\theta_{k}\) with the harmonic regressor (5), where \(q_{0},\ldots,q_{h}\) are the frequencies. The parameter vector \(\theta_{k}\) should be calculated in each step with the desired accuracy as the solution of the algebraic equation (1), which is associated with minimization of the error \(\sum_{j=k-(w-1)}^{j=k}(y_{j}-\hat{y}_{j})^{2}\), where \(y_{k}=\varphi_{k}^{T}\theta_{k}+\xi_{k}\), \(\theta_{k}\) is the vector of unknown parameters, and \(\xi_{k}\) is the noise. The information matrix \(A_{k}\) is defined in (3) as the sum of rank one matrices and as the rank two update \(R_{k}\) of the matrix \(A_{k-1}\), \(k\geq w+1\). The rank two update is associated with the movement of the window, where the new data \(\varphi_{k},y_{k}\) enter the window and the data \(\varphi_{k-w},y_{k-w}\) leave the window in step \(k\). Ill-conditioning of the matrix \(A_{k}\) implies robustness problems (sensitivity to numerical calculations) and imposes additional requirements on the accuracy of the solution of (1) (which are especially pronounced in finite-digit calculations), since small changes in \(b_{k}\) due to measurement, truncation, accumulation, rounding and other errors result in significant changes in \(\theta_{k}\). In addition, ill-conditioning implies slow convergence of the iterative procedures.

## 2 Which Algorithms?

_Richardson Iterations._ The Richardson framework provides a simple assessment and quantification of the trade-off between accuracy and computational burden, associated with the concept of approximate computing, and can be chosen as the most promising solution for ill-conditioned systems. The Richardson algorithms can be written in the following form: \[\vartheta_{i}=\vartheta_{i-1}-G_{i}\ \{A_{k}\vartheta_{i-1}-b_{k}\},\ \ F_{i}=I-G_{i}A_{k},\ \ A_{k}\theta_{k}=b_{k} \tag{6}\] where \(\vartheta_{i}\) is the vector that estimates the unknown parameters, \(\theta_{k}=\vartheta_{i}\), and \(G_{i}\) is associated with iterative matrix inversion algorithms, which minimize the inversion error \(F_{i}\), where \(I\) is the identity matrix. The norm of the residual error, \(\|A_{k}\vartheta_{i}-b_{k}\|\leq\delta\), where \(\delta>0\) is a given bound, can be used for a proper choice of the number of iterative steps, providing a pre-specified upper bound on the accuracy according to the concept of approximate computing.

_Direct Methods._ The Gaussian elimination method is much faster and more accurate than the methods associated with explicit matrix inversion. The Gaussian method may produce residuals of the order of machine accuracy, but the solution is often not reliable due to numerical instability, MathWorks (2021), Druinski and Toledo (2012).
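The scheme in (6) admits many choices of \(G_{i}\); a minimal sketch is given below for the simplest case \(G_{i}=\alpha I\), with \(\alpha\) taken from the maximum row sum preconditioner discussed in Section 5 and the residual bound \(\delta\) used as the stopping rule. The code is illustrative only (NumPy), and the function and variable names are chosen here, not taken from the report.

```python
import numpy as np

def richardson_solve(A, b, alpha=None, delta=1e-8, max_iter=100_000):
    """Richardson iteration (Eq. 6) with the scalar preconditioner G_i = alpha*I.
    Stops when the residual norm ||A@theta - b|| falls below delta, or at max_iter."""
    if alpha is None:
        # Simplest preconditioner of Section 5: alpha = 2 / ||A||_inf (maximum row sum norm).
        alpha = 2.0 / np.linalg.norm(A, np.inf)
    theta = np.zeros(A.shape[0])
    for steps in range(1, max_iter + 1):
        residual = A @ theta - b
        if np.linalg.norm(residual) <= delta:
            break
        theta = theta - alpha * residual
    return theta, steps

# Small usage example on a classically ill-conditioned SPD matrix (5x5 Hilbert matrix).
A = np.array([[1.0 / (i + j + 1) for j in range(5)] for i in range(5)])
b = A @ np.ones(5)
theta, steps = richardson_solve(A, b, delta=1e-4)
print(steps, np.linalg.norm(A @ theta - b))
```

The report associates \(G_{i}\) more generally with iterative matrix inversion algorithms, such as the Newton-Schulz iterations mentioned in Section 3; the scalar choice above is only the simplest member of that family.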
Namely, the solution is often accompanied by peaking phenomena in the ill-conditioned case due to division by very small numbers, which is not present in the Richardson approach, where the norm of the residual error can be kept uniformly within the bound \(\delta\).

_Comparisons on Real Data._ The one-phase synchronized voltage waveform measured at the wall outlet (approximately 120 V RMS) is used for comparisons of Richardson and direct methods, Freitas et al. (2016). The sampling measurement rate is 256 points per cycle. The measured voltage wave form of a single cycle is plotted in Figure 1 with the black line. The signal was approximated using the system with harmonic regressor (5), which contains the fundamental frequency \(q_{0}=60\) Hz and four higher harmonics. The wave form is approximated in a moving window of a relatively small size. The condition number of the information matrix varies significantly as a function of the step number and has an average order of \(10^{7}\), indicating ill-conditioning. Extreme ill-conditioning was detected in several steps where the condition number reaches the order of \(10^{9}\). The Figure shows the approximation performance of different types of computational algorithms for estimation of the parameter vector. Approximation with the standard matrix inversion algorithm (matrix inversion using LDL decomposition, implemented as a standard routine in Matlab) is plotted with the blue line, the approximation with the standard routine which realises the Gaussian elimination method is plotted with the green line, and the approximation with the Richardson parameter estimation algorithm is plotted with the red dashed line. The average accuracy (residual) of the method which is based on matrix inversion is much worse than the average accuracies of the Richardson and Gaussian methods. Moreover, the matrix inversion method shows essential deterioration of the accuracy (due to division by very small numbers) at a large number of points. The Gaussian method provides very high average accuracy compared to the matrix inversion and Richardson methods. However, the accuracy deteriorates in some points and becomes worse than the accuracy of the Richardson method. The number of steps of the Richardson algorithm was optimized in each step of the moving window to guarantee the pre-specified upper bound of the accuracy, which eliminated peaking phenomena and reduced computational time. The average accuracy of the Richardson algorithm was deliberately chosen to be much worse than that of the Gaussian method for the sake of robustness and reduction of computational complexity (according to the concept of approximate computing). Such a choice made the Richardson algorithm much faster than the Gaussian method. In other words, the optimization possibility of the Richardson algorithms, associated with the trade-off between accuracy on the one side and robustness and computational time on the other side, allows the proper choice of an acceptable uniform accuracy without any deterioration and essentially reduces the computational time. Finally, Figure 1 shows that the approximation performance of the Richardson algorithm is far superior to that of the direct methods (which are not suitable for the detection of power quality events) in the ill-conditioned case.

Figure 1: Measured voltage wave form of a single cycle is plotted with the black line. The Figure shows the approximation performance of different types of computational algorithms for estimation of the parameter vector. Approximation with the standard matrix inversion algorithm (matrix inversion using LDL decomposition, implemented as a standard routine in Matlab) is plotted with the blue line, the approximation with the standard routine which realises the Gaussian elimination method is plotted with the green line, and the approximation with the Richardson parameter estimation algorithm is plotted with the red dashed line.
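The measured data of Freitas et al. (2016) is not reproduced here, but the setup can be emulated with a synthetic version of the harmonic regressor (5) over a short window; the snippet below builds the resulting ill-conditioned information matrix and compares the residuals of explicit matrix inversion and a Gaussian-elimination-type solve. The window size and frequencies are illustrative placeholders, not the values used in the report; the Richardson alternative was sketched above.

```python
import numpy as np

# Emulate an ill-conditioned information matrix: a short window and closely spaced
# normalized frequencies make the harmonic regressors nearly collinear.
rng = np.random.default_rng(1)
w, freqs = 24, [0.10, 0.101, 0.20, 0.202, 0.30]
Phi = np.array([np.concatenate([[np.cos(q * k), np.sin(q * k)] for q in freqs])
                for k in range(w)])
y = Phi @ rng.standard_normal(Phi.shape[1])
A, b = Phi.T @ Phi, Phi.T @ y
print("condition number of A:", np.linalg.cond(A))

theta_inv = np.linalg.inv(A) @ b       # explicit matrix inversion
theta_slv = np.linalg.solve(A, b)      # Gaussian-elimination-type direct solve
print("inversion residual:", np.linalg.norm(A @ theta_inv - b))
print("solve residual:    ", np.linalg.norm(A @ theta_slv - b))
```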
## 3 Recursive Parameter Calculation

_Algorithm Description._ The parameter vector in (1) can be calculated using the inverse of the information matrix, \(\theta_{k}=A_{k}^{-1}b_{k}\). Denoting \(\Gamma_{k}=A_{k}^{-1}\), the recursive update of \(\Gamma_{k}\) via \(\Gamma_{k-1}\) is derived by application of the matrix inversion lemma, Stotsky (2023), to the identity (3): \[\Gamma_{k}=\Gamma_{k-1}-U_{k}\;S^{-1}\;U_{k}^{T} \tag{7}\] where \(Q_{k}=[\varphi_{k}\;\;\varphi_{k-w}]\), \(U_{k}=\Gamma_{k-1}\;Q_{k}\), \(S=D+Q_{k}^{T}\;\Gamma_{k-1}\;Q_{k}\) and \(D=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}.\) The \(2\times 2\) matrix \(S\) remains the same in all the steps of the window of a given size \(w\) and should be calculated only once. Two forms of the parameter update \(\theta_{k}\) can be presented as follows: \[\theta_{k}=[I-U_{k}\;S^{-1}\;Q_{k}^{T}][\theta_{k-1}+\Gamma_{k-1}d_{k}] \tag{8}\] \[\theta_{k}=\Gamma_{k}b_{k} \tag{9}\] where \(I\) is the identity matrix and the form (8) is derived from (9) and (7). The algorithms are initialized as follows: \(\Gamma_{w}=A_{w}^{-1}\) and \(A_{w}\;\theta_{w}=b_{w}\). The parameter update (9) does not depend on parameters of the previous step and requires matrix vector multiplication only. The inverse matrix and the parameter update law, (7) and (8), can be calculated in two parallel loops. Both forms are quadratic complexity algorithms and faster than direct parameter calculation methods.

_Error Accumulation._ The algorithm described above can be seen as an ideal explicit recursive solution of the system (1) - (5). Unfortunately, such a solution is not robust with respect to error accumulation without corrections. The accumulation strength depends on the size of the moving window and the information matrix. Although the error accumulation is not very significant for relatively large window sizes, the deterioration of the performance may be essential for big data applications. The performance deterioration due to error accumulation is significant for ill-conditioned information matrices and, due to the large number of calculations, for a large number of harmonics (which is expected in future electric networks) and short window sizes even for well-conditioned information matrices. For the sake of improved accuracy and robustness, Richardson corrections should be introduced in (7) and (8). Notice that the Richardson framework with Newton-Schulz matrix inversion algorithms is ideally suited for these corrections, providing (after only a few iterations) two improved estimates for the next step of the algorithm.

_Changeable Window Size._ Unfortunately, the algorithm (7) and (8) should be re-initialized when the moving window changes its size, which happens quite often for the detection of both rapidly and slowly varying parameters. Initialization includes calculation of the matrix inverse \(\Gamma_{w}=A_{w}^{-1}\) and of the parameter vector which satisfies \(A_{w}\;\theta_{w}=b_{w}\), and it is computationally heavy for large scale systems. Therefore, new computationally efficient preconditioning methods should be developed for the case of frequent changes of the window size.
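A minimal sketch of one moving-window step, implementing the rank two update (7) together with the parameter update (8), is given below. It is written with NumPy; the function and variable names are chosen here for illustration, and the Richardson/Newton-Schulz corrections discussed above are deliberately omitted.

```python
import numpy as np

def window_step(Gamma_prev, theta_prev, b_prev, phi_new, y_new, phi_old, y_old):
    """One step of the moving window: data (phi_new, y_new) enter, (phi_old, y_old) leave.
    Returns the updated inverse Gamma_k (Eq. 7), parameter vector theta_k (Eq. 8) and b_k."""
    d = phi_new * y_new - phi_old * y_old                  # Eq. (2)
    b = b_prev + d                                         # Eq. (1)
    Q = np.column_stack([phi_new, phi_old])                # Q_k = [phi_k, phi_{k-w}]
    D = np.diag([1.0, -1.0])
    U = Gamma_prev @ Q                                     # U_k
    S = D + Q.T @ U                                        # 2x2 matrix S
    S_inv = np.linalg.inv(S)
    Gamma = Gamma_prev - U @ S_inv @ U.T                   # Eq. (7)
    n = Gamma.shape[0]
    theta = (np.eye(n) - U @ S_inv @ Q.T) @ (theta_prev + Gamma_prev @ d)  # Eq. (8)
    # Equivalently, theta = Gamma @ b (Eq. 9); (7) and (8) can run in two parallel loops.
    return Gamma, theta, b
```

Initialization follows the text: accumulate the first \(w\) regressors to form \(A_{w}\) and \(b_{w}\), set \(\Gamma_{w}=A_{w}^{-1}\) and \(\theta_{w}=\Gamma_{w}b_{w}\), and then apply `window_step` for \(k\geq w+1\).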
## 4 Properties of the Moving Window

_Lemma._ Consider the rank two update \(R_{k}\) of the matrix \(A_{k-1}\) defined in (3). Then the eigenvalues of the matrix \(A_{k}\) are the same for all the steps \(k\geq w+1\).

_Proof._ The rank two matrix \(R_{k}\) has two nonzero eigenvalues only, \(\pm\|R_{k}\|_{F}/\sqrt{2}\), where the norm is the Frobenius norm. To prove that the eigenvalues remain the same it is sufficient to consider the evolution of the coefficients of the characteristic polynomial of this matrix (or of the eigenvectors), Stotsky (2023).

## 5 Preconditioning Based on the Properties of the Window

_Simplest Preconditioner._ The simplest preconditioner \(\alpha=\frac{2}{\|A\|_{\infty}}\) guarantees, for an SPD matrix \(A\), that the spectral radius \(\rho\) of the iteration matrix is less than one, \(\rho(I-\alpha A)<1\), where \(\|\cdot\|_{\infty}\) is the maximum row sum matrix norm, Ben-Israel et al. (1966), Stotsky (2015).

_Optimal Preconditioner & Recursive Estimation of the Eigenvalues._ The spectral radius of the matrix \((I-\alpha A)\) attains its minimal value \((1-\lambda_{1}\alpha)\) for an SPD matrix \(A\) with the preconditioner \(\alpha=\frac{2}{\lambda_{1}+\lambda_{n}}\), where \(\lambda_{1}=\lambda_{min}(A)\) and \(\lambda_{n}=\lambda_{max}(A)\) are the minimal and maximal eigenvalues, respectively. In other words, the optimal preconditioner maps the interval which contains all eigenvalues of \(A\) onto a symmetric interval around the origin, Ben-Israel et al. (1966). The following power iteration algorithm, O'Leary et al. (1979), which requires matrix vector multiplications only, \[\hat{x}_{k}=\frac{A\;\hat{x}_{k-1}}{\|A\;\hat{x}_{k-1}\|} \tag{10}\] can be applied for estimation of the largest eigenpair \(Ax=\lambda x\). Notice that the minimal eigenvalue of \(A\) can be estimated via the maximal eigenvalue of \((\beta I-A)\), where \(\beta=\hat{\lambda}_{n}+\epsilon\), \(\hat{\lambda}_{n}\) is the estimated maximal eigenvalue of \(A\), and \(\epsilon\) is a sufficiently small positive number. The maximal eigenvalue of \((\beta I-A)\) in turn can be estimated using the same algorithm (10).

_Comparisons & Drawbacks._ The spectral radius of the matrix \((I-\alpha A)\) with the optimal preconditioner decreases only slightly compared to the simplest one. In addition, numerical stability problems may occur in the algorithm (10) for estimation of the maximal eigenvalue in the presence of roundoff errors. Estimation algorithms may require a large number of matrix vector multiplications for accurate estimation of the maximal eigenvalue, and accurate estimation of the minimal one may require even more matrix vector multiplications due to error propagation. The number of iterations required to reach the estimation error accuracy \(\|\hat{x}_{k}\;\|A\;\hat{x}_{k}\|-A\;\hat{x}_{k}\|<0.01\), as a function of the size of the ill-conditioned information matrices, is plotted in Figure 2.

Figure 2: The number of iterations (for estimation of the largest eigenpair \(Ax=\lambda x\)) as a function of the size of ill-conditioned information matrices that is required to reach the following accuracy of estimation error: \(\|\hat{x}_{k}\;\|A\;\hat{x}_{k}\|-A\;\hat{x}_{k}\|<0.01\).
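The two preconditioners and the power iteration (10) can be sketched as follows (NumPy; the tolerance, shift \(\epsilon\), and stopping rule are illustrative choices, not prescriptions from the report).

```python
import numpy as np

def power_iteration(A, tol=0.01, max_iter=100_000):
    """Estimate the largest eigenpair of SPD A via Eq. (10).
    Stops when the eigenpair residual ||x*||Ax|| - Ax|| drops below tol."""
    x = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        Ax = A @ x
        lam = np.linalg.norm(Ax)
        if np.linalg.norm(x * lam - Ax) < tol:
            break
        x = Ax / lam
    return lam, x

def simplest_preconditioner(A):
    # alpha = 2 / ||A||_inf: a loose but robust bound, no eigenvalue estimation needed.
    return 2.0 / np.linalg.norm(A, np.inf)

def optimal_preconditioner(A, eps=1e-8):
    # alpha = 2 / (lambda_1 + lambda_n); lambda_1 is recovered from the largest
    # eigenvalue of (beta*I - A) with beta = lambda_n + eps, as described above.
    lam_max, _ = power_iteration(A)
    beta = lam_max + eps
    shifted, _ = power_iteration(beta * np.eye(A.shape[0]) - A)
    lam_min = beta - shifted
    return 2.0 / (lam_min + lam_max)
```

For extremely ill-conditioned matrices the second power iteration call may converge very slowly, which is precisely the robustness argument made below in favor of the simplest preconditioner.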
Figure 2 shows that the number of iterations increases with the size of the matrix and can be quite large (which corresponds to a large number of matrix vector multiplications) for large scale systems. In addition, the spectral radius of the matrix \((I-\alpha A)\) with optimal preconditioning can be even larger than one (which results in an unstable iteration) if the number of iterations of the algorithm (10) is insufficient (due to insufficient computational capacity, for example), since the algorithm underestimates the eigenvalue (due to convergence from below). On the contrary, the simplest preconditioner, which is based on the maximum row sum matrix norm (associated with the upper bound for all Gershgorin circles), provides a loose upper bound for the maximal eigenvalue which is robust in finite digit calculations and does not require any significant computational effort. Notice that the upper bound of the largest eigenvalue can be estimated in many other ways, see for example Householder (1953), Fadeev et al. (1960) and Wolkowicz et al. (1980). The choice of estimation algorithm is associated with the trade-off between accuracy and computational complexity.

_Suboptimal Preconditioner for Ill-Conditioned Cases._ The minimal eigenvalue is very small for ill-conditioned matrices, and its accurate estimation requires significant computational effort or may even be impossible for extreme ill-conditioning in finite digit calculations. The small minimal eigenvalue can be neglected in this case, resulting in the suboptimal preconditioner \(\alpha=\frac{2}{\epsilon+\lambda_{n}}\), where \(\epsilon>0\) is a sufficiently small number. However, the spectral radius of the matrix \((I-\alpha A)\) with the suboptimal preconditioner may be even larger than the spectral radius of the same matrix with the simplest preconditioner. Indeed, the spectral radius of the matrix with the simplest preconditioner is associated with the minimal eigenvalue \(\lambda_{1}\) and is calculated as \(1-\frac{2}{\|A\|_{\infty}}\lambda_{1}\), whereas the spectral radius with the suboptimal preconditioner is the absolute value of \(1-\frac{2\lambda_{n}}{\lambda_{n}+\epsilon}\), which depends on the maximal eigenvalue \(\lambda_{n}\) only and can be closer to one.

_Recommendations._ Optimal and suboptimal preconditioners can be applied in the case where sufficient computational capacity is available in preprocessing and the window size does not change during processing. Then the preconditioner that is based on the estimated largest eigenvalue can be applied in all the steps, since the eigenvalues are the same, see the Lemma in Section 4. (Notice that estimation of the largest eigenvalue associated with a changeable window size can also be performed using parallel computational units, or in memory, and sent to the signal processing unit.) Otherwise, the simplest preconditioner, which does not require any computational effort (compared to the optimal preconditioner), can be applied.

## 6 Conclusions

Application of direct methods for parameter calculation (as the solution of algebraic equations) provides inaccurate results in the ill-conditioned case due to divisions by very small numbers, which has a direct negative impact on estimation performance, as shown on real data in this report. The Richardson approach does not involve any divisions by small numbers in the ill-conditioned case and provides a robust and accurate solution.
It was also shown that the simplest preconditioner, which is based on the maximum row sum matrix norm, provides more robust results than the optimal preconditioner based on estimation of the eigenvalues.
2303.07872
Object-based SLAM utilizing unambiguous pose parameters considering general symmetry types
Existence of symmetric objects, whose observation at different viewpoints can be identical, can deteriorate the performance of simultaneous localization and mapping(SLAM). This work proposes a system for robustly optimizing the pose of cameras and objects even in the presence of symmetric objects. We classify objects into three categories depending on their symmetry characteristics, which is efficient and effective in that it allows to deal with general objects and the objects in the same category can be associated with the same type of ambiguity. Then we extract only the unambiguous parameters corresponding to each category and use them in data association and joint optimization of the camera and object pose. The proposed approach provides significant robustness to the SLAM performance by removing the ambiguous parameters and utilizing as much useful geometric information as possible. Comparison with baseline algorithms confirms the superior performance of the proposed system in terms of object tracking and pose estimation, even in challenging scenarios where the baseline fails.
Taekbeom Lee, Youngseok Jang, H. Jin Kim
2023-03-13T03:07:59Z
http://arxiv.org/abs/2303.07872v1
# Object-based SLAM utilizing unambiguous pose parameters ###### Abstract Existence of symmetric objects, whose observation at different viewpoints can be identical, can deteriorate the performance of simultaneous localization and mapping (SLAM). This work proposes a system for robustly optimizing the pose of cameras and objects even in the presence of symmetric objects. We classify objects into three categories depending on their symmetry characteristics, which is efficient and effective in that it allows to deal with general objects and the objects in the same category can be associated with the same type of ambiguity. Then we extract only the unambiguous parameters corresponding to each category and use them in data association and joint optimization of the camera and object pose. The proposed approach provides significant robustness to the SLAM performance by removing the ambiguous parameters and utilizing as much useful geometric information as possible. Comparison with baseline algorithms confirms the superior performance of the proposed system in terms of object tracking and pose estimation, even in challenging scenarios where the baseline fails. ## I Introduction Simultaneous localization and mapping (SLAM), one of the core technologies for autonomous driving, is a technology that reconstructs the environment around the robot and estimates the robot position in the reconstructed map. Despite a significant progress in precise localization using geometrical information of the surrounding environment, it is still difficult for a SLAM system to achieve advanced tasks based on human interaction and scene understanding. To overcome the problem, semantic SLAM that uses semantic information in SLAM has been in using deep learning techniques such as recently developed instance segmentation [1, 2, 3] and 3d object detection [4, 5, 6]. Object-based SLAM [7, 8] is a branch of semantic SLAM that reconstructs an object-based map and provides high-level information. Object-based SLAM simultaneously estimates the location of semantic object inferred from the network while performing feature-based localization used in existing SLAM systems, and enables to estimate camera poses even in featureless environments. In addition, a more expressive map can be constructed by expressing the pose, type, and shape of the semantic objects in the map. However, if the shape of the object itself is symmetric or if the observed shape is symmetric due to occlusion, object-based SLAM can suffer from significant error in the estimation process of ego motion and pose of the object. Since symmetric objects may have the same observation at different viewpoints, data association or motion estimation may fail. We propose a system for robustly optimizing the pose of cameras and objects even in the presence of symmetric objects. The 3d detection module [4] is modified to obtain a local observation that predicts multiple poses for each object, similar to [9]. We classify object types into asymmetry, discrete, and continuous symmetry based on the local observation of the observed objects. By extracting a parameter corresponding to the symmetry type from the pose of object, only the robust parameter is used as a constraint to jointly optimize the camera pose and object pose. Therefore, the proposed SLAM system has the advantage of robustness by Fig. 1: Illustration of object tracking results using the data obtained surrounding a rectangular table. 
Although only a single object is observed, the symmetry-agnostic approach in (b) may generate multiple map objects and fail object tracking. On the other hand, the symmetry-aware approach in (a) recognizes the discrete symmetry type of the object and succeed in object tracking using all the reliable measurements. Such improvement of (a) makes the object-based SLAM robust, since a large number of constraints obtained from long tracking of objects are useful for backend optimization. using as much useful geometric information as possible even when symmetric objects are observed, while directly using 3d detection networks with many existing research and dataset. In summary, the contributions of the paper are as follows: * We design a symmetry and pose ambiguity aware object SLAM system which fully utilizes information from multiple pose hypotheses of objects to jointly optimize camera pose and globally consistent object pose. * We propose a method to extract reliable information from ambiguous pose of symmetric objects. We categorize symmetry types of general objects and distinguish pose parameters into ambiguous one and unambiguous ones. The extracted unambiguous parameters of each symmetry type are used in the proposed object association and optimization modules. * We use multiple hypotheses 3d detection network as observation module of our system, which can be easily edited from existing networks and changed to better networks in the future. The remainder of this paper is organized as follows. Section II reviews related literature. The overview and core concepts of the system are provided in Section III. Detailed methods for applying core concepts to systems are described in Sections IV and V. In Section VI, experiments in simulation and public dataset are presented. Finally, conclusions are provided in Section VII. ## II Related Work This section reviews related studies on object-based SLAM and symmetric-aware object pose estimation. ### _Object-based SLAM_ Object-based SLAM attempts to build a robust SLAM system by simultaneously optimizing camera poses, feature positions, and poses of objects from semantic information. To express the shape and pose of an object, [8, 7], and [10] use prior object model, cuboid, and ellipsoid, respectively. And [11, 12] represent the shape and pose of an object using category specific embedding method. Although they showed that joint optimization could increase the robustness of both camera and object pose estimation, they did not consider the presence of ambiguous detection due to symmetric objects or occlusion. ### _Symmetry-aware object pose estimation_ #### Ii-B1 Single view [13] proposes a network that can determine whether a shape has symmetry using an object's CAD model, and [14] predicts multiple candidate poses for the detected object and analyzes ambiguity that may be caused by occlusion as well as ambiguity by the shape of an object. However, they estimated object pose only from a single view and did not treat multi-view cases such as SLAM. #### Ii-B2 Multiple view Recently, studies considering the pose-ambiguity of objects have been proposed in object-based SLAM. PoseRBPF [15] estimates the pose distribution of asymmetric and symmetric objects using the rao-blackwellized particle filter. However, it requires a predefined codebook for the pose of the object model. [16] is the object-based SLAM system that simultaneously optimizes camera poses and object poses using the projection of reconstructed 3d keypoints as a prior. 
However, every time they encounter on a new dataset, they have to label the prior. [17] expresses geometrical primitives in an unified, decomposed quadric form and explicitly deals with the degenerative case due to a symmetric shape. However, the degenerative cases considered in [17] are limited to a few quadric types, so error can be induced when fitting arbitrary shapes to quadric. [9] uses a single neural network to predict poses as multi-hypotheses and optimizes them through a max-mixture[18] model. Since the symmetric object has ambiguous pose parameters, such as the rotation angle for an axis of symmetry, pose estimation should be performed by considering only the unambiguous elements. However, error can occur because [9] directly uses hypotheses containing even ambiguous parameters for pose estimation. We propose a robust SLAM system using existing well-developed 3d detection modules while extracting only available geometric elements from symmetric objects and using them as optimization constraints. ## III Proposed system ### _Symmetry types_ The key idea of the proposed system is to propose a criterion that can effectively classify general objects using only three symmetry types and to define different pose representations suitable for each symmetry type in order to fully utilize the geometry information available in the symmetric object. In general, objects can be asymmetric or symmetric. The asymmetric object can be inferred to have a consistent object pose in the 3d detection module, no matter which viewpoint it is observed from, but the symmetric object cannot. In addition, we classify symmetry as discrete and continuous types. Discrete symmetric objects refer to reflection symmetric objects that can have a finite number of poses as the observed viewpoints change, and continuous symmetric objects refer to objects with an infinite number of poses based on an axis of symmetry, such as a circular table. Objects which are classified into the following three types are represented in different ways to reflect all possible unambiguous pose parameters: * **Asymmetry:** All detection results of a asymmetric object can be used to optimize a unique pose since the unique pose can be determined when viewpoint changes. Accordingly, the pose can be represented by 6 degrees of freedom (DoF) which is commonly used. * **Discrete symmetry:** Objects with multiple planes of symmetry have as many poses supporting the same shape as the number of planes of symmetry. We express the pose assuming that the symmetrical planes of most discrete symmetric objects present have one intersection line. The intersection of symmetric planes is defined as the axis of symmetry, and the rotation angle based on the axis is defined as a symmetric angle. Accordingly, the position and the axis of symmetry are shared with the reflected poses of a discrete symmetric object, and only the symmetric angle can be expressed differently. In other words, the pose of a discrete symmetric object is defined as five shared parameters for position and axis of symmetry and unshared parameters for all the symmetry angles. * **Continuous symmetry:** For objects such as round tables, there is an infinite number of symmetry planes, so there is an infinite number of object poses that support the same shape. For continuous symmetric objects, it is assumed that planes of symmetry have a single intersection, as in the case of discrete symmetry. 
Therefore, we express the pose of continuous symmetric objects only by position and axis of symmetry after removing the symmetric angle by classifying it as an ambiguous parameter. ### _Pipeline_ The entire pipeline is described in Fig. 2, and the tracking and object mapping modules are designed based on DSP-SLAM [12]. When a new keyframe is selected, the multi-hypothesis detection network uses an rgb-d image to detect objects, and the detected objects infer observable multiple pose hypotheses (Section IV-A). The categorization module determines the symmetry type of the detected object based on the distribution of multiple pose hypotheses. We extract the axis of symmetry for symmetric objects and cluster the poses with similar symmetry angles for discrete symmetry objects (Section IV-B). We associate the categorized detection with map objects using the class and pose except for ambiguous parameters. For discrete symmetric objects where a new symmetry angle is observed, we add a non-shared parameter (Section IV-C). The camera pose, map point, and map object are jointly optimized at the backend. The constraint between the camera pose and map object pose is formed differently for each map object type during optimization (Section V). ## IV Categorized Detection and Objects ### _Multi-hypothesis 3d object detection_ We modify the 3d object detection module so that it can infer multiple pose hypotheses. Since the distribution of inferred multiple hypotheses influences the determination of the object's symmetry type, it should be modified so that the multi-hypothesis 3d object detection module can cover all the poses that the object can have. [19] extends the single-loss single-output system to have multiple outputs, calculating loss [20] using only the hypothesis that succeeded in the most accurate inference among multiple hypotheses in the training process. Unlike the method of using the average loss of multiple outputs, this can increase multi-hypothesis diversity because each hypothesis is randomly selected and learned individually. We also modified the 3d object detection module [4] in the same way as [19], allowing a source for the object's symmetry type to be inherent in multiple hypotheses. ### _Symmetry type categorization of detection_ As shown in Fig. 3, since the multi-hypothesis detection implies the ambiguity of the object's pose, we can classify the object's symmetry type and extract the type-specific pose parameters only from the detection result without prior information. Unlike the symmetric object, the multiple pose hypotheses of the asymmetric object are similar to single pose. In other words, the distribution of pose hypotheses of the asymmetric object is very close to unimodal, and the variance is much smaller than that of the symmetrical object. For discrete and continuous symmetry types, we perform an additional classification process because the normalized singular values of multiple hypotheses are large in both cases. As mentioned in Section III-A, both types have certain axis of symmetry, and the axis of symmetry \(l\) is obtained as follows: \[l^{*}=\underset{l\in\mathbb{R}^{3}}{\text{argmax}}\left\|[\overline{\omega}_ {o_{1}o_{2}},\overline{\omega}_{o_{1}o_{3}},\cdots,\overline{\omega}_{o_{1}o_{ N}}]^{T}\cdot l\right\|_{2}\, \tag{1}\] where \(\overline{\omega}_{o_{1}o_{i}}=\dfrac{\omega_{o_{1}o_{i}}}{\left\| \omega_{o_{1}o_{i}}\right\|_{2}}\), \(\omega_{o_{1}o_{i}}=\log_{\text{5s0(3)}}(R_{co_{1}}^{T}R_{co_{i}})\in\mathbb{ R}^{3}\). Fig. 
2: The overview of the proposed object-based SLAM system. \(N\) and \(\overline{\omega}\) are the number of hypotheses and rotation axis, respectively. And, symmetry angles are computed by \(\theta_{o_{1}o_{i}}=\left\|\omega_{o_{1}o_{i}}\right\|_{2}\). After clustering the multiple hypotheses using the DBSCAN algorithm [21], the variance between the representative \(\theta\) of each cluster is compared. As illustrated in Fig. 3, compared to the continuous symmetric object which corresponds to a continuous set of \(\theta\), the variance among clusters of discrete symmetric object is very large. Representative \(\theta\) values of rectangular table form two clusters with about 180\({}^{\circ}\) difference, while the round table has many similar representative \(\theta\) clusters. Then, reliable parameters of pose can be extracted for each classified symmetry type. First, the asymmetric object can use 6 DoF from detection results, and the continuous symmetric object can use position and axis of symmetry. In discrete symmetric objects, parameters are position, axis of symmetry, and representative symmetry angles of the clusters. ### _Data association of objects_ The categorized detection results are associated with the previously generated map objects. First, the detection results, which represent relative transformation between the camera and the object, are warped in world coordinate using tracked camera pose \(T_{wc}\). Then, object matching is performed by comparing the unambiguous parameters of the detected object and map objects considering the symmetry type. Due to inaccurate detection or partial observation, the same object can be categorized by different symmetry types in different views. We address the misclassification by following association strategies. If there is a discrete or continuous symmetry type in the two comparison groups, the position and axis of symmetry (5 DoF) parameters are compared; otherwise, the distance between the 6 DoF parameters is computed. If the distance is small enough and the class is the same, matching between the objects is successful. After object matching is performed, the symmetric angles of the detected discrete symmetric object are associated with the nearest symmetric angle of the matched map object. The unmatched angles are added to the map object as a new symmetric angle. Discrete symmetric object can be occasionally classified into asymmetry type at a specific viewpoint, as can be seen from Fig. 3 (a) and (c) of the the discrete case. Therefore, even if 6 DoF matching between asymmetry objects fails, if 5 DoF matching is successful, we change the map object to the discrete symmetry type to express the pose with position, an axis of symmetry, and symmetric angles. Object association using unambiguous parameters for each symmetry type alleviates mismatching caused by ambiguous parameters. Therefore, the proposed system can track objects for a long time, which acts as an important source in joint optimization. The detection result that failed to match is registered as a new map object after the shape reconstruction using deep-sdf [22] similar to DSP-SLAM. ## V Joint Optimization We perform joint optimization by modifying the existing SLAM optimization problem using the categorized map objects and associated detection results. \[C^{*},O^{*},P^{*}=\underset{C,O,P}{\text{argmin}}\ E_{reproj}(C,P)+E_{obj}(C,O)\, \tag{2}\] where C, O, and P refer to the camera and object pose and map point included in the optimized window size, respectively. 
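The symmetry-aware term \(E_{obj}\) in the objective above relies on the categorization of Section IV-B. A minimal sketch of that categorization step is given below: it recovers the axis of symmetry of Eq. (1) as the leading right singular vector of the stacked unit rotation axes, computes the symmetry angles \(\theta_{o_{1}o_{i}}\), and clusters them with DBSCAN. It assumes SciPy's rotation utilities and scikit-learn, and the thresholds and function names are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.spatial.transform import Rotation
from sklearn.cluster import DBSCAN

def categorize_symmetry(rotations, angle_eps=0.1, spread_thresh=0.05, cluster_var_thresh=0.3):
    """Categorize an object's symmetry type from N rotation hypotheses R_{c o_i}.

    rotations: sequence of N 3x3 rotation matrices (hypotheses from the detector).
    All thresholds are illustrative placeholders.
    """
    R0 = rotations[0]
    # Relative rotations w.r.t. the first hypothesis, mapped to axis-angle (so(3) log).
    omegas = np.array([Rotation.from_matrix(R0.T @ R).as_rotvec() for R in rotations[1:]])
    thetas = np.linalg.norm(omegas, axis=1)        # symmetry angles theta_{o1 oi}
    if thetas.max() < spread_thresh:
        # Hypotheses nearly coincide: a single unambiguous 6-DoF pose.
        return "asymmetry", None, None
    # Eq. (1): the maximizing unit vector l is the leading right singular vector
    # of the matrix of normalized rotation axes.
    axes = omegas[thetas > 1e-6] / thetas[thetas > 1e-6, None]
    _, _, Vt = np.linalg.svd(axes)
    sym_axis = Vt[0]
    # Cluster the symmetry angles; widely separated cluster representatives indicate
    # discrete symmetry, while a near-continuous spread indicates continuous symmetry.
    labels = DBSCAN(eps=angle_eps, min_samples=2).fit_predict(thetas.reshape(-1, 1))
    reps = [thetas[labels == c].mean() for c in set(labels) if c != -1]
    sym_type = "discrete" if len(reps) > 0 and np.var(reps) > cluster_var_thresh else "continuous"
    return sym_type, sym_axis, sorted(reps)
```

Discrete objects then keep the shared position and axis plus the representative angles as unshared parameters, while continuous objects drop the angle entirely, matching the parameterization described above.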
\(E_{reproj}\) denotes the reprojection error, and \(E_{obj}\) denotes the pose error between the map object and the camera. The joint optimization is solved by the Levenberg-Marquardt method through g2o[23] solver. \(E_{obj}\) is designed only with unambiguous parameters in consideration of the symmetry type of map object, which can produce a more robust solution than existing methods that utilize all pose parameters with an ambiguous parameter as a constraint. #### V-C1 Asymmetry The results of asymmetry map object and associated multiple hypotheses are not linked by multiple edges but by a single edge using a max-mixture model, which selects the hypothesis with the lowest error at each iteration of optimization, similar to [9]. \[e_{\text{asym}}(T_{wc},T_{wo})=\min_{j\in[1,N]}\log_{\text{sc(3)}}(T_{co}^{j} \cdot T_{wo}^{-1}\cdot T_{wc})\, \tag{3}\] where \(T_{wc}\) and \(T_{wo}\), which mean the pose of the camera and object on the world, are optimization variables, \(T_{co}^{j}\) represents the \(j\)-th hypothesis, and \(N\) is the total number of hypotheses. #### V-C2 Discrete symmetry Discrete symmetric map objects have position and axis of symmetry as shared parameters and \(M\) symmetric angles as non-shared parameters. Each hypothesis is associated with one of symmetry angles of the matched map object, and edges are formed as many as the Fig. 3: Multi-hypothesis results for each symmetry type. Objects of three types are observed from different viewpoints (a), (b), and (c). number of associated symmetric angles, \(m\). Each edge is modeled with the max-mixture like the asymmetry case. \[e_{\text{disc}}(T_{wc},T_{wo_{i}})=\min_{j_{i}\in[1,N_{i}]}\log_{\text{SE(3)}}(T_{ co}^{j_{i}}\cdot T_{wo_{i}}^{-1}\cdot T_{wc})\, \tag{4}\] \[\text{where }T_{wo_{i}}=\begin{bmatrix}\exp\left(\theta_{wo_{i}}\cdot\overline{ \omega}_{wo}\right)&t_{wo}\\ 0_{1\times 3}&1\end{bmatrix},\ i\in[1,m]\.\] \(N_{i}\) refers to the number of hypotheses associated with the \(i\)-th symmetric angle, and \(T_{wo_{i}}\) means the \(i\)-th symmetric pose constructed by position and axis of symmetry which are shared parameters and the \(i\)-th symmetric angle. Furthermore, for unconstrained parameterization in optimization, the axis of symmetry is used by the following expression: \(\overline{\omega}_{wo}=\text{f}(\phi_{wo},\psi_{wo})\), where \(\phi_{wo},\psi_{wo}\) and \(\text{f}(\cdot)\) are polar angle, azimuth in spherical coordinates, and the transformation function from \((\phi,\psi)\) to \(\overline{\omega}\), respectively. #### V-B3 Continuous symmetry The continuous symmetric map object has only position and axis of symmetry as unambiguous parameters. The axis of symmetry (\(\overline{\omega}_{co}\)) extracted from the rotation part and individual position parts of multiple hypotheses are used to formulate the following: \[e_{\text{cts}}(T_{wc},T_{wo})=e_{\text{trans}}+\gamma\cdot e_{\text{axis}}\, \tag{5}\] \[\text{where }e_{\text{trans}}=\min_{j\in[1,N]}\|t_{co}^{j}-R_{wc}^{\text{T}}(t_{wc }-t_{wo})\|_{2}\,\] \[e_{\text{axis}}=\|\text{f}^{-1}(R_{wc}\cdot\overline{\omega}_{co})-[\phi_{wo},\psi_{wo}]^{\text{T}}\|_{2}\.\] \(e_{\text{trans}}\) and \(e_{\text{axis}}\) are max-mixture based translation error and error related to axis of symmetry, respectively. \(\gamma\) is the constant weight for balancing two error terms. ## VI Experimental Results This section presents the performance of the proposed symmetry-aware object SLAM system compared with the baseline. 
Simulation and public datasets are used to evaluate the proposed system. ### _Setup_ When a camera looks at the feature-rich scene, ego-motion can be estimated accurately by the feature-based SLAM backbone [24] in object-based SLAM systems. On the other hand, object-camera constraints are dominant estimation sources if the camera observes a featureless scene. Therefore, in order to effectively evaluate the proposed object-based SLAM system, we construct a simulation environment such that one side of the environment has rich feature points and the other side has a few feature points, as shown in Fig. 4. Then we obtain the simulation dataset using Unreal Engine and AirSim [25]. The camera moves surrounding a center object in the environment so that camera observes feature-rich and featureless scenes alternately for each specific region, as shown in Fig. 4 (a) and (b). As shown in Fig. 4, the center object can be replaced with discrete and continuous symmetric objects in the simulation environment. These cases are called _sim(disc)_ and _sim(cts)_. In addition, we evaluate the proposed system in the general indoor scenes using the popular scanNet dataset [26]. For training of the 3D detection network used in our system, pre-training is performed using SUN RGB-D dataset [27]. After that, to reduce the domain gap between actual and simulation data, additional data are acquired in the simulation, and fine-tuning is performed. We only fine-tune the detection network for simulation dataset since ScanNet dataset only provide axis-aligned bounding box. Both pre-training and fine-tuning is done in the same way as [19]. The SUN RGB-D dataset assumes that the input point clouds are represented in the coordinates aligned with the direction of gravity, and expresses the object's orientation using the rotation angle with respect to the axis of gravity (i.e. yaw). However, ScanNet dataset has no gravity direction, so we calculated in advance the rotation matrix that aligns the y-axis of camera with the normal vector of the ground plane for each frame. An additional sensor such as an inertial measurement unit or ground detection module may help to change the preprocessing for online implementation. We used the number of hypotheses as 30, which was selected empirically to categorize symmetry types of detection well. We test using Intel i7-10700 (2.9GHz) and NVIDIA RTX 2060 GPU. DSP-SLAM using single hypothesis detection results is used as the first baseline system (_SH_), and the second baseline (_MH_) is the modified DSP-SLAM to integrate with key idea on [9] which uses multi-hypothesis detection with no consideration of symmetry types. Fig. 4: Simulation environment. Fig. 5: The results of object tracking and map object reconstruction in _sim(disc)_. Fig. 6: The translation error of the estimated map object’s pose in case study. ### _Case study_ To understand how the proposed system enhances the overall performance, we test object tracking and pose estimation under the presence of symmetric objects with pose ambiguity, using the environment in Fig. 4. For fair comparison by isolating the other sources that affect the performance except the pose ambiguity, we construct a pose graph using edges from the camera to objects obtained from the proposed categorization and association module and true camera nodes and estimated the object pose by optimization. The same setup is employed to test the baseline algorithms. Fig. 
5 shows the result of object tracking and the reconstructed map object using the _sim(disc)_ dataset. The proposed method continuously tracks the center object, and a single map object is reconstructed accordingly. On the other hand, the baseline algorithms (_MH_, _SH_) fail to track, and incorrectly recognize a single object as two or more map objects. The translation error of the estimated object is reported in Fig. 6. _MH_ and _SH_ show large error when establishing the object again after the tracking failure, whereas the proposed method maintains small error through successful object tracking. ### _System evaluation_ The performance of the entire system was compared and evaluated with the baseline system using the simulation environment and ScanNet. First, Fig. 7 shows the quantitative results compared with the baseline system using the simulation environment _sim(disc)_. The baseline and camera trajectory are plotted on the map built by the proposed system. In the beginning, a feature-rich scene is observed as shown in Fig. 4 (a), so all three systems are good at estimating pose. Such trend changes as the features that can be used for localization disappear, since the camera location must be estimated using only the tracked object. Based on the good object tracking performance, the proposed system also demonstrates good performance even in the featureless region using an unambiguous pose parameter that fits the symmetry type. However, both _MH_ and _SH_ systems fail to estimate the pose in the featureless region. The baseline algorithms associate the detection with the map object on the map using all 6 DoF parameters, and there are cases when none of the multiple hypotheses fits the previous detection result. The data association may fail due to such mismatch in yaw angle. Quantitative results for the simulation environment are shown in Table I. For _sim(cts)_, the baseline algorithms also had no failure in pose estimation, but we can see that the proposed algorithm has the highest performance. Fig. 8 shows qualitative results on a ScanNet _Scene0022_00_. This sequence includes not only the symmetrical objects but also the objects that do not fully enter the camera field of view, so the uncertainty of detection is high. This is revealed in the result of _SH_. As seen in simulation setting, the baseline system fails data association and many objects are registered in same position of the map. However, the proposed system robustly recognizes as a single object and optimizes the pose even in these cases. The quantitative results are shown in table I. We evaluate the system performance using root mean squared error (RMSE) error of each keyframe position. The proposed system exhibits similar or better path estimation performance in most sequences. ## VII Conclusions Symmetric objects present in the scene can cause the performance degradation or even failure of SLAM, since their observation at different viewpoints can be identical and cause obscurity. We proposed a method for robustly optimizing the pose of cameras and objects even in the presence of symmetric objects. The proposed classification of objects into three categories depending on their symmetry characteristics was successfully applied to various objects. Under the proposed method, the objects in the same category can be associated with the same type of ambiguity, which contributes to the efficiency in data association. 
By extracting only the unambiguous parameters corresponding to each category and using them in data association and joint optimization of the camera and object pose, the proposed approach provides significant robustness to the SLAM performance. Proposed system showed better performance than baseline systems in environments with many symmetric objects. \begin{table} \begin{tabular}{c|c|c|c|c} \hline \multicolumn{2}{c|}{dataset} & \multicolumn{3}{c}{RMSE of translation [m]} \\ \cline{3-5} \multicolumn{2}{c|}{} & \multicolumn{1}{c|}{OURS} & _SH_ & _MH_ \\ \hline \multirow{3}{*}{ScanNet} & _Scene0022.00_ & **0.161** & 0.237 & 0.179 \\ & _Scene0049.00_ & **0.137** & **0.137** & 0.152 \\ & _Scene0091.00_ & **0.079** & 0.101 & 0.091 \\ & _Scene0289.00_ & **0.125** & 0.144 & 0.153 \\ \hline \multirow{2}{*}{sim} & _disc_ & **0.086** & - & - \\ & _cts_ & **0.265** & 0.341 & 0.284 \\ \hline \end{tabular} \end{table} TABLE I: The translation error of the proposed and baseline systems using ScanNet and simulation dataset. The bold indicates the best performance and the symbol ‘-’ means failure of pose estimation. Fig. 8: The qualitative results of the proposed and baseline (_SH_) algorithms in _Scene0022_00_ in ScanNet. Fig. 7: The qualitative results of the proposed and baseline algorithms (_SH_, _MH_) in _sim(disc)_.
2302.06845
SEAM: Searching Transferable Mixed-Precision Quantization Policy through Large Margin Regularization
Mixed-precision quantization (MPQ) suffers from the time-consuming process of searching the optimal bit-width allocation (i.e., the policy) for each layer, especially when using large-scale datasets such as ISLVRC-2012. This limits the practicality of MPQ in real-world deployment scenarios. To address this issue, this paper proposes a novel method for efficiently searching for effective MPQ policies using a small proxy dataset instead of the large-scale dataset used for training the model. Deviating from the established norm of employing a consistent dataset for both the model training and MPQ policy search stages, our approach yields a substantial enhancement in the efficiency of MPQ exploration. Nonetheless, using discrepant datasets poses challenges in searching for a transferable MPQ policy. Driven by the observation that the quantization noise of a sub-optimal policy exerts a detrimental influence on the discriminability of feature representations -- manifesting as diminished class margins and ambiguous decision boundaries -- our method aims to identify policies that uphold the discriminative nature of feature representations, i.e., intra-class compactness and inter-class separation. This general and dataset-independent property allows us to search for the MPQ policy over a rather small-scale proxy dataset, and the policy can then be directly used to quantize the model trained on a large-scale dataset. Our method offers several advantages, including high proxy data utilization, no excessive hyper-parameter tuning, and high searching efficiency. We search high-quality MPQ policies with a proxy dataset that has only 4% of the data scale compared to the large-scale target dataset, achieving the same accuracy as searching directly on the latter and improving MPQ searching efficiency by up to 300 times.
Chen Tang, Kai Ouyang, Zenghao Chai, Yunpeng Bai, Yuan Meng, Zhi Wang, Wenwu Zhu
2023-02-14T05:47:45Z
http://arxiv.org/abs/2302.06845v2
# Searching Transferable Mixed-Precision Quantization Policy ###### Abstract Mixed-precision quantization (MPQ) suffers from a time-consuming policy search process (_i.e.,_ the bit-width assignment for each layer) on large-scale datasets (_e.g.,_ ISLVRC-2012), which heavily limits its practicability in real-world deployment scenarios. In this paper, we propose to search for the effective MPQ policy by using a small proxy dataset for the model trained on a large-scale one. It breaks the routine that requires a consistent dataset at model training and MPQ policy search time, which can improve the MPQ searching efficiency significantly. However, the discrepant data distributions bring difficulties in searching for such a _transferable_ MPQ policy. Motivated by the observation that quantization narrows the class margin and blurs the decision boundary, we search for the policy that guarantees a general and dataset-independent property: _discriminability of feature representations_. Namely, we seek the policy that can robustly keep the intra-class compactness and inter-class separation. Our method offers several advantages, _i.e.,_ high proxy data utilization, no extra hyper-parameter tuning for approximating the relationship between the full-precision and quantized model, and high searching efficiency. We search high-quality MPQ policies with a proxy dataset that has only 4% of the data scale compared to the large-scale target dataset, achieving the same accuracy as searching directly on the latter, and improving the MPQ searching efficiency by up to 300\(\times\). ## 1 Introduction Despite the current success of deep learning (DL), the large computational resource requirements remain one of the biggest stumbling blocks for deploying DL models. There are several compression techniques to reduce the redundancy in a deep model, such as pruning Liu _et al._ (2018), knowledge distillation Hinton _et al._ (2015) and quantization Choi _et al._ (2018); Zhou _et al._ (2016). Quantization is a promising technique to remarkably reduce both the storage and computational resource overheads, by leveraging the fact that inference does not strictly require precision as high as training. It enables large models to run directly on edge and mobile devices without redesigning a new model architecture, which empowers edge intelligence significantly. Quantization can be divided into two categories. The first one is fixed-precision quantization Zhou _et al._ (2016); Choi _et al._ (2018); Esser _et al._ (2020), in which a uniform bit-width is designated for the whole model. While such a paradigm is proven to make the quantized model achieve sufficiently good performance at high bit-widths (_e.g.,_ \(\geq\) 8 bits), a uniform bit-width is challenging for quantization in an ultra-low bit-width (_i.e.,_ \(\leq\) 4 bits) scenario. For example, BRQ Han _et al._ (2021) reports more than 20% top-1 accuracy degradation in 2-bit quantization for the MobileNetV2 model as compared to its full-precision (FP) counterpart. The other is mixed-precision quantization (MPQ), which allows a fine-grained bit-width allocation Wang _et al._ (2019). MPQ leverages the empirical observation that layers (or channels) in a deep model have different redundancy levels Cai and Vasconcelos (2020), and provides a desirable accuracy-efficiency trade-off. 
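To make the contrast between fixed- and mixed-precision concrete, the following is a minimal sketch (not taken from any of the cited works) of symmetric uniform quantization of a weight tensor to a fixed bit-width; the function name and the per-tensor scaling choice are illustrative assumptions. Lowering the bit-width visibly increases the quantization error, which is what motivates assigning different bit-widths to layers of different redundancy.

```python
import numpy as np

def quantize_uniform(w: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric, per-tensor uniform quantization to `bits` bits (sketch)."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax        # single scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                        # "fake-quantized" (de-quantized) values

w = np.random.randn(64, 3, 3, 3).astype(np.float32)
print(np.mean((w - quantize_uniform(w, 8)) ** 2))  # small error at 8 bits
print(np.mean((w - quantize_uniform(w, 2)) ** 2))  # much larger error at 2 bits
```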
Since the bit-width choice is discrete and the combination of bit-widths and layers (_i.e.,_ the policy) grows exponentially, the main challenge is how to determine the optimal bit-width for each layer. Obviously, brute-force searching is ineffective, as an \(L\)-layer model with \(n\) bit-widths for activations and weights has \(n^{2L}\) possible policies Wang _et al._ (2019). To solve this, several studies apply intelligent algorithms to search for the optimal MPQ policy. HAQ Wang _et al._ (2019) and ReleQ Elthakeb _et al._ (2020) use reinforcement learning (RL) to train a bit-width allocation agent. SPOS Guo _et al._ (2020), EdMIPS Cai and Vasconcelos (2020) and BP-NAS Yu _et al._ (2020) adopt neural architecture search (NAS) methods to learn the bit-width end-to-end, by relaxing the discrete space to be continuous. While these search-based methods make MPQ searching possible and provide promising performance for the quantized model, they all suffer from high search time costs, _e.g.,_ BP-NAS consumes 35.6 GPU-hours to search for ResNet-50 Wang _et al._ (2021). Therefore, recent research mainly focuses on how to quickly search for an MPQ policy. Along this line, HAWQ Dong _et al._ (2019), Dong _et al._ (2020) and MPQCO Chen _et al._ (2021) propose to use second-order information (_i.e.,_ Hessian information) as the metric of a layer's quantization sensitivity. Then, a high bit-width is assigned to the layer that has high metric values, and vice versa. To deal with the prohibitive Hessian computation, they have to approximate the Hessian information by using quantization-unaware methods such as the Hutchinson algorithm Avron and Toledo (2011), which results in biased quantization sensitivity measurement Tang _et al._ (2022). Nevertheless, little research has explored decoupling the datasets used in the model training and MPQ search stages. This is promising for improving the search efficiency, since the searching process can be done on a small-scale proxy dataset, but it inevitably encounters the intractable challenge of dealing with disparate data distributions. Recently, GMPQ Wang _et al._ (2021) indicates that, for an input image, preserving the attribution rank between the FP and quantized model can help search a generalizable MPQ policy. They resort to the feature visualization technique Grad-cam Selvaraju _et al._ (2017) to maintain the consistency of the image attribution rank between the quantized and FP model. GMPQ can be regarded as an instance-level regularization over the proxy dataset, by enforcing a consistent relationship between the FP and quantized model for each input instance. However, it does not exploit the information outside of an instance, _i.e.,_ at a class level, let alone the fussy hyper-parameter tuning for aligning the attribution rank. Considering that class-level information is richer than instance-level information Chen _et al._ (2021), in this paper we present a novel method to search for an effective transferable MPQ policy by exploiting the class-level information on the proxy datasets. Our idea is motivated by the observation that quantization has side effects on the quantized model in the feature space compared to the FP model. Specifically, we observe that quantization noise noticeably narrows the margin between classes and blurs the decision boundary (see Fig. 2). 
On the other hand, maximizing inter-class separation while enhancing intra-class compactness is highly favorable for classification, as there is a consensus, from statistical machine learning (_e.g.,_ SVM) to recent deep learning research Wan _et al._ (2018); Ranasinghe _et al._ (2021), that a large classification margin enhances generalizability. We hence propose to search for the MPQ policy that can properly gather the features of the same classes and separate the features of different classes, making the features more robust to quantization noise. Experimental results validate that a large margin on proxy data distributions helps search for a transferable MPQ policy for quantizing the model trained on challenging large-scale datasets. Our approach achieves competitive performance when searching on very small proxy datasets versus directly on large-scale datasets, where the size of the former is only 4% of the latter. Consequently, we improve the MPQ policy search efficiency impressively. For ResNet18 and MobileNetv1, by using StanfordCars Krause _et al._ (2013) as the proxy dataset, our method achieves 375\(\times\) and 300\(\times\) speedups compared to the state-of-the-art MPQ approach FracBits Yang and Jin (2021), respectively. ## 2 Related Work ### Mixed-Precision Quantization Mixed-precision quantization (MPQ) aims to allocate different bit-widths for weights and activations in a deep model, at a tensor level. The major challenge of MPQ is how to determine the optimal bit-width for each layer in an exponential discrete search space. For search-based methods, some Wang _et al._ (2019); Elthakeb _et al._ (2020) apply RL to conduct exploration-exploitation search, while others Wu _et al._ (2018); Yu _et al._ (2020); Guo _et al._ (2020); Cai and Vasconcelos (2020) adopt NAS-based algorithms to achieve differentiable search. In particular, GMPQ Wang _et al._ (2021) develops an instance-level regularization to make searching for an MPQ policy on a small dataset possible. However, GMPQ suffers from fussy hyper-parameter tuning, including the approximated attribution rank level, the number of pixels of interest, etc. Unlike learning the optimal MPQ policy, HAWQ Dong _et al._ (2019, 2020) and MPQCO Chen _et al._ (2021) use Hessian information as the quantization sensitivity metric to assist bit-width assignment. LIMPQ Tang _et al._ (2022) proposes to learn the layer-wise importance during a single quantization-aware training process. In contrast to these methods that aim to define some metrics to estimate the quantization sensitivity of layers, we propose to directly learn effective bit-width configurations on a small proxy dataset. ### Discriminative Feature Learning Learning discriminative features is highly desirable since it greatly facilitates the generalization of deep models; its core is to clarify the decision boundaries between classes. DrLIM Hadsell _et al._ (2006) proposes to use the contrastive loss to identify the classes. L-Softmax Liu _et al._ (2016) introduces a multiplicative hyper-parameter for the softmax function to produce a rigorous decision margin. L-GM Wan _et al._ (2018) assumes the output of the penultimate layer (_i.e.,_ the deep features) follows a Gaussian Mixture (GM) distribution, and leverages the non-negative squared Mahalanobis distance to construct a GM loss. OPL Ranasinghe _et al._ (2021) observes a potential orthogonality of features under the cross-entropy loss, and leverages this observation to explicitly enforce orthogonality of features. 
These works successfully demonstrate the significance of producing clear decision boundaries in the feature space, as the learned features become more robust and even increase the separation of features for novel classes in a few-shot learning setting Ranasinghe _et al._ (2021). Motivated by the side effect that quantization poses to the feature space, we propose to search for a policy that can maintain the discriminative property of features on a small-scale dataset. We empirically find that such an MPQ policy can be applied directly to the model trained on challenging large-scale datasets. This allows us to decouple the datasets used for model training and MPQ policy search, thereby significantly saving search costs by using proxy datasets. ## 3 Method In this section, we first review the mixed-precision quantization (MPQ) problem in a differentiable formulation and discuss why it cannot be adopted directly on inconsistent datasets. Next, we consider MPQ policy searching from the feature perspective; namely, what kind of MPQ policy can ensure that the quantized model has deep features that generalize as well as those of its full-precision counterpart? Motivated by this observation, we introduce the separation regularization to search for the policy that guarantees the _discriminative property of deep features_. ### Problem Formulation We consider a differentiable MPQ policy searching process Cai and Vasconcelos (2020); Yu _et al._ (2020); Wang _et al._ (2021). Typically, the whole searching pipeline is organized as a Directed Acyclic Graph (DAG), where the nodes represent a specific quantization precision (_e.g.,_ 3 bit) and the edges represent the learnable weight for the corresponding quantization precision. Therefore, a differentiable searching graph is built to determine the optimal quantization bit-width through the learnable weights, by adding a complexity constraint (_e.g.,_ BitOPs, model size) to the loss function. Accordingly, the loss function is defined as \[\mathcal{L}=\mathcal{L}_{task}+\gamma\mathcal{L}_{comp}, \tag{1}\] where \(\mathcal{L}_{task}\) represents the task loss, _i.e.,_ the cross-entropy loss, that guarantees the classification accuracy, \(\mathcal{L}_{comp}\) denotes the complexity loss that guarantees the target computational budget (_i.e.,_ BitOPs), and \(\gamma\) is the hyper-parameter that controls the accuracy-complexity trade-off. \(\mathcal{L}_{comp}\) is defined as \[\mathcal{L}_{comp}=\sum_{l=0}^{L}\left(\sum_{j=0}^{||\mathbf{B}^{\mathbf{w}}||}{(p_{j}^{l,w}b_{j}^{w})}\sum_{k=0}^{||\mathbf{B}^{\mathbf{a}}||}{(p_{k}^{l,a}b_{k}^{a})}\right)comp^{l}, \tag{2}\] where \(p_{j}^{l,w}=\frac{\exp(\alpha_{j}^{l})}{\sum_{k=0}^{||\mathbf{B}^{\mathbf{w}}||}\exp(\alpha_{k}^{l})}\) and \(p_{k}^{l,a}=\frac{\exp(\beta_{k}^{l})}{\sum_{j=0}^{||\mathbf{B}^{\mathbf{a}}||}\exp(\beta_{j}^{l})}\), \(\mathbf{B}^{\mathbf{w}}\) and \(\mathbf{B}^{\mathbf{a}}\) are the pre-defined bit-width candidate sets for weights and activations, and \(\mathbf{\alpha^{l}}\) and \(\mathbf{\beta^{l}}\) are the learnable weight vectors for the bit-width candidates of layer \(l\), _e.g.,_ \(\alpha_{j}^{l}\in\mathbf{\alpha^{l}}\) represents the learned weight for bit-width candidate \(b_{j}^{w}\in\mathbf{B^{w}}\). \(comp^{l}\) is the BitOPs constraint of layer \(l\), \[comp^{l}=c_{in}^{l}\times c_{out}^{l}\times k_{a}^{l}\times k_{b}^{l}\times h_{out}^{l}\times w_{out}^{l}, \tag{3}\] where \(c_{in}\) and \(c_{out}\) are the numbers of input and output channels, respectively. 
\(k_{a}\) and \(k_{b}\) are the kernel sizes, and \(w_{out}\) and \(h_{out}\) are the width and height of the output feature map. After searching, the bit-width for the weights and activations of layer \(l\) is determined by an \(argmax\) function acting on its learnable weight vectors \(\mathbf{\alpha^{l}}\) and \(\mathbf{\beta^{l}}\). This paradigm and its variants Cai and Vasconcelos (2020); Huang _et al._ (2022) require the searching dataset to be consistent with the one used for full-precision model training, otherwise resulting in serious accuracy degradation Wang _et al._ (2021). Inevitably, using a consistent dataset leads to inefficiencies, especially on large-scale datasets like ISLVRC2012 Deng _et al._ (2009) with over 1 million samples to search over. However, when searching an MPQ policy on a proxy dataset (_e.g.,_ a small-scale dataset such as CIFAR-10 with only 50000 training samples) through Eq. 1 and then directly applying it to the model trained on a large-scale dataset (_e.g.,_ ISLVRC2012), even if the accuracy and complexity objectives are both met, the accuracy on the proxy dataset is not of direct interest to us, because high accuracy on the proxy dataset does not imply equivalently high accuracy on challenging large-scale datasets. One may argue that we can reduce the size of the target dataset to improve efficiency, for example by using a subset of the target dataset to conduct the MPQ search, but this would also result in serious performance degradation, as shown in Sec. 4.4. Accordingly, instead of optimizing the above improper objective on the proxy dataset, we aim to search for an MPQ policy that guarantees a large margin on the proxy dataset to handle the incoming classes of the large-scale dataset. Figure 1: The illustration of our approach. During the MPQ policy search process on the small-scale proxy dataset, we not only use the conventional classification loss and complexity loss as the optimization objective, but also introduce a large-margin constraint to search for a policy that ensures the discriminative property in the feature space. In short, we hope the searched MPQ policy, with a general and favorable attribute (gathering the features of the same classes and separating the features of different classes), can be applied to the target large-scale dataset (_e.g.,_ ISLVRC-2012) for model deployment effectively. ### Exploiting the Class-level Information From the perspective of class-level features, under a well-performing MPQ policy they should be well separated if not in the same class, and tightly gathered if in the same class. This has the following benefits: **a)** It alleviates the side effect of quantization on the classification boundary. As shown in Fig. 2(a) and Fig. 2(b), we observe that quantization sharply narrows the class boundaries in the feature space compared to the full-precision model. Therefore, an MPQ policy with an explicit feature separation guarantee can effectively alleviate the side effect of quantization. **b)** It is a widely pursued and _dataset-independent_ attribute, as both classical statistical machine learning and recent deep learning research Wan _et al._ (2018); Ranasinghe _et al._ (2021); Liu _et al._ (2016) recognize that a large classification margin in feature space can help generalization. Motivated by this, we aim to search for the MPQ policy that guarantees as large a class margin on the proxy data distribution as possible. As we discussed above, such a general property in the searched MPQ policies can ensure usability across data distributions. 
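To make the class-level quantities used in this section measurable, the following is a minimal sketch (our own illustration, not code from the paper) of how intra-class compactness and inter-class separation could be computed on a batch of penultimate-layer features; the Euclidean metric and the use of class means as centroids are assumptions. Comparing these statistics for full-precision and quantized features gives a numeric counterpart to the margin narrowing visualised in Fig. 2.

```python
import numpy as np

def class_margin_stats(features: np.ndarray, labels: np.ndarray):
    """Intra-class compactness and inter-class separation of deep features (sketch).

    features: (N, D) penultimate-layer features; labels: (N,) integer class ids.
    Requires at least two classes in the batch.
    """
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # intra-class: average distance of each feature to its own class mean
    intra = np.mean([np.linalg.norm(features[labels == c] - means[i], axis=1).mean()
                     for i, c in enumerate(classes)])
    # inter-class: smallest pairwise distance between class means
    dists = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=-1)
    inter = dists[~np.eye(len(classes), dtype=bool)].min()
    return intra, inter  # small intra and large inter indicate a clear margin
```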
However, the cross-entropy cannot provide this property, as the class margin is not explicitly formulated. Therefore, the objective is not only to optimize accuracy and complexity, but also to find an MPQ policy that maximizes the class margin. The illustration of our approach is shown in Fig. 1. We regard our approach as a class-level proxy data utilization, as it discovers the effective MPQ policy by leveraging the inter-class and intra-class information on the proxy dataset. The 2D visualization of our approach is shown in Fig. 2(c); we observe that the t-SNE pattern is quite similar to that of the full-precision model, indicating that an MPQ policy able to separate the features has been found for the quantized model. ### Separation Regularization The first term in Eq. 1 is the soft-max cross-entropy loss Cai and Vasconcelos (2020); Wang _et al._ (2021). For simplicity, we revisit it here by considering a binary classification problem, which can be trivially generalized to multi-class classification, \[\mathcal{L}_{task}=-\log\,\frac{\exp(\mathbf{w_{1}^{T}}\mathbf{g})}{\exp(\mathbf{w_{1}^{T}}\mathbf{g})+\exp(\mathbf{w_{2}^{T}}\mathbf{g})}=-\log\,\frac{1}{1+\underbrace{\exp(\mathbf{w_{2}^{T}}\mathbf{g}-\mathbf{w_{1}^{T}}\mathbf{g})}_{\text{equivalent optimized term}}}, \tag{4}\] where \(\mathbf{w_{1}^{T}}\) and \(\mathbf{w_{2}^{T}}\) are the weights for class 1 and class 2, respectively. \(\mathbf{g}\) is the deep feature of the model produced by several convolution layers (_i.e.,_ layers that need to be quantized to mixed-precision). Since the equivalent optimized term does not carry the margin objective during optimization, Eq. 4 cannot explicitly guarantee any margin between classes. Some previous works even observe that the learned feature regions for some classes tend to be bigger than others. When this is combined with the side effect of quantization on decision boundaries, it inevitably leads to the search for sub-optimal MPQ policies. In other words, the performance objective in Eq. 1, the cross-entropy, is improper when the MPQ searching and full-precision model training datasets are inconsistent. To this end, we introduce separation regularization to enforce a large-margin guarantee in the searched policy. Firstly, a small intra-class variance should be achieved to compact the features, \[\min_{q}\sum_{i=1}^{N}q_{i},\quad\text{where}\quad q_{i}=d(\mathbf{g_{i}},\mathbf{\mu_{y_{i}}}), \tag{5}\] where \(N\) is the number of samples, \(\mathbf{g_{i}}\) and \(y_{i}\) are the feature and label (ground truth) of sample \(i\), and \(\mathbf{\mu_{y_{i}}}\) is the feature mean of class \(y_{i}\). \(d(\cdot,\cdot)\) is the metric for computing the distance between the feature and its mean (_e.g.,_ L2 distance). Secondly, we consider the inter-class margin by minimizing a classification loss as \[\min\mathcal{L}_{cls}=\min_{o}\sum_{i=1}^{N}\sum_{j=1}^{K}o_{i,j},\] \[o_{i,j}=\begin{cases}-\log\frac{\exp(h_{j}(\mathbf{g_{i}};m))}{\hat{\text{a}}},&\text{if }j=y_{i}\\ 0,&\text{otherwise},\end{cases} \tag{6}\] \[\hat{\text{a}}=\sum_{k=1}^{K}\mathbbm{1}\left(k\neq y_{i}\right)\exp\left(h_{k}(\mathbf{g_{i}};0)\right)+\exp\left(h_{j}(\mathbf{g_{i}};m)\right),\] where \(h(\cdot;\cdot)\) is a map from the feature space \(\mathbb{R}^{D}\) (_i.e.,_ \(\mathbf{g}\)) to class-wise prediction scores and \(\mathbbm{1}\left(\cdot\right)\) is the indicator function. 
\(m\) is a non-negative scalar that represents the margin between different classes, forming an explicit classification margin between the label class of sample \(i\) and the other classes in feature space, _i.e.,_ \(h_{j}(\mathbf{g_{i}};m)>h_{k}(\mathbf{g_{i}};0)\) (\(k\neq j\), and \(j=y_{i}\)). One can see that Eq. 6 becomes the classic log-softmax cross-entropy loss when \(h(\cdot;m)\) is a linear transformation and \(m\equiv 0\); _e.g.,_ in classic softmax cross-entropy, a linear layer with weight \(\textbf{W}\in\mathbb{R}^{D\times K}\) and no biases is used to project the deep feature \(\mathbf{g_{i}}\) to \(\mathbb{R}^{K}\); denoting \(\mathbf{w_{j}}\) as the \(j\)-th column vector of \(\mathbf{W}\), we have \(h_{j}(\mathbf{g_{i}};0)=\mathbf{w_{j}^{T}}\mathbf{g_{i}}\). Fig. 2: The deep feature 2D visualization (t-SNE Van der Maaten and Hinton (2008)) on the proxy dataset CIFAR-10 for **(a)** the full-precision ResNet18, **(b)** the MPQ policy searched directly through EdMIPS Cai and Vasconcelos (2020) and **(c)** the MPQ policy searched through the proposed method. Colors represent different classes. Please note that when \(m\neq 0\), the classification margin requires the output sign of \(h\) to always be either positive or negative, which is not always satisfied in a classic softmax cross-entropy loss, as the sign of the linear projection is not certain. We hence follow the previous work L-GM Wan _et al._ (2018), which assumes the feature \(\mathbf{g_{i}}\) follows a Gaussian Mixture Distribution (GMD). Namely, \[p(\mathbf{g_{i}})=\sum_{k=1}^{K}p(k)\mathcal{N}(\mathbf{g_{i}};\mathbf{\mu_{k}},\,\mathbf{\Sigma_{k}}), \tag{7}\] where \(p(k)\) is the prior probability of class \(k\), and \(\mathbf{\mu_{k}}\) and \(\mathbf{\Sigma_{k}}\) are the mean and covariance of class \(k\). The posterior probability of feature \(\mathbf{g_{i}}\) is derived through Bayes' rule, \[p(y_{i}|\mathbf{g_{i}})=\frac{p(y_{i})\mathcal{N}(\mathbf{g_{i}};\mathbf{\mu_{y_{i}}},\,\mathbf{\Sigma_{y_{i}}})}{p(\mathbf{g_{i}})}=\frac{p(y_{i})\mathcal{N}(\mathbf{g_{i}};\mathbf{\mu_{y_{i}}},\,\mathbf{\Sigma_{y_{i}}})}{\sum_{k=1}^{K}p(k)\mathcal{N}(\mathbf{g_{i}};\mathbf{\mu_{k}},\,\mathbf{\Sigma_{k}})}. \tag{8}\] Under the GMD assumption, we can easily derive the additive inter-class margin according to \[h_{y_{i}}(\mathbf{g_{i}};m)=p(y_{i})\mathcal{N}(\mathbf{g_{i}};\mathbf{\mu_{y_{i}}},\mathbf{\Sigma_{y_{i}}},m)=p(y_{i})|\mathbf{\Sigma_{y_{i}}}|^{-\frac{1}{2}}\exp\{-(\underbrace{\tfrac{1}{2}(\mathbf{g_{i}}-\mathbf{\mu_{y_{i}}})^{T}\mathbf{\Sigma_{y_{i}}}^{-1}(\mathbf{g_{i}}-\mathbf{\mu_{y_{i}}})}_{\text{non-negative}}+m)\}, \tag{9}\] where \(h(\cdot)\) is formulated from a probability perspective and is thus guaranteed to be non-negative. By replacing the subscript \(y_{i}\) of Eq. 9 with \(k\) and setting \(m=0\), we can derive \(h_{k}(\mathbf{g_{i}};0)=p(k)\mathcal{N}(\mathbf{g_{i}};\mathbf{\mu_{k}},\mathbf{\Sigma_{k}},0)\). Substituting it and Eq. 9 into Eq. 6, we obtain \(\mathcal{L}_{cls}\) accordingly. Finally, we apply a log-likelihood term Wan _et al._ (2018) to restrict the feature \(\mathbf{g_{i}}\) to be centralized near its mean \(\mathbf{\mu_{y_{i}}}\), achieving intra-class compactness according to Eq. 5 and Eq. 7, \[\mathcal{L}_{inc}=\sum_{i=1}^{N}q_{i}=\sum_{i=1}^{N}d(\mathbf{g_{i}},\mathbf{\mu_{y_{i}}})=\sum_{i=1}^{N}-\log\;p(y_{i})\mathcal{N}(\mathbf{g_{i}};\mathbf{\mu_{y_{i}}},\,\mathbf{\Sigma_{y_{i}}}). 
\tag{10}\] For simplicity, we assume \(p(y_{i})=\frac{1}{K}\) and that \(\mathbf{\Sigma_{y_{i}}}\) is diagonal. Thus, the optimization objective during MPQ searching is \[\mathcal{L}=\mathcal{L}_{cls}+\lambda\mathcal{L}_{inc}+\gamma\mathcal{L}_{comp}, \tag{11}\] where \(\mathcal{L}_{cls}\) is the classification loss, \(\mathcal{L}_{inc}\) is the intra-class compactness loss and \(\mathcal{L}_{comp}\) is the complexity loss. \(\lambda\) and \(\gamma\) are the hyper-parameters that weight the corresponding losses in the optimization process. ## 4 Experiment ### Settings #### Datasets The proxy (MPQ policy searching) datasets are CIFAR-10 Krizhevsky _et al._ (2009) and StanfordCars Krause _et al._ (2013). CIFAR-10 has 10 categories, and each category has 5000 training samples and 1000 test samples. StanfordCars has 196 categories of cars; the training set has 8144 samples, and the test set has 8041 samples. The target (model training) dataset is ISLVRC-2012 Deng _et al._ (2009) with 1000 categories, containing about 1.28M training samples and 50000 validation samples. We search the MPQ policy on the training sets of the proxy datasets. We evaluate the final performance on the ISLVRC-2012 validation set. We use basic data augmentation methods during finetuning, _i.e.,_ the input images are randomly cropped to 224\(\times\)224 with horizontal flipping. #### Models We conduct the experiments on three representative models including ResNet-{18, 50} He _et al._ (2016) and MobileNet Howard _et al._ (2017). In particular, we use the standard architecture for ResNet. #### Hyper-parameters For ResNet and MobileNet, the bit-width candidates of weights and activations are \(\mathbf{B^{w}}=\mathbf{B^{a}}=\{2,3,4,6\}\) and \(\mathbf{B^{w}}=\mathbf{B^{a}}=\{2,3,4,5,6\}\), respectively. Following previous works Wang _et al._ (2019); Esser _et al._ (2020); Tang _et al._ (2022), the first and last layers are fixed to 8 bits. For searching, we adopt the SGD optimizer, and the initial learning rate is set to \(0.01\). We search for 15 epochs on the proxy datasets. Empirically, we find the intra-class compactness regularization is not sensitive to the hyper-parameter and set \(\lambda=0.1\) for all proxy datasets. We set the class margin to \(m=0.3\) and \(m=0.01\) for CIFAR-10 and StanfordCars, respectively. See the supplementary material for more details. For finetuning (quantizing), we follow the basic quantization-aware training settings in LSQ Esser _et al._ (2020) and LIMPQ Tang _et al._ (2022). Specifically, we use the full-precision model as the initialization and adopt the SGD optimizer with Nesterov momentum Sutskever _et al._ (2013), and the initial learning rate and weight decay are set to \(0.04\) and \(2.5\times 10^{-5}\), respectively. For fair comparisons, we do not apply knowledge distillation in Sec. 4.2. We use the cosine learning rate scheduler and finetune the model for 90 epochs, with the first 5 epochs used as warm-up. ### Comparisons with the State-of-the-Art We compare our method with the SOTA quantization works on the classification task. For fixed-precision works, we compare our method with PACT Choi _et al._ (2018), PROFIT Park and Yoo (2020) and LSQ Esser _et al._ (2020). For MPQ works, we compare our method with DNAS Wu _et al._ (2018), HMQ Habi _et al._ (2020), HAQ Wang _et al._ (2019), BP-NAS Yu _et al._ (2020), FracBits Yang and Jin (2021), GMPQ Wang _et al._ (2021), SDQ Huang _et al._ (2022) and LIMPQ Tang _et al._ (2022). 
Specifically, since the original LSQ and GMPQ use the Pre-Activation ResNet architecture, we re-implement them for fair comparisons with the vanilla ResNet He _et al._ (2016). #### ResNet We show the mixed-3bits and mixed-4bits results of ResNet-{18, 50}, as listed in Tab. 1. We provide the full-precision accuracy to compare the _absolute accuracy degradation_ between the full-precision and quantized model. For ResNet18, under 3-bits level BitOPs constraints, “Ours-C” causes only \(0.5\%\) Top-1 accuracy degradation compared to the full-precision model, which is the lowest among recent works. Under 4-bits level BitOPs constraints, “Ours-C” achieves the highest Top-1 accuracy. Meanwhile, it achieves about a 160\(\times\) policy search speedup compared with FracBits. Thanks to the small data amount of StanfordCars, “Ours-S” uses only 8041 training samples to search for a very competitive MPQ policy. For ResNet50, we search for 4-bits level policies. One can see that our method achieves quite similar performance compared to the gradient-based methods BP-NAS and FracBits while further reducing the search time significantly. In summary, we observe that using CIFAR-10 as the proxy dataset causes the least accuracy degradation, probably because the categories in CIFAR-10 are somewhat similar to those in ISLVRC-2012. However, using StanfordCars as the proxy dataset can further reduce the search time, since it has fewer samples than CIFAR-10. Overall, our method not only achieves comparable accuracy to searching directly on ISLVRC-2012, but also significantly improves the searching efficiency. #### MobileNet Tab. 2 summarizes the results of mixed-3bits and mixed-4bits on MobileNetv1. For mixed-3bits searched on CIFAR-10, we observe that our method outperforms both the existing SOTA mixed-precision work LIMPQ and the fixed-precision work LSQ. In particular, our method achieves a 1.8% absolute gain in Top-1 accuracy compared to LSQ, and 1.2% higher accuracy than FracBits. We further narrow the gap between the full-precision and quantized MobileNet. Please note that we are the first work to provide a 3-bits level MobileNet that almost achieves 70% Top-1 accuracy. For mixed-4bits searched on CIFAR-10, our method has up to a 237\(\times\) searching efficiency improvement compared to FracBits and up to 0.4% higher accuracy compared to the SOTA efficient MPQ approach LIMPQ. The mixed-3bits and mixed-4bits policies searched on StanfordCars show 0.3% and 0.1% absolute Top-1 accuracy degradation compared to CIFAR-10 but further save about 20% of the searching cost. This further proves that our method can still be very effective even if the proxy dataset (_i.e.,_ all cars) has much lower class-similarity to the target dataset. ### Discussion for Proxy Datasets In this subsection, we observe that using CIFAR-10 as a proxy dataset yields better-performing MPQ policies than StanfordCars. On the other hand, StanfordCars offers higher search efficiency than CIFAR-10. We conjecture this is because the categories of CIFAR-10 are more similar to those of the target dataset ISLVRC-2012, and CIFAR-10 contains more data than StanfordCars. Meanwhile, we find that the performance loss of policies searched on StanfordCars is slightly larger than that of CIFAR-10 when the complexity constraint becomes tighter, _e.g.,_ the mixed-3bits results for MobileNet. 
Therefore, while it is feasible to search for a well-performing MPQ policy using an arbitrary proxy dataset, if the model requires more aggressive quantization, a proxy dataset with more class-similarity to the target dataset could be considered to further improve the performance. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Method & W-bits & A-bits & Top-1 Q/FP (\%) & BitOPs (G) & Cost (h) \\ \hline \multicolumn{6}{c}{ResNet18} \\ \hline PACT & 3 & 3 & 68.1 / 70.4 & 23.09 & - \\ LSQ\({}^{*}\) & 3 & 3 & 69.4 / 70.5 & 23.09 & - \\ EdMIPS & 3MP & 3MP & 68.2 / 69.6 & - & 9.8 \\ GMPQ\({}^{*}\) & 3MP & 3MP & 68.6 / 70.5 & 22.8 & 0.6 \\ DNAS & 3MP & 3MP & 68.7 / 71.0 & 25.38 & - \\ FracBits & 3MP & 3MP & 69.4 / 70.2 & 22.93 & 150.1 \\ LIMPQ & 3MP & 3MP & 69.7 / 70.5 & 23.07 & 3.3 \\ \hline Ours-C & 3MP & 3MP & **70.0** / **70.5** & 23.07 & 0.9 \\ Ours-S & 3MP & 3MP & 69.6 / 70.5 & 23.06 & 0.3 \\ \hline PACT & 4 & 4 & 69.2 / 70.4 & 35.04 & - \\ LSQ\({}^{*}\) & 4 & 4 & 70.5 / 70.5 & 35.04 & - \\ DNAS & 4MP & 4MP & 70.6 / 71.0 & - & - \\ FracBits & 4MP & 4MP & 70.6 / 70.2 & 34.70 & 151.3 \\ LIMPQ & 4MP & 4MP & 70.8 / 70.5 & 35.04 & 3.3 \\ \hline Ours-C & 4MP & 4MP & **70.8** / **70.5** & 34.7 & 0.9 \\ Ours-S & 4MP & 4MP & 70.5 / 70.5 & 34.7 & 0.4 \\ \hline \multicolumn{6}{c}{ResNet50} \\ \hline HAQ & 4MP & 8 & 76.1 / 76.2 & 136.5 & - \\ BP-NAS & 4MP & 4MP & 76.7 / 77.5 & 64.4 & 35.6 \\ FracBits & 4MP & 4MP & 76.5 / 77.5 & 71.17 & 630.6 \\ \hline Ours-C & 4MP & 4MP & **76.8** / **77.5** & 70.43 & 1.73 \\ \hline \hline \end{tabular} \end{table} Table 1: Accuracy and efficiency results for ResNet. “Top-1 Q/FP” represents the Top-1 accuracy of the quantized model and the full-precision model. “MP” means mixed-precision quantization. “Cost” denotes the MPQ policy search time, measured in GPU-hours. “*”: reproduced with the vanilla ResNet architecture He _et al._ (2016). “Ours-C”: denotes the MPQ policies searched on CIFAR-10. “Ours-S”: denotes the MPQ policies searched on StanfordCars. The lowest accuracy degradation results are bolded in each metric. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Method & W-bits & A-bits & Top-1/5 (\%) & BitOPs (G) & Cost (h) \\ \hline PACT & 4 & 4 & 62.4 / 82.2 & 9.68 & - \\ LSQ & 3 & 3 & 68.3 / 88.1 & 5.8 & - \\ HMQ & 3MP & 4MP & 69.3 / - & - & - \\ FracBits & 3MP & 3MP & 68.7 / 88.2 & 5.78 & 237.2 \\ LIMPQ & 3MP & 3MP & 69.5 / 89.1 & 5.78 & - \\ Ours-C & 3MP & 3MP & **69.9** / **89.3** & 6.28 & 1.0 \\ Ours-S & 3MP & 3MP & 69.6 / 89.2 & 6.13 & 0.8 \\ \hline PACT & 6 & 4 & 67.5 / 87.8 & 14.13 & - \\ PROFIT & 4 & 4 & 69.1 / 88.4 & 9.68 & - \\ LSQ & 4 & 4 & 71.2 / 90.0 & 9.68 & - \\ HAQ & 4MP & 4MP & 67.5 / 87.9 & - & - \\ HAQ & 6MP & 4MP & 70.4 / 89.7 & - & - \\ FracBits & 4MP & 4MP & 71.4 / 90.0 & 9.63 & 250.2 \\ LIMPQ & 4MP & 4MP & 71.8 / 90.4 & 9.68 & - \\ \hline Ours-C & 4MP & 4MP & **71.8** / **90.5** & 9.30 & 1.1 \\ Ours-S & 4MP & 4MP & 71.7 / 90.3 & 9.86 & 0.8 \\ \hline \hline \end{tabular} \end{table} Table 2: Accuracy and efficiency results for MobileNetv1. “Top-1/5” represents Top-1 and top-5 accuracy respectively. ### Complexity-Accuracy Trade-off In Fig. 3, we show the complexity-accuracy trade-off of LSQ Esser _et al._ (2020), EdMIPS Cai and Vasconcelos (2020) and our method for ResNet18 and MobileNet. Unless otherwise specified, the proxy dataset used in our method is CIFAR-10. For ResNet18, our method achieves significant performance gains compared to the mixed-precision approach EdMIPS. 
We even consistently have an absolute advantage of over 2% Top-1 accuracy. For MobileNet, our method provides a very high accuracy improvement within the constraints of approximate complexity. In particular, our method improves Top-1 accuracy by 4.9% compared to LSQ at the 3G BitOPs constraint. Meanwhile, our method offers a much more fine-grained trade-off thanks to the mixed-precision quantization. ### Ablation Study Although GMPQ has shown that directly searching over the proxy dataset incurs severe performance degradation, there is no relevant literature studying the effect of using a subset of the target dataset (_e.g.,_ ISLVRC-2012) as the proxy dataset. To this end, we randomly sample 4% (roughly the same sample size as CIFAR-10) of the training data from ISLVRC-2012 and use them to search a 3-bits level policy for ResNet18 without/with the proposed method. As shown in Tab. 3, the subset of ISLVRC-2012 without the proposed method still has about 1% performance degradation compared to CIFAR-10 with the proposed method. This is because the data distribution in the subset is significantly different from the full set. When the proposed method is enabled, this subset yields superior performance to StanfordCars. That further demonstrates the effectiveness of our method, and indicates that we can gain more performance by leveraging the class-similarity between proxy and target datasets. ### Bit-width Assignment Behavior In Fig. 4, we visualize the searched MPQ policies for the mixed-3bit ResNet18 and MobileNet. For ResNet, we clearly see that almost the highest bit-width is given to the residual convolution layers. That is because these layers are more important for bypassing signals from shallow to deep layers Veit _et al._ (2016), as well as having fewer parameters. For MobileNet, we find that a higher bit-width is assigned to the Depthwise-Convolution (DW) layers than to the Pointwise-Convolution (PW) layers, as the DW layer is typically less redundant Tang _et al._ (2022). ### Effectiveness of Knowledge Distillation As knowledge distillation (KD) is a widely-used technique to regain the performance of the quantized model Park and Yoo (2020); Huang _et al._ (2022), we study the effectiveness of KD for our method here. Following SDQ Huang _et al._ (2022), we use a ResNet101 as the full-precision distillation teacher during fine-tuning. The distillation temperature is set to 1. We compare our method with GMPQ and SDQ using 3-bits level (about 23G BitOPs) searched policies. As shown in Tab. 4, our approach achieves the highest performance when knowledge distillation is applied. In particular, compared to the state-of-the-art work SDQ under approximate complexity, our method attains an absolute accuracy improvement of 0.5%. That indicates our method can search the optimal MPQ policy properly on a small-scale proxy dataset. \begin{table} \begin{tabular}{c c c c} \hline \hline Method & Teacher & Top-1 (\%) & BitOPs (G) \\ \hline Full-precision & - & 70.5\% & FP \\ Ours - w/o KD\({}^{*}\) & - & 70.0\% (-0.5\%) & 23.07 \\ GMPQ - KD & ResNet101 & 69.5\% (-1.0\%) & 22.8 \\ SDQ - KD & ResNet101 & 70.2\% (-0.3\%) & 23.5 \\ Ours - KD & ResNet101 & 70.7\% (**+0.2\%**) & 23.07 \\ \hline \hline \end{tabular} \end{table} Table 4: Results of finetuning the ResNet18 with an external teacher model ResNet101. *: result from Tab. 1. Figure 4: Bit-width assignment for mixed-3bit MobileNet and ResNet18. Figure 3: Complexity-accuracy trade-off for ResNet18 and MobileNet. 
## 5 Conclusion In this work, we propose to search for the MPQ policy on a small-scale proxy dataset for a model trained on a large-scale one. To bridge the inconsistent data distributions, we not only focus on optimizing the accuracy on the proxy dataset, but also enforce that the searched MPQ policy meets a large-margin requirement. We regard this as a _class-level_ data exploitation of the limited proxy data, which is more data efficient than the _instance-level_ data exploitation Wang _et al._ (2021). Our class-level data exploitation ensures that the searched policies can compact the features of the same classes and separate the features of different classes, which is a favorable and dataset-independent property. The experiments validate our idea: we use only 4% of the data to search for high-quality MPQ policies, achieving the same accuracy as searching directly on the large-scale dataset, and speeding up the MPQ searching process by up to 300\(\times\).
2304.13678
A marker-less human motion analysis system for motion-based biomarker discovery in knee disorders
In recent years the NHS has had increasing difficulty seeing all low-risk patients, including, but not limited to, suspected osteoarthritis (OA) patients. To help address the increased waiting lists and shortages of staff, we propose a novel method of automated biomarker identification for the diagnosis of knee disorders and the monitoring of treatment progression. The proposed method allows for the measurement of biomechanics and the analysis of their clinical significance, as a cheap and sensitive alternative to the currently available commercial systems. These methods and results validate the capability of standard RGB cameras to capture motion in clinical environments and show that their accuracy is comparable to alternatives such as depth cameras. Biomarker identification using Principal Component Analysis (PCA) allows the reduction of dimensionality to produce the most representative features from motion data; these new biomarkers can then be used to assess the success of treatment and track the progress of rehabilitation. This was validated by applying these techniques in a case study based on the exploratory use of local anaesthetic for knee pain, which allowed the new representative biomarkers to be validated as statistically significant (p-value < 0.05).
Kai Armstrong, Lei Zhang, Yan Wen, Alexander P. Willmott, Paul Lee, Xujioing Ye
2023-04-26T16:47:42Z
http://arxiv.org/abs/2304.13678v1
# A marker-less human motion analysis system for motion-based biomarker discovery in knee disorders ###### Abstract In recent years the NHS has had increasing difficulty seeing all low-risk patients, including, but not limited to, suspected osteoarthritis (OA) patients. To help address the increased waiting lists and shortages of staff, we propose a novel method of automated biomarker identification for the diagnosis of knee disorders and the monitoring of treatment progression. The proposed method allows for the measurement of biomechanics and the analysis of their clinical significance, as a cheap and sensitive alternative to the currently available commercial systems. These methods and results validate the capability of standard RGB cameras to capture motion in clinical environments and show that their accuracy is comparable to alternatives such as depth cameras. Biomarker identification using Principal Component Analysis (PCA) allows the reduction of dimensionality to produce the most representative features from motion data; these new biomarkers can then be used to assess the success of treatment and track the progress of rehabilitation. This was validated by applying these techniques in a case study based on the exploratory use of local anaesthetic for knee pain, which allowed the new representative biomarkers to be validated as statistically significant (p-value < 0.05). ## Introduction The knee is one of the most commonly injured or affected joints in the human body; there are many risk factors associated with the knees, such as age, weight, and occupation, and knee OA is the most common joint disorder in the United States [1]. As a result, in the UK alone there are over 90,000 total knee replacements each year; total knee replacement is one of the only methods of reducing the pain associated with walking and returning patients to their daily lives [2, 3]. The cost associated with these total knee replacements is on average over £7,000 per replacement, or a cost per Quality-adjusted Life Year (QALY) gained of over £1,300, which means that these operations cost the NHS over £600 million per year [4]. The MRI scan is commonly used for examining the severity of OA; figure 1 demonstrates two severe cases of knee OA. This reliance on MRI has led to a rapidly increasing waiting list, creating further burdens on the NHS [5]. The cost and limited accessibility of MRI scans frequently create pressure on the NHS due to high demand from the population; in the most recent annual review it was reported that the median waiting time was 8.6 weeks [6]. Moreover, delays to the MRI examination allow the knee disorder to degrade further. To this end, this study aims to develop an automated system for the analysis of objective measurements which can be used as a diagnostic biomarker to rival the current gold standards in the diagnosis of musculoskeletal (MSK) issues. To develop a fully end-to-end motion capture-based biomechanics solution for clinical analyses, there are many obstacles to be faced: the complexity and variability of clinical data capture environments, the clinical relevance of the tests performed, and the complexity in the analysis of the results. 
Each of these problems can be solved by a wide range of solutions, which are dependent on the capabilities of the clinical practice or hospital. However, not all of these options are viable if the aim is to create a cheaper and faster alternative to traditional motion capture. To achieve this, it is important not only to assess the accuracy of different techniques but also to identify whether the techniques are suitable for clinical applications. For example, many studies have been performed to compare the accuracy of human pose estimation techniques, but very few have developed methods to analyse the clinical significance of the human pose data for knee disease [7, 8]. The methodology outlined in this study has been developed to analyse clinical motion data both before and after treatment, thus providing a system to track and analyse the rehabilitation of patients. The main problem faced when applying motion capture and biomechanics analysis in a clinical setting is that environmental factors are difficult to control. The lighting of a room varies in intensity and direction throughout the day, and these conditions need to be controlled to minimise the variation in any data collected [9]. Another problem faced when using motion capture for clinical biomechanics analysis comes from the patients themselves: not every patient will be wearing the same clothes, for example, which results in variations in the contrast between the recorded subject and the room [10, 11, 12]. Each individual patient will also have different functional capabilities; therefore, the actions performed must be chosen very carefully to develop a series of tests that everybody will be able to perform. To assess the viability of marker-less RGB-based motion capture for clinical biomechanics analysis, multiple comparisons against the available alternatives are needed. The major factors that determine the clinical significance of this technology are the sensitivity of the biomechanics extraction and the accuracy of the human pose estimation, which are required to produce both representative and reproducible motion data. To alleviate the concerns around the methods currently used in clinical environments, we propose a new marker-less motion capture system to identify individualised biomarkers and provide a framework for tracking these biomarkers to monitor either disease progression or rehabilitation progress. This method utilises PCA and customisable biomechanics calculations to identify new motion-based biomarkers and automatically produce a medical report to present to the patient or physician. To validate these methods, we applied them in a small clinical trial; by administering a local anaesthetic to a small population with knee pain, it is possible to assess the sensitivity of this technique and determine the significance of the newly identified biomarkers. ### Background Biomechanics, the study of motion, has frequently been used in the fields of sports science and medicine to allow more precise measurements to be taken to assess the peak performance of athletes and provide a better understanding of the muscles required to make improvements [13]. One field which benefits from this focus is kinesiology, which seeks to understand which muscles contract and relax to allow people to move. However, this also requires expensive tools to measure the electromyography of the muscles, the electrical activity measured from each muscle [14]. 
Therefore, newer methods for biomechanics analysis are required to meet the changing requirements for cheaper and easier-to-use techniques. The gold standard technique for biomechanics analysis is marker-based motion capture, which has always relied on infrared (IR) depth sensors to capture the positions of retroreflective markers; this brings its own complications, such as the placement of the markers, marker drift, or the markers being occluded from the cameras [15]. Traditional motion capture suffers from a major drawback that results in a lack of clinical use: this technique requires a long period dedicated to capturing the subject and a highly trained expert to both capture and prepare the motion data, which is not often available in clinical environments [16]. Another technology developed to build on the gold standard motion capture techniques is smaller-scale IR-based gait analysis. The KneeKG (Emovi, Canada), for example, uses a smaller set of retroreflective markers placed around the hip and knee to measure walking gait on a treadmill [17]. This allows for clinical use of the gold standard technique but still has some of the same drawbacks: this method still relies on the skill of the practitioner in their marker placement and their understanding of the software used to capture the data, and a treadmill is not always a tool clinicians will have available. These drawbacks of motion capture systems have allowed the development of many new techniques to measure biomechanics. One of the most popular is inertial measurement units (IMUs), which capture the acceleration and rotation of a sensor; using multiple IMUs and the data they provide, it is possible to estimate the position and orientation of limbs [18]. These sensors have been widely used in gait analysis since they can be used both in a clinical setting on a treadmill and in the wild; for this reason, IMU-based methods have been widely adopted by sports teams to assess the performance of their players both during a game and throughout their training [19]. However, due to the nature of the technique, this still suffers from the same limitations as marker-based motion capture. In addition, to achieve the same complexity in the data in terms of size and accuracy, these more complex systems have an increased cost associated with them. Dedicated sensors can also be applied in another direction, focusing on forces rather than position and orientation; these rely on force plates to measure the ground reaction force, and this is therefore a simpler method that does not provide the same amount of information as other techniques. Newer force plates use multiple load cells in the plate to measure the force at different points in the plate; this allows many new measurements to be made, including the stability of a subject or the difference between the takeoff and landing positions in a jump [20]. Some force plate-based methods can also capture gait, either using a long stretch of force plates or treadmills with built-in force plates in the base; however, these come with increased cost and space requirements, which is not feasible for some clinical environments [16]. One main issue with using these pre-existing clinical tools is the limited data from their specific use-cases: the commercial options can typically only be used for gait analysis, due to its prominence in the literature for use in medical diagnosis [21]. 
However, this can cause complications with patients in older demographics who find it difficult to walk for extended periods of time [22]. Another issue is that these technologies collect very few biomarkers, focused on lower limb biomechanics, and therefore ignore the importance of understanding human movement as a global co-ordinate system in which every force generated is inter-linked with multiple joints and muscles [23]. Currently, the technology linked to the gold standard is the use of video cameras with a dedicated time-of-flight near-IR depth sensor (RGB-D), such as the Azure Kinect (Microsoft, USA); this allows for a multimodal approach to motion capture on a much smaller scale. The Azure Kinect skeleton tracking, which utilises a recurrent neural network, has been directly compared to the gold standard motion capture techniques and produces comparable accuracy in terms of joint position and rotation [24]. Previous research has compared this technique to the marker-based KneeKG for gait analysis, and it was found to be more representative of human movement, with no restrictions from markers on the body [25]. However, depth sensors can suffer from occlusion in some environments that are outside the clinician's control; these include too much natural light, dark flooring contrasting with clothes, and white walls causing the IR light to scatter [26]. Building on the deep learning-based methods used with RGB-D cameras, it is also possible to estimate the 3D positions and orientations of a skeleton model from 2D RGB images or videos [27]. These techniques were built upon further by extracting a 3D mesh of a person from the 2D RGB data, which provides more anatomical detail for the face, body, and hands. The current standards for 3D mesh models are SMPL (Skinned Multi-Person Linear) and SMPL-X, which have had many recent iterations and continuous improvement depending on the desired use [28]. Building further upon these models that infer a 3D mesh from a given 2D RGB image, there are also multiple techniques which infer a body in motion from a 2D RGB image sequence; these use not only the spatial domain for their inference but also the temporal domain, creating a mesh that is both physically feasible and moves realistically by using a motion discriminator in the pipeline [29, 30]. ## Results Feature engineering and biomarker identification show that the flexion of the right and left knee are the most representative biomarkers during both the squat and sit-to-stand. Initially, the data created consisted solely of positions and orientations of joints in a 3D Cartesian co-ordinate system for both a squat and a sit-to-stand action, which first needs to be transformed and engineered into clinically relevant features. The PCA results show the most representative features as histograms in figures 2 and 3; these were derived from the total counts of the top five most represented features for each patient and each action performed. Figure 2, for example, shows the most represented features among all patients in the squat action to be the mean and maximum knee flexion for both the left and right side. On the other hand, the sit-to-stand feature histogram shown in figure 3 shows that the most representative features also include both arm abduction and elbow flexion. 
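For illustration, the PCA-based ranking of engineered features described above could be sketched as follows; this is our own minimal example rather than the authors' implementation, and the helper name, the standardisation step and the use of first-principal-component loadings as the ranking criterion are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def top_features_per_trial(feature_matrix, feature_names, k=5):
    """Rank engineered motion features by their loading on the first principal
    component and return the k most representative feature names (sketch).

    feature_matrix: (n_repetitions, n_features) array of engineered features
    feature_names:  list of n_features strings, e.g. "max right knee flexion"
    """
    X = (feature_matrix - feature_matrix.mean(axis=0)) / (feature_matrix.std(axis=0) + 1e-8)
    pca = PCA(n_components=1).fit(X)
    loadings = np.abs(pca.components_[0])      # contribution of each feature to PC1
    top = np.argsort(loadings)[::-1][:k]
    return [feature_names[i] for i in top]

# Counting the top-k features over every patient and action yields histograms
# analogous to figures 2 and 3 (e.g. knee flexion dominating the squat).
```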
Correlation analysis of poses captured by our method with a monocular camera and a depth camera shows that pose estimation from our method with 2D frames can match the performance of using a depth camera. To confirm that the results from both data collection methods, the Azure Kinect skeleton and the SMPL-based skeleton, were strongly correlated, the Pearson coefficient was computed and showed a p-value < 0.0005. This strong correlation shows that further work could be done on the data. Following the Pearson coefficient, the regression analysis tests were performed, which showed that the SMPL-based measurements can be directly transformed to match those of the Azure Kinect using a simple linear regression model. The results of the regression analysis can be found in the supplementary information (Supp. 1A, 1B, 1C, 1D), which visualises the effects of applying the linear regression model to the data. Application of these methods to a clinical trial shows the difference before and after the application of a local anaesthetic, demonstrating that most of the identified biomarkers are statistically significant (p-value < 0.05). To evaluate the responsiveness to change, we examined the paired t-tests and Bland-Altman plots depicted in tables 1 and 2 and figure 4. These visual representations indicate whether the variances between pre-treatment and post-treatment are attributable to the efficacy of the treatment. To observe the sensitivity of the SMPL-based methods, the t and p-values for each of the extracted features can be found in tables 1 and 2, and a Bland-Altman plot for one of the extracted biomarkers can be seen in figure 4. The Bland-Altman plot shows the smoothness of the squat action regarding the maximum of the right knee flexion, whereby at least 95% of the points fall within two standard deviations of the mean. In addition to this Bland-Altman plot, plots for each of the significant biomarkers can be found in the supplementary information (Supp. 2A and 2B). These examples show that, when comparing pre-injection and post-injection of the local anaesthetic, there is a significant difference in both the Azure Kinect and SMPL-based methods. However, when observing the p and t-values of each biomarker, it is possible to determine that both techniques are sufficient when comparing pre-injection and post-injection. The significance of these results was determined by comparison with the associated critical value (1.728). This results in the work done for the mean right and left knee flexion, and the smoothness of the maximum right and left knee flexion, being significant in the squat (p-value < 0.05). However, when looking at the biomarkers of the sit-to-stand action, every extracted biomarker was found to be significant in terms of smoothness, while the work-done biomarkers only showed significance for the maximum right and left elbow flexion (p-value < 0.05). ## Discussion One important finding, as presented by the PCA results in figures 2 and 3, is the method of extracting biomarkers from the motion data. These histograms represent the most representative features in the data regarding each action. This shows that these features are the most important within the action and are therefore good candidates for biomarkers, which could be used for the diagnosis of knee disorders and the monitoring of disease or rehabilitation progression [31]. 
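The agreement analyses referred to above (Pearson correlation, linear regression calibration and Bland-Altman limits of agreement) can be reproduced with standard statistical tools; the sketch below is our own illustration under the assumption that the two pose sources are compared as per-frame knee-angle series, and it is not the authors' code.

```python
import numpy as np
from scipy import stats

def calibrate_smpl_to_kinect(angle_smpl, angle_kinect):
    """Pearson correlation and simple linear calibration between two knee-angle
    series (e.g. per-frame angles in degrees) from the SMPL- and Kinect-based skeletons."""
    r, p = stats.pearsonr(angle_smpl, angle_kinect)
    slope, intercept, *_ = stats.linregress(angle_smpl, angle_kinect)
    calibrated = slope * np.asarray(angle_smpl) + intercept
    return r, p, calibrated

def bland_altman_limits(pre, post):
    """Mean difference and 95% limits of agreement (mean difference +/- 1.96 SD)."""
    diff = np.asarray(pre) - np.asarray(post)
    return diff.mean(), diff.mean() - 1.96 * diff.std(), diff.mean() + 1.96 * diff.std()
```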
When comparing biomarkers from the SMPL- and Kinect-based approaches, there is a visible difference between the two sets of values; for example, the difference in the standing position can be up to 25\({}^{\circ}\) in the knee angle, as visualised in the supplementary information (Supp. 1A,1B,1C,1D). One potential cause for this difference is the method of calculating the knee joint angle, which considers the position of the hip joint, and the hip joint is in a more anatomically correct location in the SMPL body model compared to the hip joint location in the Kinect-based skeleton model [32]. Another potential source of this difference is the depth ambiguity in the RGB information; since the Azure Kinect skeleton model considers the depth information, this approach takes more information into account and can produce a more accurate position in the z-domain [33, 34]. Although these two sets of data are visually different, the Pearson coefficient (p-value < 0.005) shows that the knee angles calculated from each of the pose estimation methods are strongly correlated. Using this linear regression approach allows the justification of this technique in a clinical setting, especially considering that the use-case in this study is to examine the change in movement capabilities before and after treatment. However, due to the technical limitations of the RGB-based method and the inter-patient variability, some biomarkers for some of the patients are less correlated. The full correlation analysis for each patient and each biomarker can be found in the supplementary information (Supp. 1A,1B,1C,1D). Due to the nature of the experiment, the method of determining the success of a treatment only requires the change in each biomarker to be examined rather than the true value of the initial and final motion results [35]. The importance of these findings is the implication that these methods can be easily adapted for feature engineering in different applications, ranging from different body part disorders to other sources of movement issues such as neurological disorders. Given that these findings were from a relatively small knee-based case study, the actions performed would need to be altered based on the desired application of the techniques. The results of the paired statistical tests allow multiple observations to be made regarding not only the results but also the importance and significance of the methodology. Firstly, the results in figure 4 help to show that the technique used for the data collection produced a co-ordinate system sensitive enough to detect the changes resulting from the treatment. This observation can be made because of the treatment the participants received: each was given a local anaesthetic, so any movements after the injection are performed without the knee pain, and the removal of any psychological aspects of pain produces movement more in line with their physiological capabilities [36, 37]. A second observation which can be drawn from the statistical tests is the success of the feature extraction methodology, given that each of the features tested was initially extracted using the PCA. Although these features were reduced further for the paired t-test due to the nature of time series data, the features used were descriptions of the extracted features in terms of both the smoothness and the amount of force generated from said biomarker.
However, given the previous knowledge provided by Henriksen _et al._ and the change in these biomarker descriptions shown in tables 1 and 2, it can be concluded that the biomarkers extracted using the PCA are statistically significant not only for determining the action from movement but also for determining the success of a treatment [38, 39]. Through the combination of each of the results presented, the methods described above show much promise for use in a clinical setting. These methods achieved the goals of developing a low-cost solution for clinical biomechanics assessments, with a wide array of potential uses not bound to lower limb assessments. In addition, this solution can also be used to identify new biomarkers involved in a wide range of movement-debilitating injuries, illnesses, and disorders. Overall, this study has created a strong basis for using these techniques to quantify movement and create an objective method of performing an MSK analysis; this can be achieved using any standard camera, including those on mobile phones, therefore allowing the remote monitoring of disease progression and the identification of pre-disease stages to create an intervention strategy and reduce the strain on the healthcare industry. This study has also helped to outline the current limitations of the techniques described; for example, a lack of a controlled environment can lead to occlusion and jitter problems which would need to be addressed in future research. There are also limitations surrounding the design of clinical trials; in this case it is difficult to produce a truly representative sample. For example, it is difficult to determine whether the prevalence of right-sided biomechanics was due to the sample population being right-dominant or to patients having bilateral pain, since bilateral pain does not imply an equal amount of pain in each knee. ## Methods The methodology of this study is outlined in figure 5, which visualises the flow of data from collection to the extraction of clinically relevant and statistically significant biomechanics features. This begins with the video being recorded on the Azure Kinect RGB-D camera, collecting both 1080p standard video at 30 frames per second and the near-IR depth video. This records the participants performing at least 3 repeats each of a sit-to-stand and a squat action; this protocol has been designed to take similar measurements while allowing different metrics to be calculated. These two actions were chosen by clinicians as they are clinically relevant and often used in MSK assessments. The standard RGB videos were fed into a mesh reconstruction pipeline based on the Video Inference for Human Body Pose and Shape Estimation (VIBE) model, which predicts the SMPL parameters of a given subject from monocular RGB video. In this pipeline, a CNN takes an input RGB image sequence and a gated recurrent unit generates the mesh sequence, followed by the application of a self-attention layer to determine whether the motion is realistic compared to that of the training data. The depth information alongside the RGB video is analysed using the Azure Kinect SDK with skeleton tracking capabilities; however, this method only produces 3D joint positions, whereas the VIBE model produces 3D joint positions, 3D joint rotations, and a 3D mesh of the subject in question.
The adoption of the VIBE model in our experiment was due to its ability to encode both spatial and temporal cues into the data using adversarial training in a deep neural network with a self-attention mechanism. To ensure reproducibility and optimal accuracy, the training and implementation details follow the parameters described by Kocabas _et al._, including: sequence length=16, temporal encoder=2-layer GRU with a hidden size of 1024, a learning rate of \(5\times 10^{-5}\) and an Adam optimiser, SMPL regressor=2 fully connected layers of size 1024, motion discriminator=2-layer GRU with a hidden size of 1024, a learning rate of \(1\times 10^{-4}\) and an Adam optimiser, and self-attention=2 MLP layers of size 1024 with _tanh_ activation. This model was trained using InstaVariety as the 2D ground-truth dataset, MPI-INF-3D as the ground-truth 3D dataset, and 3DPW as the 3D ground-truth dataset for evaluation purposes; this training consisted of 30 epochs with 500 iterations per epoch and a batch size of 32 [18, 40, 41]. ## Feature engineering Following the standards in biomechanics, feature engineering was performed using a trigonometric approach for joint angle calculations. An example equation for this can be seen in equation 1, whereby each term refers to the Euclidean distance (\(|l|\)) between two joints located in a 3D Cartesian coordinate system. This equation follows the cosine rule to find an angle of a triangle using the three known sides, in this case calculating the knee angle projected in 2D on the sagittal plane. A visualisation of this can be seen in the supplementary information, which shows a graphical representation of how the knee joint is displayed as a triangle with each of the sides representing the distance between the three joints in the leg. A second form of feature extraction was developed to reduce the dimensionality of the raw data by absorbing the time variable from the time series data. The first of these methods approximates an impulse, since the mass between the pre-injection and post-injection is the same, as shown in equation 2, whereby the integral of the change in rotational acceleration is approximated using the area under the curve. To account for inter-patient variability, the lowest value of the curve is subtracted from all values of the data, so that the change in rotational acceleration is considered rather than the total rotational acceleration. The second method calculates the integral of the squared second derivative of a curve; this method of describing a curve's smoothness can be seen in equation 3. In terms of feature extraction using the feature importance of the extrapolated features, the method of dimensionality reduction used is PCA [42]. This method initially requires the data to be annotated to create a single dataset that encompasses both actions performed; one by-product of this is that this new dataset can subsequently be used for action recognition. Once the data was prepared, the PCA was performed using two components accounting for the two actions. This gives the feature importance of each feature, which allows the calculation of the sum of counts and the creation of a histogram to identify the most representative features as measured by the feature importance.
This linear dimensionality reduction (PCA via singular value decomposition) was performed on each patient to find the most represented features among all patients, and on both the pre-injection and post-injection data to identify whether the feature importance changes. For vectors: \(m=h-k\), \(n=a-k\), and \(p=h-a\) \[\theta=\cos^{-1}\left(\frac{|m|^{2}+|n|^{2}-|p|^{2}}{2|m||n|}\right) \tag{1}\] Where \(k\) is the position of the knee, \(a\) is the position of the ankle and \(h\) is the position of the hip. \(|l|\) denotes the Euclidean distance between the two points. \[J\propto\int\alpha\cdot dt \tag{2}\] Where \(J\) is the angular impulse of an action, \(\alpha\) is the rotational acceleration, and \(dt\) is the change in time. \[Smoothness=\int\left(\frac{d^{2}y}{dx^{2}}\right)^{2}\cdot dx \tag{3}\] Where \(\frac{dy}{dx}\) refers to the gradient of the line. ## Statistical testing In this experiment, there are two key requirements for the statistical tests: to show the correlation between the two pose estimation techniques and to determine each method's sensitivity. To address the correlation, two tests were used: the Pearson coefficient and a linear regression analysis; in this implementation both were performed in Python with the SciPy and scikit-learn libraries respectively [43, 44]. The linear regression analysis was performed by training a simple linear regression model using two feature vectors, namely the same biomarker as produced by both pose estimation methods. Then, by providing the SMPL-based feature alone to the linear regression model and using the resulting linear equation, the data can be transformed into the Azure Kinect-based feature space. The second form of statistical test, used to determine the sensitivity of the methods, is the paired t-test; this one-tailed test was performed with an n of 20 and a critical t-value of 1.729. To embed temporal information in the analysis, the paired t-test was performed on both the rotational impulse of the left and right knee joints and the smoothness of each knee joint motion, as calculated with equations 2 and 3. These two temporal features can represent motion regarding different actions in the exams. Additionally, Bland-Altman plots were created to plot the mean and standard deviation of the data. This was performed for all participants for both actions and for each of the biomarkers calculated with equations 2 and 3. ## Case study To show the sensitivity of these methods in a clinical environment, these techniques were applied in a small case study of 20 patients, each with a similar diagnosis of either single-leg knee pain or bilateral knee pain. Each of the patients was also of a similar age demographic, aged 55 and above; since age is a well-known biomarker of OA, this increases the chance that their diagnosis is due to knee OA rather than an injury [45]. To finalise the sensitivity study, each patient received a local anaesthetic injected into the knee with the diagnosed pain. This would remove the psychological change to movement caused by pain, providing the biomechanics analysis with a clear before and after treatment and allowing us to assess the sensitivity of each capture method.
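Below is a minimal numerical sketch of the feature engineering in equations 1-3, assuming the hip, knee, and ankle positions are available as per-frame 3D coordinate arrays and that the angle signal is sampled at a constant, hypothetical time step dt; it is an illustration of the calculations, not the authors' implementation.

```python
# Sketch of equations 1-3 under the assumptions stated above.
import numpy as np

def knee_flexion(hip: np.ndarray, knee: np.ndarray, ankle: np.ndarray) -> np.ndarray:
    """Equation 1: knee angle per frame via the cosine rule (degrees)."""
    m = np.linalg.norm(hip - knee, axis=1)     # |h - k|
    n = np.linalg.norm(ankle - knee, axis=1)   # |a - k|
    p = np.linalg.norm(hip - ankle, axis=1)    # |h - a|
    return np.degrees(np.arccos((m**2 + n**2 - p**2) / (2 * m * n)))

def angular_impulse(theta: np.ndarray, dt: float) -> float:
    """Equation 2: area under the offset-removed rotational acceleration curve."""
    alpha = np.gradient(np.gradient(theta, dt), dt)
    alpha = alpha - alpha.min()                # subtract lowest value (inter-patient offset)
    return np.trapz(alpha, dx=dt)

def smoothness(theta: np.ndarray, dt: float) -> float:
    """Equation 3: integral of the squared second derivative of the curve."""
    d2 = np.gradient(np.gradient(theta, dt), dt)
    return np.trapz(d2**2, dx=dt)
```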
The study protocol was approved by the University of Lincoln Ethics, Governance & Regulatory Compliance Committee. The study was performed in accordance with relevant institutional guidelines and regulations, and all participants provided written informed consent prior to any data collection; this includes the storage and distribution of anonymised data and any results obtained as a result of this research.
2307.12036
Exploring the Relationship Between Personality Traits and User Feedback
Previous research has studied the impact of developer personality in different software engineering scenarios, such as team dynamics and programming education. However, little is known about how user personality affects software engineering, particularly user-developer collaboration. Along this line, we present a preliminary study about the effect of personality traits on user feedback. 56 university students provided feedback on different software features of an e-learning tool used in the course. They also filled out a questionnaire for the Five Factor Model (FFM) personality test. We observed some isolated effects of neuroticism on user feedback: most notably a significant correlation between neuroticism and feedback elaborateness; and between neuroticism and the rating of certain features. The results suggest that sensitivity to frustration and lower stress tolerance may negatively impact the feedback of users. This and possibly other personality characteristics should be considered when leveraging feedback analytics for software requirements engineering.
Volodymyr Biryuk, Walid Maalej
2023-07-22T10:10:27Z
http://arxiv.org/abs/2307.12036v1
# Exploring the Relationship Between Personality Traits and User Feedback ###### Abstract Previous research has studied the impact of developer personality in different software engineering scenarios, such as team dynamics and programming education. However, little is known about how user personality affects software engineering, particularly user-developer collaboration. Along this line, we present a preliminary study about the effect of personality traits on user feedback. 56 university students provided feedback on different software features of an e-learning tool used in the course. They also filled out a questionnaire for the Five Factor Model (FFM) personality test. We observed some isolated effects of neuroticism on user feedback: most notably a significant correlation between neuroticism and feedback elaborateness; and between neuroticism and the rating of certain features. The results suggest that sensitivity to frustration and lower stress tolerance may negatively impact the feedback of users. This and possibly other personality characteristics should be considered when leveraging feedback analytics for software requirements engineering. user feedback, personality types, personality traits, requirements engineering ## I Introduction Personality types, which are combinations of personality traits, have been studied for decades--not only in psychology but also in software engineering. Usually, a variety of personality tests such as Myers-Briggs Type Indicator (MBTI) [1], Big Five/Five Factor Model (BF/FFM) [2], Keirsey Temperament Sorter (KTS) [3], Sixteen Personality Factor Questionnaire (16 PF) [4], Adjective Check List (ACL) [5], or Eysenck Personality Inventory (EPI) [6] are used to group participants into different personality types [7]. In software engineering, research has focused so far on studying the personality types of developers and their impact on development performance, team dynamics, leadership performance, and task allocation [7]. The evidence suggests, e.g., that certain personality types prefer certain software engineering tasks and that the correct distribution of tasks has an impact on project success [7]. However, the success of software engineering projects is also largely dependent on the contribution of other stakeholders, most notably of users during requirements engineering and maintenance activities. Yet, little is known about whether and how user personality impacts these activities. In recent years, the systematic collection and analysis of user feedback to inform requirements engineering and software evolution tasks has become a common practice [8]. Especially since the emergence of social media and app stores, a significant amount of feedback on software can be collected and used by development teams, who are also increasingly engaging in conversations with users [9]. Feedback can reveal the users' perspective on the software and its features that is not obvious to the developers, for example unexpected usage scenarios, unreported defects, or ideas for new features [10]. This work explores whether personality types can affect the feedback behavior of software users. If yes, personality traits should be taken into account to "calibrate" and interpret feedback collected online, in workshops, and in interview sessions. Our intuition is that the personality type might have an impact on how individuals perceive software, and consequently get engaged, and frame their feedback.
For example, users with a certain, highly developed personality trait may get frustrated with certain features more quickly than others or focus more on the negative aspects. Other users might feel obliged to explain their position, or are able to spot aspects of the software that others do not. This can be reflected in different properties of feedback such as software rating, wording, scope, mentioned features, or motivation and expectation. If such relationships exist, they should be taken into consideration in the collection, prioritization, and handling of feedback. We report on a preliminary study in the context of an introductory programming course, where we analyzed user feedback given by university students to the online teaching platform Moodle together with the personality traits of the students. Our research questions are as follows: * **RQ1**: Do feature ratings correlate with personality trait scores? * **RQ2**: Do personality trait characteristics correlate with the elaborateness of user feedback? We introduce the background for personality studies and closely related work in Section II. Then, we present the design of our study in Section III, the results in Section IV, and threats to validity in Section V. Finally, we briefly discuss the findings in Section VI and conclude the paper in Section VII. ## II Background ### _Personality Traits and Types_ Personality traits are relatively enduring patterns in behavior, thoughts, and feelings that manifest the tendency to respond in certain ways under certain circumstances [11]. The constellations of those unique personality trait features are called personality types and are used to define discrete groups of individuals [12]. In personality psychology, they are often used to better understand the diversity of responses from people in similar circumstances [13]. Three personality tests dominate the scientific studies so far: namely Myers-Briggs Type Indicator (MBTI) [1], Five Factor Model (FFM), and Keirsey Temperament Sorter (KTS) [7]. The FFM model is the most suitable for our research goal, as it produces more comprehensive information on all scales [14, 15]. Furthermore, MBTI lacks the representation of neuroticism [14], which is an important dimension when considering frustration during software usage and the subsequent feedback behavior. The FFM includes the following five personality traits: * **Agreeableness (A)** Individuals scoring high on this dimension tend to be compliant, cooperative, altruistic, and helpful to others [16, 17]. Individuals who score low on this dimension tend to be irritable, skeptical towards others, competitive, and egocentric [16, 17]. * **Conscientiousness (C)** Individuals scoring high on this dimension tend to be careful, thorough, responsible, organized, and forward planning [16, 17]. Those who score low on this dimension tend to be irresponsible, disorganized, and unscrupulous [16, 17]. High levels of conscientiousness manifest themselves in an achievement-oriented and dependable character. Low levels of conscientiousness, however, do not imply a lack of work ethics, but rather a lower ability to apply them [17].
* **Extraversion (E)** Individuals who score high on this dimension tend to be sociable, talkative, and assertive [16, 17]. Introverted individuals (those who score low on this dimension) are often retiring, reserved, cautious, and independent [16, 17]. * **Neuroticism (N)** Individuals who score high on Neuroticism have a tendency to experience sadness, embarrassment, anxiety, depression, and anger. They are prone to insecurity and irrational ideas, are less able to control impulses, and cope poorly with stress [16, 17]. Those who score low on Neuroticism tend to handle stressful situations well and to be calm, poised, and emotionally stable [16, 17]. * **Openness (O)** Those who score high on openness tend to be open-minded, imaginative, intellectually curious, and independent of judgment from others [16, 17]. Those who score low on openness tend to be down-to-earth, shy away from novelty, and seek conformity [16, 17]. The traits can be interpreted in isolation or in combination with each other and aggregated into _personality types_ [18]. ### _Personality Studies of Software Developers and Users_ Beyond psychology, personality types have been the focus of software engineering research to investigate effects on various phenomena such as pair programming, team effectiveness, individual performance, software task allocation, behavior preferences, education, and project management effectiveness [7]. However, only a few studies have investigated the personality of software users in contrast to software producers (developers, managers). Stachl et al. [19] correlated personality types with mobile app usage. Their results show that the personality type can be predicted by looking at the frequency, duration, and timing of app usage, and feature usage such as average text length in messages. This suggests that the identification of personality types does not require users to participate in personality tests. When feedback is submitted, a more realistic scenario is to analyze usage data rather than to ask feedback providers to take a personality test. Various studies in human-computer interaction research examined the effects of personality on user interface preferences. Alves et al. used the FFM scale to show a difference in the preferences of GUI styles between different personality types [20]. The authors found that the personality traits have an effect on preferences of font size, information presentation density, and color themes. A study by Sarsam and Al-Samarrie [21] shows that users are more satisfied with the usage of a learning app if it is designed in accordance with the preferences associated with their strong personality traits. We are unaware of studies that have examined the impact of personality on user feedback. To the best of our knowledge, the closest work to ours is by Tizard et al. [22], who studied the demographic properties of feedback providers. The authors surveyed 1040 software users about their feedback habits, software usage, and demographic information such as gender, age, ethnicity, education, and employment status. Their results show that usage duration and gender have a significant impact on the amount and frequency of feedback, as well as the positivity of feedback. However, they did not investigate the impact of personality on the provided feedback. ## III Methodology ### _Study Design_ Our study consisted of two parts: one for collecting user feedback and one for identifying the personality traits of the feedback providers.
The questionnaires for each part were available to students independently throughout the last two weeks of the semester. Participants were able to complete either or both of them in any order. Each part lasted for about 15 minutes. In the first part, participants were asked to review (in text form) and rate (from 1-dissatisfied to 5-satisfied) several Moodle features. Moodle is an open-source learning platform designed to cover all aspects of teaching for educators, administrators, and students. The software is usually self-hosted by institutions and has been the main teaching platform at our department since 2019. We collected feedback on five Moodle features that were essential for our course: * **Material overview** is the main page of the course where all the items such as course material, assignments, and additional information are listed and grouped by week and topic. * **Forum** is a StackOverflow-like, simplified, Q&A thread-feature, that allows students to ask questions and get answers from teachers or peers, vote on answers, and mark questions as resolved. * **Notification** feature notifies about incoming chat messages, announcements, and course calendar deadlines: either on the Moodle web interface or via email. * **Code runner** is an IDE-like environment that enables teachers to create small programming assignments and the students to write and check code. The text editor in the _code runner_ only provides the students with a minimalistic text field without any of the usual IDE features such as auto compilation, code completion, or auto formatting. Before submitting the code, students can check the correctness of their program by executing the reference unit test. The code runner records each submission attempt so the number, duration and score is available for evaluation. Each week the course exercise consisted of 5-10 sub-tasks which must be completed within a week of publication and can be re-submitted indefinitely before the deadline. * **Quiz** supports the creation of different types of questions (such as multiple/single choice, drag and drop, and gap filling) that are solved by students and submitted in the same manner as the programming tasks. Both code runner and quiz are used by students for homework assignments that are graded automatically. We focused on those features as they were regularly used during the course: including solving weekly tasks, accessing learning material, and communicating with peers and teachers. We instructed the participants to explicitly state if they don't have any written feedback on a particular feature to prevent them from just skipping questions without reading. The written feedback questions included one part regarding problems with the respective feature and one regarding wishes and requests. The second part included the questions on the Big Five personality traits using the NEO-FFI-30 questionnaire [23]. We chose the 30 question version of the NEO-FFI to reduce the overall length of the study and increase the willingness to participate. Both parts included the same demographic questions at the end. So participants could correct or complement incorrect or incomplete answers from the other part. All questions except the personality test were optional to make the study shorter and potentially attract more participants. ### _Data Collection and Preparation_ The study participants were university students from a large introductory software engineering course at our department. 
The course is mandatory for first semester informatics students. It is also attended by students of other majors. We advertised the study during the main course session as well as in the main teaching platform Moodle. As an incentive, we raffled online shopping vouchers, which participants could receive by completing both parts of the study. The participation was completely voluntary and did not have any impact on the course performance or grading. Before the study, we obtained the agreement of a local ethics committee at the university. Out of 124 total participants who enrolled in the study, 56 completed both parts with all mandatory answers required to calculate their personality traits. Before answering the research questions, we labeled the feedback into different categories and scored the personality tests as described in the following. #### III-B1 Feedback Labeling Before the labeling, the first author read every piece of feedback to better understand the underlying data. We observed that most comments have multiple types of information. To label the information types in the feedback, we used a labeling scheme similar to Pagano and Maalej [10]. One of the authors manually labeled the feedback with 10 non-exclusive, basic information types: _praise_, _criticism_, _shortcoming_, _improvement_request_, _bug_report_, _feature_request_, _content_request_, _other_app_, _noise_, and _rationale_. _Praise_ and _criticism_ are generic categories for feedback that is positive or negative but non-informative to developers. _Shortcoming_ is a description of something that is wrong with the software, but without stating what can be improved. _Improvement_request_ and _feature_request_ are similar to _shortcoming_, but with more specific information on what can be improved or added to the software. _Bug_report_ is the description of a bug or defect in the software that is obviously not intentional. We added _content_request_ to differentiate requests that are not concerning the software features but rather the course content. _other_app_ denotes that another application was mentioned or compared to Moodle in the feedback (e.g. by more experienced users). _Rationale_ is a category that denotes the presence of justification or elaboration of the feedback. For example, if a feedback text contains a _feature_request_ and additionally the participant describes why this feature is needed in their context, we added this label. #### III-B2 Personality Test Scoring We have scored the personality test according to the NEO-PI-R/NEO-FFI manual by Ostendorf and Angleitner [24]. Each participant received a score in the interval [0, 50] for each personality trait. When taking a closer look at the answer patterns, we found that 3 participants answered _disagree_ or _partially disagree_ more frequently than expected. We also observed that the answers of 10 participants exhibit concerning patterns. 8 participants have long streaks of answering questions with the same value, including two participants who also agreed or disagreed with the questions more frequently than expected. Additionally, two participants disagreed with the questions more frequently than expected. The NEO-PI-R/NEO-FFI manual advises treating these cases with caution, but does not strongly advise discarding them entirely. We conducted our subsequent correlation analyses with and without these 10 cases and did not find any notable difference. ## IV Results We first analyzed the demographic data of our participants.
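The following is a rough sketch of the data-preparation and RQ1-style analysis. The item-to-trait scoring key, reverse-keyed items, and the rescaling to the reported [0, 50] interval are not given in the text and must be taken from the NEO-FFI-30 manual, so they are treated here as assumptions; likewise, Spearman is used only as one plausible correlation choice for ordinal 1-5 ratings, not as the authors' documented method.

```python
# Sketch of trait scoring and trait-rating correlation, under the
# assumptions stated above.
import numpy as np
from scipy import stats

def score_trait(item_responses: np.ndarray, reverse_keyed: np.ndarray) -> float:
    """Sum six 0-4 Likert responses for one trait, flipping reverse-keyed items,
    then rescale to [0, 50] (rescaling factor assumed, not from the paper)."""
    flipped = np.where(reverse_keyed, 4 - item_responses, item_responses)
    return flipped.sum() * 50.0 / (4 * len(item_responses))

def trait_rating_correlation(trait_scores: np.ndarray, feature_ratings: np.ndarray):
    """Correlate one trait score (per participant) with one feature's 1-5 ratings."""
    rho, p = stats.spearmanr(trait_scores, feature_ratings)
    return rho, p
```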
We collected data about _age_, _gender_, _semester_, and _study program_. The average participant in our sample is 20 years old and in their first study year. However, a large portion of participants did not specify their age or study duration. 21 participants are female, 18 participants are male, and 17 did not specify any gender. 43 participants are studying computer science and 13 have other majors. To simplify our analysis, we limit ourselves to the evaluation of personality traits and not personality types. Figure 1 shows that _agreeableness_, _conscientiousness_, and _openness_ are overall higher than _extraversion_ and _neuroticism_ in our sample. The means and standard deviations of personality trait scores, shown in Table I, are close to those of the original publication by Korner et al. [25]. Therefore, we assume that our sample is close to being representative. Notably, the mean of _openness_ and the standard deviation of _neuroticism_ have the largest differences to the statistics of Korner et al. One possible explanation is that people willing to participate in such studies might generally exhibit a high _openness_ score. Figure 2 shows that, on average, the features were rated high, with a median of 4 (solid lines) and means ranging between 3.23 and 4.13 (dotted lines). This corresponds to the overall average rating on app stores [10]. The _quiz_ feature was rated highest with a mean of 3.66; the _notification_ feature was rated lowest with a mean of 3.23. Table II shows the results of the correlation analysis conducted to answer RQ1. We found one significant correlation between a personality trait and a Moodle feature: namely between _neuroticism_ and _material_overview_ (Table II) with a p-value of 0.05. This means that participants who are high on _neuroticism_ (and thus with a tendency to experience sadness, embarrassment, anxiety, depression and anger) slightly dislike the _material_overview_. For RQ2, we assessed the elaborateness of written feedback. We particularly looked at the text length, the number of mentioned features (Table III), and whether some rationale was provided in the feedback (Figure 3). Overall, the median length of the feedback is 34 words. The longest written feedback is 220 words long. We found a positive but not significant correlation between _neuroticism_ and the length of written feedback. However, _neuroticism_ has a significant positive moderate correlation with the number of mentioned features (Table III). This means that in our sample, neurotic individuals tend to write longer feedback that mentions more features. Additionally, we focus on rationale as part of RQ2 since it often contains important details for requirements and is an indicator of deeper involvement by the feedback provider [26]. 39 participants provided rationale in their feedback. Those participants scored higher on the personality traits _agreeableness_, _conscientiousness_, _extraversion_, and _openness_ and lower on _neuroticism_ (Figure 3). Fig. 1: Distribution of personality scores (N=56). Fig. 2: Distribution of feature ratings (N=56). The study data and code are publicly available with an open source license.1 Footnote 1: 10.5281/zenodo.8173418 ## V Threats to Validity ### _External Validity_ The rather small sample size in our study limits the generalizability of our results and may be the cause of statistical insignificance. We chose 0.05 as the threshold for the \(p\)-value due to our small sample size, which can negatively impact the validity of our results.
Our participant sample is limited to university students from a beginners' programming course and does not represent the general population, even though the mean and standard deviation of the personality trait scores are similar to those of the more representative sample tested by Korner et al. [25]. The participants were asked to provide feedback on certain features of the used software. The questionnaire design nudged the participants to differentiate between feedback that voices wishes and feedback that voices problems. Even though feedback is sometimes explicitly collected in requirements engineering, a big part of it is provided by the users on their own behalf. Therefore, our results might be limited to a specific type of feedback communication. A survey-based personality study is prone to over- or underestimation of certain variables. The willingness to participate in such studies often correlates with high scores on _agreeableness_ and _conscientiousness_ [25]. Our self-selected sample is potentially affected by this. ### _Internal Validity_ We do not have a control group to test how the feedback behavior might be influenced by other factors, such as experience with user feedback or requirements in general. We tested for the skill level by looking at the progress of Moodle task completion (duration and number of attempts). We have no reliable way to detect pauses in the participants' work and cannot reliably determine the real duration. However, generally, this should not impact the actual tasks in our study (giving feedback and answering the personality test). ### _Construct Validity_ We have implemented our personality test questionnaire as instructed by the publication of Korner et al. [25]. However, we did not consult any psychologists prior to administering the test. We performed the scoring and evaluation of the NEO-FFI-30 questionnaire following the instructions from the original publication by Ostendorf and Angleitner [24]. However, we had no supervision by or consultation with psychologists and cannot do an in-depth interpretation of the results. The questionnaire by Korner et al. [25] is a reduced version of the original 240- or 60-question inventory, and therefore possibly less reliable than the original one. ## VI Discussion ### _RQ1: Do feature ratings correlate with personality trait scores?_ Our results show a weak negative correlation between the personality trait _neuroticism_ and the rating of the specific Moodle feature _material_overview_. This is in line with the study by Alves et al., who conclude that individuals high on _neuroticism_ prefer low information density GUIs. The Moodle GUI, however, is rather overloaded with different widgets that are not always necessary for pursuing the course. We expected to see more correlations with other personality traits as well. In particular, we expected a high _neuroticism_ score to have a much higher negative impact on the _code runner_ and _quiz_ ratings, since they were used in a rather stressful course environment. On the other hand, we expected _forum_ and _notification_ to be rated higher by individuals who are high on _agreeableness_. Overall, we did not find a strong universal relationship between personality traits and rating feedback. But the observed trends should be checked and explored further.
### _RQ2: Do personality trait characteristics affect the elaborateness of user feedback?_ The personality trait _neuroticism_ has a non-significant, weak positive correlation with the written feedback length, indicating that users who score higher on the _neuroticism_ personality trait tend to write longer feedback. At the same time, the number of mentioned features in their feedback is higher too, which indicates a higher elaborateness and overall engagement. Fig. 3: Rationale provided in user feedback by personality trait (N=56). However, these types of users seem to provide rationale less often than individuals who are high on other FFM scales. The ones who provide rationale in their feedback tend to have a higher score on the scales _agreeableness_, _conscientiousness_ and _extraversion_ and particularly _openness_. This makes sense, especially for high _agreeableness_ and _conscientiousness_ scores, since they stand for cooperativeness and dependability. We find some limited evidence that neuroticism may have an impact on the elaborateness of feedback. ## VII Conclusion Our results show some sporadic effects of personality traits on user feedback. However, we cannot positively answer RQ1 with our results and only have an inconclusive answer for RQ2. We expected the correlations to be more pronounced for certain personality traits. We expected participants who score high on _neuroticism_ to allocate lower scores to Moodle features, as they have used them in a stressful environment. We expected participants who score high on _conscientiousness_ and _agreeableness_ to provide longer, more elaborate feedback due to the urge to help others inherent to these personality traits. We assume that the small sample and the specific context of university students could have an impact on the results and would like to repeat the study with a larger, more diverse sample and feedback on a wider range of software. Regarding the questionnaire, we believe that future work should include mandatory questions about the participants' demographics and previous feedback behavior, to control for those factors when evaluating the impact of personality. Our results suggest digging deeper into the impact of neuroticism on user feedback, since it is the only personality trait that shows positive results. Our study focuses on one type of feedback, namely feedback that is requested from the users (pull feedback). To get a more complete understanding of the underlying phenomena, it is necessary to analyze feedback that was provided by users on their own behalf. We believe that analyzing user feedback from public sources and conducting the personality test afterward independently could improve the external validity of the results.
2304.04304
Stable Real-Time Feedback Control of a Pneumatic Soft Robot
Soft actuators offer compliant and safe interaction with an unstructured environment compared to their rigid counterparts. However, control of these systems is often challenging because they are inherently under-actuated, have infinite degrees of freedom (DoF), and their mechanical properties can change by unknown external loads. Existing works mainly relied on discretization and reduction, suffering from either low accuracy or high computational cost for real-time control purposes. Recently, we presented an infinite-dimensional feedback controller for soft manipulators modeled by partial differential equations (PDEs) based on the Cosserat rod theory. In this study, we examine how to implement this controller in real-time using only a limited number of actuators. To do so, we formulate a convex quadratic programming problem that tunes the feedback gains of the controller in real time such that it becomes realizable by the actuators. We evaluated the controller's performance through experiments on a physical soft robot capable of planar motions and show that the actual controller implemented by the finite-dimensional actuators still preserves the stabilizing property of the desired infinite-dimensional controller. This research fills the gap between the infinite-dimensional control design and finite-dimensional actuation in practice and suggests a promising direction for exploring PDE-based control design for soft robots.
Sean Even, Tongjia Zheng, Hai Lin, Yasemin Ozkan-Aydin
2023-04-09T19:44:46Z
http://arxiv.org/abs/2304.04304v1
# Stable Real-Time Feedback Control of a Pneumatic Soft Robot ###### Abstract Soft actuators offer compliant and safe interaction with an unstructured environment compared to their rigid counterparts. However, control of these systems is often challenging because they are inherently under-actuated, have infinite degrees of freedom (DoF), and their mechanical properties can change by unknown external loads. Existing works mainly relied on discretization and reduction, suffering from either low accuracy or high computational cost for real-time control purposes. Recently, we presented an infinite-dimensional feedback controller for soft manipulators modeled by partial differential equations (PDEs) based on the Cosserat rod theory. In this study, we examine how to implement this controller in real-time using only a limited number of actuators. To do so, we formulate a convex quadratic programming problem that tunes the feedback gains of the controller in real-time such that it becomes realizable by the actuators. We evaluated the controller's performance through experiments on a physical soft robot capable of planar motions and show that the actual controller implemented by the finite-dimensional actuators still preserves the stabilizing property of the desired infinite-dimensional controller. This research fills the gap between the infinite-dimensional control design and finite-dimensional actuation in practice and suggests a promising direction for exploring PDE-based control design for soft robots. ## I Introduction Soft robots offer the ability to interact with the world in a more life-like manner. However, due to their inherent flexibility, soft robots are considered to have infinite degrees of freedom. Thus, modeling and control of soft robots pose a new set of challenges to robotics research. The creation of programmable soft bodies employing materials that incorporate sensors, actuators, and computing is the primary problem for developing soft machines that live up to their full potential [1]. The most common strategy to control soft robots is using Piecewise Constant Curvature (PCC) models [2]. This method assumes that a soft robotic arm is made up of a limited number of curved segments. The major limitation of PCC models is that their accuracy suffers in instances of large deformation and external loading [3]. Another approach is using finite element methods (FEM) to model and control soft robots [4]. FEM are numerical methods that approximate Partial Differential Equations through interactions between many local nodes. FEM have the ability to model complex domains and have been shown to work in hardware [5], but these methods are extremely computationally expensive and it is quite difficult to implement in applications that require real-time responses. Additionally, control design using FEM must rely on additional approximations such as quasistatic assumptions, linearization, and model reduction [6], which can result in additional errors in the approximation. The models developed from continuum mechanics are more precise, particularly the Cosserat rod theory for soft robots that resemble slender rods. Using a set of nonlinear partial differential equations (PDE), the Cosserat rod theory describes the time evolution of the infinite-dimensional kinematic variables of a deformable rod subject to external forces and moments. The aforementioned PCC and FEM models can be thought of as finite-dimensional approximations of the Cosserat PDE models [7]. 
Cosserat-rod PDEs have been shown to be more accurate by a series of experiments and are widely employed as the basis of soft robot simulators [8]. However, the existing control design based on Cosserat-rod PDEs has mainly relied on discretization to obtain finite-dimensional ODEs due to the lack of efficient control theory for nonlinear PDEs. PDE-based nonlinear control can avoid modeling uncertainties due to discretization and yield more interpretable and computationally efficient controllers. The vast majority of control methods for soft robots utilize open-loop control [9]. Open-loop control methods are effective for demonstrating the feasibility of soft robots, but they do not allow the system to correct course if the real system deviates from the model. Closed-loop feedback control of soft robots with stability guarantees is still a challenging problem. Recently, we presented an infinite-dimensional feedback controller for the Cosserat rod model and proved its stability [10]. In this work, we address the problem of how to implement such an infinite-dimensional controller in practice using specific actuators in real-time, which fills the gap between theory and experiment. Specifically, we incorporate fabric series Pneumatic Artificial Muscle (fabric sPAM) actuators into the Cosserat-rod model and treat their air pressure as the actual inputs. The feedback system states are estimated via computer vision. The difficulty of implementing an infinite-dimensional feedback controller is that such a controller naturally lies in an infinite-dimensional functional space, but the functional space that is implementable by a finite number of actuators is always finite-dimensional. To address this challenge, we formulate a convex quadratic programming problem to restrict the desired controller onto the actuators' implementable space and solve it to simultaneously determine the feedback gain functions and the actuator pressure. In this way, we guarantee that the actual feedback controller implemented by the actuators still preserves the stabilizing property of the desired infinite-dimensional controller. Additional testing was conducted to maximize the convergence rate of the controller without producing an overshoot. The paper is organized as follows. A planar version of the Cosserat Rod Theory is introduced in Section II. In Section III, the models for the actuators and gravity are introduced. Additionally, the PDE controller is discussed as well as the formulation of an optimization problem to find the ideal pressure. Next, in Section IV, the physical testbed is discussed including the fabric sPAM actuators and the computer vision system. Then, in Section V, our results are discussed. Finally, in Section VI, we summarize our contribution and discuss future directions for this research. ## II Modeling In this section, we introduce the PDE model for soft robots based on the Cosserat rod theory and incorporate fabric series Pneumatic Artificial Muscle (fabric sPAM) actuators into the Cosserat-rod model for control purposes. Cosserat-rod models are geometrically exact models that describe the dynamic response of long and thin deformable rods undergoing external forces and moments [8] and have been widely used to model soft manipulators [11, 12, 13]. A Cosserat rod is idealized as a spatial curve consisting of four types of strains: bending, torsion, shear, and extension [8].
For slender robots, the linear strains (shear and extension) have a negligible effect compared with the angular strains (bending and torsion) [14]. Therefore, we can neglect these linear strain modes and obtain a reduced Cosserat-rod model which is also known as the Kirchhoff-rod model [15]. The modeling strategy and control algorithms described in this work are applicable to both the three-dimensional (3D) and two-dimensional (2D) systems, however, we will focus on the planar 2D case to simplify the notations and to explicitly show that the infinite-dimensional controllers can be approximated by a finite number of actuators in real-time. ### _Cosserat Rod Model for Planar Case_ Let \(\{\hat{y},\hat{z}\}\) define a fixed orthonormal basis in the two-dimensional world frame. When the soft robot is in its unactuated state the length of the rod is defined as \(L_{0}\) and the backbone lies along the z-axis. Because we are modeling a continuous backbone, all states depend on the independent variables of time \(t\in\mathbb{R}\) and the arc-length of the center line \(s\in[0,L_{0}]\). In this model, partial derivatives with respect to \(t\) and \(s\) are denoted by \(\partial_{t}\) and \(\partial_{s}\). The state of the rod can be described through the following vector: \[q(s,t)\equiv\begin{bmatrix}y(s,t)\\ z(s,t)\\ \theta(s,t)\end{bmatrix}\] where \(r=(y,z)\in\mathbb{R}^{2}\) denotes the position vector of the centerline. Additionally, the angle \(\theta\in R\) defines a local frame at each cross-section spanned by the orthonormal pair \(\{j,k\}\), where \(j=\cos\theta\hat{y}-\sin\theta\hat{z}\) and \(k=\sin\theta\hat{y}+\cos\theta\hat{z}\). The vector \(j\) is normal to each cross-section and is always tangent to the centerline of the robot. The forward kinematics of the system can be written as a function of \(w_{x}\), the angular velocity about the \(\hat{x}\) direction: \[\partial_{t}q=\begin{bmatrix}-z*w_{x}\\ x*w_{x}\\ w_{x}\end{bmatrix}\] Here, \(w_{x}\) is the angular velocity about \(\hat{x}\), which points out of the plane and completes the right handed coordinate system. The time rate of change of the angular velocity can be written as: \[\partial_{t}w_{x}=\frac{1}{\rho J_{x}}(EJ_{x}\partial_{s}u+l_{c}(s)+l_{g}(s)).\] where \(u\) is the angular strain and is defined as \(u=\partial_{s}\theta\). Additionally, \(\rho\) is the density of the backbone, \(J_{x}\) is the polar moment of inertia about the \(\hat{x}\) axis, \(E\) is the Young's Modulus of the backbone, \(l_{c}\) is the total moment generated by the pneumatic actuators, and \(l_{g}\) is the moment due to gravity (the actual values of each parameter are given in Table 1). The actuator and gravity model will be given in Section III. When we combine all this information into one expression, the 1D manifestation of Cosserat Rod theory can be described in a single wave equation: \[\partial_{tt}\theta=\frac{1}{\rho J_{x}}(EJ_{x}\partial_{ss}\theta+l_{c}(s)+l _{g}(s)). \tag{1}\] ### _Actuator Modeling_ In order to determine how much moment each chamber can apply to the system, we first model the amount of force each chamber can apply. We used the previous model that modeled the fabric sPAM actuators as ideal McKibben actuators [16]. Although the system is not a perfect cylinder at the extremes of the chambers and near the O-rings, we neglect boundary effects and assume that the actuators have a cylindrical shape when inflated (Fig. 2). 
Fig. 1: Reference frames used in Cosserat Rod Theory: the global frame and the local frame attached to each cross-section. According to a previous study [16] which utilizes actuators made from the same material, the force that can be applied by a fabric sPAM actuator is modeled as follows: \[F_{ideal}(\varepsilon)=(\pi r_{0}^{2})P[a(1-\varepsilon)^{2}-b],\ 0\leq \varepsilon\leq\varepsilon_{max}\] where \(a=3/\tan^{2}(\alpha_{0})\) and \(b=1/\sin^{2}(\alpha_{0})\). Additionally, \(P\) refers to the internal pressure, \(\varepsilon\) refers to the contraction ratio, and \(r_{0}\) and \(\alpha_{0}\) refer to the initial depressurized radius and braid angle. The force, and also the moment, that the actuators can apply depends on the contraction of the actuators. When the fabric sPAM actuators are actuated, they can generate force in the negative z direction. For both fabric sPAM actuators, the force acts along a line a fixed distance away from the center of the backbone. The distance from the center of the backbone to the center of the bladder is physically constrained to be d = 1.8 cm. Thus, when pressure is applied to an actuator, it contracts (shortens in length) and exerts torque in the same direction as the actuator. \[l_{c}(s)=F_{ideal}(\varepsilon)d=(\pi r_{0}^{2})P[a(1-\varepsilon)^{2}-b]d,\ 0\leq \varepsilon\leq\varepsilon_{max}\] ### _Modeling Gravity_ In order to model the torque due to gravity, we must define a couple of terms. Torques must be defined around a reference point, so we define \(s\) to be the reference point of interest at an arbitrary point along the length of the manipulator. We also define a lever arm term for torque \(r(s)\in[-s,L-s]\). This measures the perpendicular distance between the force's line of action and the point of interest. Because we assume that the force of gravity always points straight downward (in the +z direction), this is just the horizontal distance between the reference point and the point at which the force occurs. The torque due to gravity comprises two components: the torque caused by the continuous soft backbone and the torque caused by the reaction force of the pin that holds the backbone up. The torque due to the backbone can be calculated as follows: \[l_{g,back}(s)=\int_{0}^{L}\rho A_{c}g*r(\sigma-s)\,d\sigma\] where \(\rho\) is the density of the backbone, \(A_{c}\) is the cross-sectional area of the backbone, and \(g\) is the gravitational acceleration, assumed to be \(g=9.81\,[m/s^{2}]\). In practice, this integral is approximated numerically through the known position of points along the length of the manipulator. Similarly, the torque due to the pin for the point of interest is computed as: \[l_{g,pin}(s)=-\rho A_{c}L*r(-s)\] Combining these terms, we obtain a complete expression for the torque due to gravity. \[l_{g}(s)=l_{g,back}(s)+l_{g,pin}(s)\] \[=\int_{0}^{L}\rho A_{c}g*r(\sigma-s)\,d\sigma-\rho A_{c}L*r(-s)\] ## Controller Design In this section, we review an infinite-dimensional controller from our previous work [10] and show how to tune some parameters of this infinite-dimensional controller such that it becomes realizable by only finitely many actuators. Since shear and elongation deformations are ignored in the Kirchhoff case, the position of the soft robot can be uniquely determined by the angle \(\theta\). Thus, it suffices to consider the control problem for (1).
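A minimal numerical sketch of the actuator moment \(l_{c}\) and the gravity torque \(l_{g}\) described above is given below. It assumes the backbone is discretized into arc-length samples with known planar positions; apart from d = 1.8 cm and g = 9.81, all parameter values are placeholders, and the pin term reproduces the expression as given in the text.

```python
# Sketch of the actuator moment and gravity torque, under the assumptions
# stated above (Python used for consistency with the other sketches).
import numpy as np

G = 9.81          # gravitational acceleration [m/s^2]
D = 0.018         # moment arm from backbone center to bladder center [m]

def actuator_moment(P: float, eps: float, r0: float, alpha0: float) -> float:
    """Ideal McKibben moment: l_c = F_ideal(eps) * d."""
    a = 3.0 / np.tan(alpha0) ** 2
    b = 1.0 / np.sin(alpha0) ** 2
    force = np.pi * r0**2 * P * (a * (1.0 - eps) ** 2 - b)
    return force * D

def gravity_torque(y: np.ndarray, s: np.ndarray, rho: float, A_c: float) -> np.ndarray:
    """Approximate l_g(s) at every arc-length sample, using the horizontal
    distance y(sigma) - y(s) as the lever arm r(sigma - s)."""
    L = s[-1]
    l_g = np.empty_like(s)
    for i in range(len(s)):
        lever = y - y[i]                          # horizontal lever arms
        l_back = np.trapz(rho * A_c * G * lever, s)
        l_pin = -rho * A_c * L * (y[0] - y[i])    # pin reaction term, as written in the text
        l_g[i] = l_back + l_pin
    return l_g
```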
After substituting the actuator and gravity models, we obtain the following complete control system: \[\partial_{t}\theta=w_{x},\quad\partial_{t}w_{x}=l, \tag{2}\] where \[l=\frac{1}{\rho J_{x}}(EJ_{x}\partial_{ss}\theta+(\pi r_{0}^{2})P[a(1-\varepsilon)^{2}-b]d+l_{g}), \tag{3}\] and the pressure \(P\) is the actual input. Assume the objective is to track the desired configuration trajectory given by \[\partial_{t}\theta_{*}=w_{*,x}.\] The position and velocity error terms are defined as: \[e_{\theta}=\sin(\theta-\theta_{*}),\quad e_{w_{x}}=w_{x}-w_{*,x}.\] Note that \(e_{\theta}\to 0\) implies \(\theta\rightarrow\theta_{*}\). In [10], we presented an infinite-dimensional geometric controller for rotational control of 3D Cosserat rod models, which asymptotically drives the soft robot toward the desired configuration trajectory. In the planar case, the geometric controller reduces to \[l_{*}=\partial_{tt}\theta_{*}-k_{\theta}e_{\theta}-k_{w_{x}}e_{w_{x}}, \tag{4}\] where \(k_{\theta}(s),k_{w_{x}}(s,t)\in\mathbb{R}\) are feedback gains. We have the following convergence result for this controller. **Theorem 1**: _[_10_]_ _Consider the soft robot system (2). If there exist positive functions \(k_{\theta}(s),k_{w_{x}}(s,t)\) such that \(l\equiv l_{*}\) for all \(t\), then \(\big{(}e_{\theta}(s,t),e_{w_{x}}(s,t)\big{)}\to 0\) for all \(s\) exponentially._ This theorem implies that if we can design the pressure \(P\) such that the actual input (3) has the form (4), then we can guarantee the convergence of the configuration tracking objective. The challenge is that the desired controller (4) is infinite-dimensional while the actual controller (3) always lies in a finite-dimensional functional space as \(P\) is finite-dimensional.

Fig. 2: **Symbolic representation of the soft manipulator** and visualization of the braid angle \(\alpha_{0}\) and initial radius \(r_{0}\).

If we prescribe positive values for \(k_{\theta},k_{w_{x}}\), it is almost impossible to find the correct air pressure \(P\) such that \(l\equiv l_{*}\) holds. Nevertheless, we point out that \(k_{w_{x}}\) is allowed to be a function of \((s,t)\). Our key idea is to use this flexibility and tune the value of \(k_{w_{x}}\) at every \(t\) such that the corresponding desired controller (4) becomes realizable by the actual controller (3). This motivates us to formulate the following (convex) quadratic programming problem at every \(t\): \[\begin{array}{rl}\min_{P,k_{w_{x}}(s)}&\|l-l_{*}\|_{L^{2}}^{2}\\ \text{s.t.}&|P|\leq P_{max}\\ &k_{w_{x}}(s)\geq\bar{k},\quad\forall s,\end{array} \tag{5}\] where \(P_{max}\) is the maximum pressure and \(\bar{k}>0\) is a small constant. Intuitively, this programming problem enforces the actual and desired controllers (3) and (4) to be equal by simultaneously determining the values of the actuator pressure \(P\) and the feedback gain \(k_{w_{x}}\) in real-time, subject to the constraints that \(P\) is bounded and \(k_{w_{x}}(s)\) is positive (for guaranteeing stability). **Remark 1**: _A few comments are in order. First, the programming problem (5) is infinite-dimensional but can be numerically solved by discretization or parametrization. Since it is a convex quadratic program, there exist efficient commercial solvers that can solve it in real-time. Second, whether a solution exists depends on whether the desired configuration is reachable given the actuator profile. Our hypothesis is that a solution always exists if the desired static configuration is assignable. This problem is under study.
Even if the desired configuration is not reachable, (5) still tries to find inputs that drive the soft robot as close as possible to the desired configuration. Third, one may allow \(k_{\theta}\) to also be a decision variable in (5) which will increase the possibility of finding a solution, but the stability results in Theorem 1 no longer hold. Nevertheless, convergence is still observed in experiments as long as the motion is sufficiently slow. Finally, all results in this section can be easily extended to the 3D case by changing (4) to its 3D version given in [10]._ ## IV Experimental set-up Our experimental setup includes the soft manipulator, a frame from which the soft manipulator hangs, a Logitech HD Pro Webcam C920, and a Pressure Control Subsystem that consists of an Arduino Uno and two proportional QB3 regulators. The QB3 is a closed loop pressure regulator made up of a mechanical regulator mounted to two solenoid valves, an internal pressure transducer, and electronic controls. By turning on the solenoid valves, which pressurize the mechanical regulator's pilot, the pressure is controlled. Both valves regulate the exhaust and the inlet respectively. The soft manipulator is comprised of a flexible silicone backbone and two fabric sPAMs. As the robot moves, the webcam captures an image running at 30 Hz, and the pressure is calculated as a function of the error between the current configuration and the desired configuration. _Fabric sPAM Actuators_: The robot consists of two fabric sPAM actuators. Fabric sPAMs are well suited for soft robotics because of their durability, ease of construction, and fast response time [16]. The actuators were made of a single layer of woven, air-tight material. The material is airtight, silicone and urethane-impregnated, rip-stop nylon often used in tents and tarps. This fabric is reinforced in the radial direction like Mckibben actuators. When the pressure is turned off, the muscle is in its lengthened state. As the pressure is applied, the system expands radially and contracts axially. This applies force inward axially [16]. These actuators are fabricated by cutting rip-stop nylon at a \(45^{\circ}\) bias, enclosing the fabric into a \(r=1.5\)\([cm]\) cylinder, and sealing that cylinder with a Silicone glue (Smooth-On SilPoxy) based on the fabrication steps described in [16]. To increase contraction we add O-rings along the length of the robot. In order to maximize contraction, testing was conducted to determine the maximum O-ring spacing that would not result in inactive regions [17]. At an O-ring spacing of 3 cm, bulging was observed in each sub-chamber which meant there were no inactive regions. The final system consisted of two fabric sPAM actuators (L = 30 cm, D = 3 cm) made of rip-stop nylon. 3mm O-Rings were added in 3 centimeter segments along the length of the robot. These fabric sPAM chambers are attached to a flexible backbone. The backbone is 30 centimeters in length, has a cross-sectional area of \(1.68\cdot\)\(10^{-4}m^{2}\), and was made of the Dragon Skin 10 Silicone produced by Smooth-On. This material was selected because its flexibility would not inhibit actuation while having extremely high tensile strength subject to deformation. _Pneumatic Control Board_: The pneumatic control board includes two Proportion Air QB3 proportional regulators with built-in pressure sensors. 
This allows the regulator to control the closed-loop pressure of each pressure channel to be the desired value.

\begin{table} \begin{tabular}{|l|l|l|} \hline Property & Symbol & Value \\ \hline Density & \(\rho\) & \(1070\)\(\frac{kg}{m^{3}}\) \\ Young's Modulus & \(E\) & \(90\)\(kPa\) \\ Cross-sectional Area & \(A_{c}\) & \(1.68\cdot 10^{-4}\)\(m^{2}\) \\ Moment of Area & \(J_{x}\) & \(0.12\cdot 10^{-8}\)\(m^{4}\) \\ Length & \(L\) & \(0.3\)\(m\) \\ \hline \end{tabular} \end{table} TABLE I: Physical parameters of the backbone of the soft manipulator.

Fig. 3: **Closed-loop shape tracking block diagram**

The desired pressure for each channel is set via a PWM signal from the Arduino. This PWM is converted to a true-analog signal via an analog conversion circuit. _Computer Vision State Observation_: In order to calculate the posture of the soft robot, the position and orientation of points of interest must be known. To do this, ten circular markers with four color quadrants were added along the length of the backbone. We first calculated the position of all ten points from a grey-scale version of the image using a circular Hough transform (Fig. 5). This was implemented using MATLAB's function _imfindcircles_ from the Image Processing Toolbox. Then we determined the rotations of four lines connecting the central part of the marker and the centroids of all four color segments. Since the order of the color segments is known, we used these four angles to calculate the final marker's rotation angle. Using four color segments instead of one makes this approach more robust against noise. ### _Estimating Contraction_ The force of the pneumatic artificial muscles depends on the contraction, which can be calculated from the curvature of the backbone. We can derive the arc length of the backbone, \(L=r\theta\), and the arc length of an actuator, \(L(1-\varepsilon)=(r-d)\theta\), as a function of the radius of curvature, \(r\), and the contraction ratio \(\varepsilon=\Delta L/L\) (a unitless representation of the shrinkage of the chamber) (Fig. 4). Since the backbone and chamber 2 are physically connected, they must share the same angle. This means that we can solve for \(\theta\) in each equation and eliminate it from the system: \[\frac{L}{r}=\frac{L(1-\varepsilon)}{r-d}\to r-d=r(1-\varepsilon)\] Using this expression, we can apply known pressures to the system and find the radius of curvature of the corresponding section of the backbone. The radius of curvature is determined by using the Pratt Method proposed in 1987 to find the circle that best fits the curvature of the backbone [18]. ### _Experimental Results_ The proposed controller is effectively an infinite-dimensional PD controller. However, a full loop from image acquisition, to data processing, to setting pressure values takes about 0.5 seconds to complete. This means that any information about the angular velocity would be outdated by the time the corresponding pressure change could be reflected in the robot. Thus, in practice, the dependence on the angular velocity of each point was eliminated, making the system an infinite-dimensional proportional controller. Although the controller will theoretically converge for any positive values of \(k_{\theta}\), we want to maximize this value so the controller converges within a reasonable amount of time. To do this, we identified the lower bound on the \(k_{\theta}\) terms in the optimization problem that would minimize the convergence time without excessive overshoot. In general, the convergence time decreases as the lower bound increases.
However, as the lower bound approaches \(10^{6}\), overshoot starts to become a significant issue. In two of the five trials conducted, the soft manipulator took over two minutes to converge to the desired shape. In the instances where the system did converge with this lower bound, the soft manipulator converged to the desired location the fastest of any trial. However, if the system got stuck continuously overshooting the target, this was considered a failure for the controller. The decision was made to trade precision for convergence speed, and a lower bound of \(10^{5}\) was selected. Several tests were conducted to demonstrate the controller's ability to converge to a desired configuration. The plot in Fig. 6 shows the error of the system decaying over time so that the norm of the error vector is within 0.15 of the desired configuration. This 0.15 stopping criterion was determined by the reliability of the computer vision system.

Fig. 4: **Unactuated and actuated states of the fabric sPAMs.** **A.** Unactuated (left, P = 0 kPa) and actuated (right, P = 30 kPa) states of a single fabric sPAM. **B.** Unactuated (left) and actuated (right) states of a two-segmented arm.

Fig. 5: **Image processing example.** (Left) An example image of the arm with markers. (Right) Automatic detection of markers (red circles) and markers' orientation (green lines inside red circles).

Theoretically, the system will remain fixed within a plane. However, small imperfections cause the manipulator to twist slightly, especially at higher pressures. Because of the error due to twisting, the stopping criterion was implemented. Next, we observed how the controller performed across a range of pressure values, as shown in Fig. 7. For all the pressure ranges, the found configuration corresponds to the desired shape quite well. For low pressure values, the actual configuration is nearly identical to the desired configuration. The shape deviates more for pressures above 20 kPa. We attribute this to two factors. The first is the twisting behavior that causes errors in the computer vision estimation at higher pressures. Additionally, the majority of the contraction occurs between 0 and 20 kPa, which means that as the pressure increases beyond 20 kPa, the pressure that corresponds to the stopping criterion will deviate more from the known pressure. However, the estimated shapes on the high end of the pressure range are still reasonable estimates of the shape. Overall, these results show promise for the feasibility of partial differential equation control in hardware. With the development of more sophisticated computer vision methods and fine-tuned models, the hope is to reliably demonstrate the ability to control soft robots of any kind with this technique. ## V Conclusion In this work, we demonstrated the efficacy of partial differential equation feedback control using a finite number of actuators in real time for the shape control of a planar soft robot. In order to do this, we presented a simplified version of Cosserat Rod Theory, manufactured a two-chamber pneumatic soft robot, and developed a computer vision algorithm to observe the state of the system. In future work, we plan to demonstrate that this controller works for other pneumatic actuators and extend the results to three dimensions. ## VI Acknowledgements We would like to thank Adam Czajka of the University of Notre Dame for his help with computer vision topics.
2305.13551
How Fragile is Relation Extraction under Entity Replacements?
Relation extraction (RE) aims to extract the relations between entity names from the textual context. In principle, textual context determines the ground-truth relation and the RE models should be able to correctly identify the relations reflected by the textual context. However, existing work has found that the RE models memorize the entity name patterns to make RE predictions while ignoring the textual context. This motivates us to raise the question: ``are RE models robust to the entity replacements?'' In this work, we operate the random and type-constrained entity replacements over the RE instances in TACRED and evaluate the state-of-the-art RE models under the entity replacements. We observe the 30\% - 50\% F1 score drops on the state-of-the-art RE models under entity replacements. These results suggest that we need more efforts to develop effective RE models robust to entity replacements. We release the source code at https://github.com/wangywUST/RobustRE.
Yiwei Wang, Bryan Hooi, Fei Wang, Yujun Cai, Yuxuan Liang, Wenxuan Zhou, Jing Tang, Manjuan Duan, Muhao Chen
2023-05-22T23:53:32Z
http://arxiv.org/abs/2305.13551v3
# How Fragile is Relation Extraction under Entity Replacements? ###### Abstract Relation extraction (RE) aims to extract the relations between **entity names** from the **textual context**. In principle, textual context determines the ground-truth relation and the RE models should be able to correctly identify the relations reflected by the textual context. However, existing work has found that the RE models memorize the entity name patterns to make RE predictions while ignoring the textual context. This motivates us to raise the question: "are RE models robust to the entity replacements?" In this work, we operate the random and type-constrained entity replacements over the RE instances in TACRED and evaluate the state-of-the-art RE models under the entity replacements. We observe the 30% - 50% F1 score drops on the state-of-the-art RE models under entity replacements. These results suggest that we need more efforts to develop effective RE models robust to entity replacements. We release the source code at [https://github.com/wangywUST/RobustRE](https://github.com/wangywUST/RobustRE). ## 1 Introduction Recent literature has shown that the sentence-level relation extraction (RE) models may overly rely on entity names for RE instead of reasoning from the textual context Peng et al. (2020); Wang et al. (2022). This problem is also known as _entity bias_Longpre et al. (2021); Qian et al. (2021); Xu et al. (2022); Wang et al. (2022): the spurious correlation between entity names and relations. This motivates us to raise a question: "how robust are RE models under entity replacements?" Entity bias degrades the RE models' generalization, such that the entity names can mislead the models to make wrong predictions. However, a seemingly conflicting phenomenon is that RE models exhibit high (in-distribution) accuracy on standard benchmarks, such as TACRED. In our work, we find that these benchmarks are prone to have shortcuts from entity names to ground-truth relations (see Fig. 2), low entity diversity, and a large portion of incorrect entity annotations. These issues suggest that, given the presence of entity bias, the current benchmarks are not challenging enough to evaluate the generalization of RE in practice. Most existing methods for evaluating the generalizability of NLP focus on sentence classification Jin et al. (2020); Li et al. (2020); Minervini and Riedel (2018) and question answering Jia and Liang (2017); Ribeiro et al. (2018); Gan and Ng (2019), but these methods lack special designs to seize on the entity bias in RE. In this work, we propose a **type-constrained** and **random** entity replacement method: ENTRE. **Type-constrained** means we replace the named entity in the type [PERSON] or [ORGANIZATION] with the new entity belonging to the same type as the original entity. **Random** means we randomly select the entity names from a Wikipedia entity lexicon that consists of 24,933 organizations and 902,007 person entities for replacements. These two principles guarantee the effectiveness of entity replacement to produce valid and diverse RE instances. We apply ENTRE to TACRED to produce ENTRED, a challenging RE benchmark with fewer shortcuts and higher entity diversity. We evaluate Figure 1: The performance of state-of-the-art RE models drop a lot under entity replacements. the RE models on the instances with replaced entity names produced by ENTRE. 
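The replacement operation itself is simple to implement. The sketch below shows one possible token-level realization of the type-constrained, random substitution; the lexicon contents and the instance format are hypothetical examples, and the released repository linked above remains the authoritative implementation.

```python
import random

# Minimal sketch of type-constrained, random entity replacement.
# Lexicon contents and the instance format are hypothetical placeholders.
lexicon = {
    "PERSON": ["Ada Lovelace", "Grace Hopper"],        # ~902k names in the real lexicon
    "ORGANIZATION": ["Acme Corp", "Globex"],           # ~25k names in the real lexicon
}

def replace_entities(tokens, spans, rng):
    """spans: list of ((start, end), entity_type); only PERSON/ORG are replaced."""
    new_tokens = list(tokens)
    # process right-to-left so earlier replacements do not shift later spans
    for (start, end), etype in sorted(spans, key=lambda x: -x[0][0]):
        if etype in lexicon:
            new_tokens[start:end] = rng.choice(lexicon[etype]).split()
    return new_tokens

rng = random.Random(0)
sent = "Steve Jobs founded Apple in 1976 .".split()
spans = [((0, 2), "PERSON"), ((3, 4), "ORGANIZATION")]
print(" ".join(replace_entities(sent, spans, rng)))
```

Processing the spans from right to left keeps earlier replacements from shifting the indices of later spans when the new names have a different number of tokens.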
We analyze the RE models under entity replacements in order to answer four research questions: (Q1) Does ENTRE reduce prediction shortcuts from entity names to the ground-truth relations? (Q2) Does ENTRE improve the entity diversity? (Q3) How do the strong RE models perform under entity replacements? (Q4) How to improve the generalization of RE? We observe several key findings. First, ENTRE reduces the shortcuts by more than 50% on many relations, and improves the subject name diversity by more than 25 times compared to TACRED. Second, the strong RE models LUKE Yamada et al. (2020) and IRE Zhou and Chen (2021) tend to memorize entity-relation patterns to infer the relation instead of reasoning based on the textual context that actually describes the relation. This phenomenon causes the models to be brittle to entity replacements, resulting in a significant performance drop of 30% - 50% in terms of the F1 score. Third, the recent causal inference approach CoRE Wang et al. (2022) improves the robustness at a higher magnitude than other methods. We believe the proposed benchmark ENTRED and ENTRE will benefit future research toward improving the robustness of RE. ## 2 Analysis of Entity Names in TACRED Before building ENTRED, we first analyze the existing popular RE datasets. Our analysis is focused on the following three perspectives: 1) the correctness of entity name annotations; 2) the diversity of entity names; 3) the prediction shortcuts from entity names to the ground-truth relations. In the popular TACRED Zhang et al. (2017), TACREV Alt et al. (2020), and Re-TACRED Stoica et al. (2021) datasets, we find that: first, there exists a portion of incorrect entity name annotations; second, many entity names are reused more than one hundred times across instances; third, the entity names in more than 70% of the instances act as shortcuts to the ground-truth relations. We introduce the details as follows. ### Incorrect Entity Annotations In the TACRED Zhang et al. (2017), TACREV Alt et al. (2020), and Re-TACRED Stoica et al. (2021) datasets, there exist quite a few incorrect entity annotations. To detect these incorrect entity annotations, we use a BERT-based NER model Devlin et al. (2019) to automatically annotate the subject and object entity names in the TACRED dataset. Then, we conduct a manual investigation on the entities where the NER annotations differ from the original TACRED annotations. We find that more than 10% of the test instances contain incorrect entity annotations.1 We present two examples in Fig. 3. Using these mistaken entity annotations to evaluate the RE models compromises our goal of correctly measuring RE performance. Footnote 1: Including both incorrect span and type annotations.

Figure 3: Two examples of incorrect entity annotations in TACRED.

### Diversity of Entity Names The TACRED, TACREV, and Re-TACRED datasets have low diversity of entity names: most entity names repeatedly appear in a large portion of instances (see Fig. 4). In the TACRED dataset, only 420 distinct entity names appear as the subjects of its 15,509 test instances. For example, _"ShopperTrak"_, as the subject, has repeatedly appeared as the subject entity in 270 instances. This heavily repeated use of entity names increases the risk that RE models rely on entity bias to make predictions. Also, with these benchmarks, it is impossible
to comprehensively evaluate the generalization of RE models on a diverse set of entity names to imitate real-world scenarios.

Figure 2: TACRED offers many shortcuts from entity names to ground-truth relations in the test set, where the model predicts the correct relation even when only given the entity names, despite all textual context being removed. As a result, it is not challenging enough to measure the generalization under entity bias.

### Causal Inference for Entity Bias We follow the prior work (Wang et al., 2022) to analyze the entity bias based on causal inference. (Wang et al., 2022) builds the causal graph of RE as a directed acyclic graph: \((E,X)\to Y\) in Figure 5. \(X\) is the input text, \(E\) denotes the entity mentions, and \(Y\) is the relation extraction result. On the edges \((X,E)\to Y\), the RE model encodes \(E\) and \(X\) to predict the relation \(Y\). Based on the causal graph displayed in Figure 5, we can diagnose whether the entities have shortcuts to the relation. Wang et al. (2022) distill the entity bias by counterfactual analysis, which assigns a hypothetical combination of values to variables in a way that is counter to the empirical evidence obtained from data. We mask the tokens in \(X\) to conduct the intervention \(X=\bar{x}\) on \(X\), while keeping the variable \(E\) as the original entity mentions \(e\). In this way, the textual context is removed and the entity information is maintained. Accordingly, the counterfactual prediction is denoted as \(Y_{\bar{x},e}\) (see Figure 2). \(Y_{\bar{x},e}\) refers to the output, i.e., a probability distribution or a logit vector, where only the entity mentions are given. ### Shortcuts to the Ground-Truth Relations Existing work has found that the popular RE benchmarks' test sets provide abundant shortcuts from entity names to ground-truth relations (Wang et al., 2022; Peng et al., 2020). In other words, on many instances, the model need not "extract" the relation from the textual context but can infer the correct prediction directly through shortcuts from entities. To verify these observations, we conduct a preliminary study of the shortcuts using the strong RE model LUKE (Yamada et al., 2020) on the TACRED dataset. We first compute the instance-wise relation extraction results on the TACRED test set. Then, we analyze the shortcuts from entity names to the relations based on causal inference (see details in Sec. 2.3). We find that there exists a large portion of instances having shortcuts from entity names to the ground-truth relations. We visualize the ratio of instances that present shortcuts for different relations in Fig. 6. Last but not least, we observe similar phenomena on other models and on the TACREV and Re-TACRED datasets as well. The analyses suggest that these benchmarks do not accurately evaluate the "extraction" capability of RE models without the shortcuts from entity names.

Figure 4: The number of different subject entity names (red) is much lower than the number of instances (blue) in the test sets of the TACRED, TACREV, and Re-TACRED datasets. In other words, the diversity of entity names in these datasets' test sets is limited.

Figure 5: The original causal graph of RE models (left) together with its counterfactual alternative for the entity bias (right). The shading indicates the mask of the corresponding variables.

Figure 6: The ratio of instances with shortcuts (the entity bias is the same as the ground-truth relation) in the TACRED test set.
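The counterfactual probe \(Y_{\bar{x},e}\) can be approximated in a few lines: every token outside the subject and object spans is replaced by a mask symbol, and the prediction on this context-free input is compared with the gold relation. In the sketch below, `predict_relation` is a hypothetical stand-in for LUKE or any other RE classifier, not an actual API.

```python
# Sketch of the counterfactual shortcut probe: context tokens are masked,
# entity mentions are kept, and a prediction that already matches the gold
# relation counts the instance as offering a shortcut.
MASK = "[MASK]"

def mask_context(tokens, subj_span, obj_span):
    keep = set(range(*subj_span)) | set(range(*obj_span))
    return [tok if i in keep else MASK for i, tok in enumerate(tokens)]

def shortcut_ratio(instances, predict_relation):
    hits = 0
    for ex in instances:
        masked = mask_context(ex["tokens"], ex["subj_span"], ex["obj_span"])
        if predict_relation(masked, ex["subj_span"], ex["obj_span"]) == ex["relation"]:
            hits += 1
    return hits / max(len(instances), 1)
```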
In other words, the standard RE benchmarks are not challenging enough to evaluate whether the RE models can extract the correct relations from the textual context. In our work, we replace the entity names to reduce the shortcuts, mitigating the possibility that RE models rely on the shortcut of entity bias to achieve over-optimistically high RE performance. Our ENTRED is able to better simulate real-world scenarios with fewer shortcuts and higher entity diversity, which provides a better evaluation of the generalization of RE models. ## 3 Entity Replacement for RE We present ENTRE: a simple yet effective procedure to generate high-quality RE instances with entity replacements. ENTRE replaces entity names in the RE instances in a random and type-constrained manner. We apply ENTRE to the test set of TACRED to evaluate the state-of-the-art RE models' robustness under entity replacements. ### Targeting the Instances for Replacements We desire entity replacements that do not affect the soundness of the language. As we have analyzed in Sec. 2.1, there is a significant amount of incorrect entity annotations. To handle these incorrect entity annotations, we use a BERT-based NER model Devlin et al. (2019) to re-annotate the entities in the TACRED dataset. Then, we further conduct a manual investigation of the entity annotations. We filter out instances with incorrect entity annotations and only replace the tokens that belong to named entities. This ensures that our entity name replacements do not alter the ground-truth relation labels. Besides the incorrect entity annotations, there are also some entities for which replacement may inevitably cause noise. For example, some entities belong to the [MISC] (miscellaneous) class. If we replace a [MISC] entity with another [MISC] one, it is likely that we will break the semantics of the original sentence. In contrast, replacing the [PERSON] and [ORGANIZATION] entities with those belonging to the same type generally does not affect the ground-truth relations. We notice that all the instances in TACRED have a [PERSON] or [ORGANIZATION] entity as the subject or object. Therefore, in our work, we focus on replacing the [PERSON] and [ORGANIZATION] entities. ### A Large Lexicon of Entities We set the following standards for the new entity names selected for replacements: 1. The new entity belongs to the same type as the replaced one. 2. The new entity names are more diverse. These two principles contribute to making the resulting instances _natural_ - i.e., containing real, valid entities that are of the same class as the original entities, and are linguistically sound; _challenging_ - i.e., the new entities may not offer shortcuts to the model, which cannot easily get the correct extraction result by seeing only the entity names; and _comprehensive_ - i.e., the robustness of RE is evaluated on a more diverse set of entities. To satisfy the above principles, we first build up a large entity name lexicon to provide the new entity names for replacements. The size of the entity name lexicon determines the diversity of entity names in our new RE benchmark ENTRED. Also, a larger entity name lexicon can help us to evaluate the generalization of RE models on more out-of-domain entity names at test time. Therefore, in addition to the entity names appearing in TACRED, we collect entity names from Wikipedia belonging to the categories of person and organization to enrich the entity name corpus.
Overall, we collect 24,933 organization and 902,007 person names from Wikipedia.2 Footnote 2: [https://dumps.wikimedia.org/emwiki/latest/emwiki-latest-pages-articles.xml.bz2](https://dumps.wikimedia.org/emwiki/latest/emwiki-latest-pages-articles.xml.bz2) ### Entity Replacements Based on the constructed entity lexicon, we propose ENTRE: a type-constrained and random entity replacement method. **Type-constrained** means we replace the named entity in the type [PERSON] or [ORGANIZATION] with the new entity belonging to the same type as the original entity. **Random** means we randomly select the entity names from our entity lexicon that consist of 24,933 organizations and 902,007 person entities for replacements. These two principles guarantee the effectiveness of entity replacement to produce valid RE instances. We iterate over TACRED instances and replace the entity names. We summarize ENTRE as the following pipeline: 1. Collecting the instances with predictions as same as the ground-truth relation. 2. Replace the entity names for the collected entities in Step 1. Repeat step 1. The above steps can be repeated for many times, and a higher repetition time leads to a higher level of the adversary. We can stop the repeating until all the entities in the lexicon have been used. But that will induce too long running time. Therefore, in our work, we set the maximum number of repetitions as 200. Step 1 requires the inference on many test instances, which is time-consuming. Considering that the F1 score's calculation of RE takes the "no_relation" as the background class, we can alternatively collect the instances not belonging to the "no_relation" class in Step 1. We denote such an alternate as ENTRE-fast, which saves 90% evaluation time in the experiments. We create the challenging RE benchmark ENTRED based on the public benchmark TACRED by applying ENTRE on the test set of TACRED. The overall statistics of ENTRED are shown in Table 3, alongside the statistics of the original TACRED dataset. The number of sentences in ENTRED is slightly smaller than that in TACRED because we filter out the instances having incorrect entity annotations. We showcase ENTRE using TACRED in this paper because of its popularity on evaluating RE models and comprehensive relation-type coverage. However, our ENTRE can be applied to other RE datasets. ## 4 Experiments In this section, we investigate ENTRE and use it to evaluate the robustness of the strong RE models LUKE Yamada et al. (2020), IRE Zhou and Chen (2021), and other methods that can improve the robustness of RE. Our experimental settings closely follow those of previous work Zhang et al. (2017); Zhou and Chen (2021); Nan et al. (2021) to ensure a fair comparison. We organize our results and analysis as four main research questions and their answers. ## 5 _Q1: Does_ Entre _reduce shortcuts_? ENTRE leads to fewer shortcuts from entity names to ground-truth relationsWe perform causal inference over ENTRED to analyze how many instances have shortcuts from entity names to the ground-truth relations after the entity replacements. We present the comparison of the shortcut ratio on ENTRED and TACRED on different relations in Fig. 7. We observe that ENTRED greatly reduces the shortcuts for more than 50% instances on most relations. 
As a result, when being evaluated using ENTRED, RE models have to extract the informative signals describing the ground-truth relations from the textual context, rather than rely on the shortcuts \begin{table} \begin{tabular}{l|c|c} \hline \hline **Benchmark** & TACRED & ENTRED \\ \hline \hline \(\#\) Sentences & 15,509 & 12,419 \\ \(\#\) Tokens & 539,306 & 457,121 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of the TACRED and ENTRED benchmarks. Figure 8: The number of subject entity names, person entity names, and organization entity names in the test set of TACRED (red) and ENTRED (blue). Figure 7: ENTRED significantly reduces the ratio of instances with shortcuts (the entity bias is as same as the ground truth relation) compared with TACRED. from the entity names. ## 2 Q2: Does ENTRE improve diversity? Comparison between ENTRED and existing benchmarks.As we have analyzed in Sec. 2.1, the diversity of entity names in the existing benchmarks TACRED, TACREV and Re-TACRED are rather limited. These limitations hinder the evaluation of the generalization and generalization of RE. In our work, thanks to our larger lexicon built from the Wikipedia entity names, our ENTRED have much higher diversity than the TACRED and Re-TACRED, as shown in Fig. 8. With these diverse entity names, ENTRED is able to evaluate the performance of RE models on a larger scale of diverse entities, which better imitates the real-world scenario. enhances its entity-level generalization ability and makes RE models focus more on the textual context for inference, resulting in a better generalization under entity name replacements. Other methods, however, lead to lower improvements for LUKE, potentially because they cannot effectively capture the biased patterns between relations and entity names. \begin{table} \begin{tabular}{l c c c} \hline \hline **Method** & **TACRED** & **ENTRED (Ours)** & \(\Delta\) \\ \hline \hline LUKE (Yamada et al., 2020) & 72.7 & 45.0 & \(\downarrow 44\%\) \\ \hline w/ Resample (Burnaev et al., 2015) & 73.1 & 45.8 & \(\downarrow 37\%\) \\ w/ Entity Mask (w/o name, w/o type) (Zhang et al., 2017) & 21.3 & 21.0 & \(\downarrow 1\%\) \\ w/ Entity Mask (w/o name, w/ type) (Zhang et al., 2017) & 44.9 & \(\uparrow 2\%\) \\ w/ Entity Mask (w/ name, w/ type) (Zhang et al., 2017) & 72.3 & 61.2 & \(\downarrow 15\%\) \\ w/ Focal (Lin et al., 2017) & 72.9 & 47.1 & \(\downarrow 35\%\) \\ w/ CoRE (Wang et al., 2022) & **74.6** & **61.7** & \(\downarrow 17\%\) \\ \hline \hline \end{tabular} \end{table} Table 2: F1 scores (%) and the performance dropping of RE on the test sets of TACRED and our ENTRED. The best results in each column are highlighted in **bold** font. We additionally report the performance drop (%) compared with the performance on the original TACRED dataset. \begin{table} \begin{tabular}{l|c|c|c} \hline \hline **Original Instance** & **Original Prediction** & **New Entity Names** & **New Prediction** \\ \hline Finance Ministry spokesperson Chileshe Kandeta & & & \\ who confirmed this on Sunday said Magande & & American Association of & \\ signed a loan agreement of 31 million dollars & no\_relation \& & University Women, \\ with the \(\underline{\Delta\text{D}}\)fer the country ’s Pogyetry, & & Willingboro\_Chapter & \\ Reduction Budget Support. 
& & & \\ \hline John Graham, a 55-year-old man from Canada, & & & \\ is accused of shooting Aquash in the head and & & Liu Shaozhuo, & \\ leaving her to die on the Pine Ridge reservation & stateoprovince\_of\_death \& South Dakota & no\_relation \& \\ in South Dakota. & & & \\ \hline After the staffing firm Hollister Inc lost 20 of its 85 employees, it gave up nearly a third of its 3,750-square-foot Burlington office, allowing the property owner to put up a dividing wall to create a space for another tenant. & number\_of\_employees/members \& & Yoruba Academy, \$5. & alternate\_names \& \\ \hline Kercher ’s mother, Arline Kercher, tells court in emotional testimony that she will never get over her daughter ’s brutal death. & children \& & Sanju Yadav, Matti & no\_relation\& \\ \hline Lt. Assaf Ramon, the son of Israel ’s first astronaut, Col. Ilan Ramon, who died in the space shuttle Columbia disaster in 2003, was killed Sunday when an F16-A plane he was & children \& & Sanju Yadav, Matti & no\_relation\& \\ & & Koistiene & & \\ \hline Police have released scant information about the killing of 61-year-old Carol Daniels, whose body & & & \\ was found Sunday inside the Christ Holy & & Aaron Morgan, & \\ Sanctified Church, a weather-beaten building on a rundown block near downtown Anandarko in & & Angel Guillermo Heredia & no\_relation \& \\ Southwest Oklahoma. & & & \\ \hline \hline \end{tabular} \end{table} Table 3: A case study for LUKE on the relation extraction benchmark TACRED and our ENTRED. Underlines and wavy lines highlight the subject and object entities respectively. We report the original prediction, the new entity names for replacements and the prediction in ENTRED. Related Work Relation extraction is a sub-task of information extraction that aims to identify semantic relations between entities from natural language text (Zhang et al., 2017). It is an effective way to automatically acquire important knowledge and plays a vital role in Natural Language Processing (NLP). Relation Extraction is the key component for building relation knowledge graphs, and it is of crucial significance to natural language processing applications such as structured search, sentiment analysis, question answering, and summarization (Huang and Wang, 2017). Early research efforts (Nguyen and Grishman, 2015; Wang et al., 2016; Zhang et al., 2017) train RE models from scratch based on lexicon-level features. The recent RE work fine-tunes pretrained language models (PLMs; Devlin et al. 2019; Liu et al. 2019). For example, K-Adapter (Wang et al., 2020) fixes the parameters of the PLM and uses feature adapters to infuse factual and linguistic knowledge. Recent work focuses on utilizing the entity information for RE (Zhou and Chen, 2021; Yamada et al., 2020), but this leaks superficial and spurious clues about the relations (Zhang et al., 2018). Despite the biases in existing RE models, scarce work has discussed the spurious correlation between entity mentions and relations that causes such biases. Our work builds an automated pipeline to generate natural instances with fewer shortcuts and large coverage at scale to reflect the serious effects of entity bias on the RE models. There are also work in other domains aiming to evaluate models' generalization to perturbed inputs. For example, Jia and Liang (2017) attack reading comprehension models by adding word sequences to the input. Gan and Ng (2019) and Iyyer et al. (2018) paraphrase the input to test models' over-sensitivity. Jones et al. 
(2020) target adversarial typos. Si et al. (2021) propose a benchmark for reading comprehension with diverse types of test-time perturbation. These works focus on different domains than our research does, and they do not consider the composition of RE examples. Little attention is drawn to the entities in the sentences, and many attacks (e.g. character swapping, word injection) may make the perturbed sentences invalid. To the best of our knowledge, this work is among the first to propose a straightforward, dedicated pipeline for generating natural adversarial examples for the RE task, which takes into account the serious effects of entity bias in RE models. ## 6 Conclusion Our contributions in this paper are three-fold. 1) Methodology-wise: we propose ENTRE, an end-to-end entity replacement method that reduces the shortcuts from entity names to ground-truth relations. 2) Resource-wise: we develop ENTRED, a straightforward method for generating natural and counterfactual entity replacements for RE, which produces ENTRED, a benchmark for auditing the generalization of RE models under entity bias. 3) Evaluation-wise: our experimental results and analysis provide answers to four main research questions on the generalization of RE. We believe ENTRED and the entity replacement method ENTRE can benefit the community working to increase the RE models' generalization under entity bias.
2307.02225
Efficient Information Reconciliation for High-Dimensional Quantum Key Distribution
The Information Reconciliation phase in quantum key distribution has a significant impact on the range and throughput of any QKD system. We explore this stage for high-dimensional QKD implementations and introduce two novel methods for reconciliation. The methods are based on nonbinary LDPC codes and the Cascade algorithm, and achieve efficiencies close to the Slepian-Wolf bound on q-ary symmetric channels.
Ronny Mueller, Domenico Ribezzo, Mujtaba Zahidy, Leif Katsuo Oxenløwe, Davide Bacco, Søren Forchhammer
2023-07-05T12:06:27Z
http://arxiv.org/abs/2307.02225v2
# Efficient Information Reconciliation for High-Dimensional Quantum Key Distribution ###### Abstract The Information Reconciliation phase in quantum key distribution has significant impact on the range and throughput of any QKD system. We explore this stage for high-dimensional QKD implementations and introduce two novel methods for reconciliation. The methods are based on nonbinary LDPC codes and the Cascade algorithm, and achieve efficiencies close the the Slepian-Wolf bound on q-ary symmetric channels. ## 1 Introduction Quantum Key Distribution (QKD) protocols allows for secure transmission of information between two entities, Alice and Bob, by distributing a symmetric secret key via a quantum channel [1, 2]. The process involves a quantum stage where quantum information is distributed and measured. This quantum stage is succeeded by post-processing. In this purely classical stage, the results of the measurements undergo a reconciliation process to rectify any discrepancies before a secret key is extracted during the privacy amplification phase. The emphasis of this research paper is on the phase of information reconciliation, which has a significant impact on the range and throughput of any QKD system. Despite the considerate development of QKD technology using binary signal forms, its high-dimensional counterpart (HD-QKD)[3] has seen significantly less research effort so far. However, HD-QKD offers several benefits, including higher information efficiency and increased noise resilience [4, 5, 6, 7]. Although the reconciliation phase for binary-based QKD has been extensively researched, little work has been done to analyze and optimize this stage for HD-QKD, apart from introducing the layered scheme in 2013 [8]. This study addresses this research void by introducing two novel methods for information reconciliation for high-dimensional QKD and analyzing their performance. Unlike the majority of channel coding applications, the (HD)-QKD scenario places lesser demands on latency and throughput while emphasizing significantly the minimization of information leakage. Spurred by this unique setting, the superior decoding performance of nonbinary LDPC codes [9], and their inherent compatibility with high dimensions, we investigate the conception and utilization of nonbinary LDPC codes for post-processing in HD-QKD protocols as the first method. The second method we investigate is the Cascade protocol [10]. It is one of the earliest proposed methods for reconciling keys. While the many rounds of communication required by Cascade and concerns about resulting limitations on throughput have led to a focus on syndrome-based methods [11, 12, 13] in the past decade, recent research has shown that sophisticated software implementations can enable Cascade to achieve high throughput even with realistic latency on the classical channel [14, 15]. Motivated by these findings, we explore the usage of Cascade in the reconciliation stage of HD-QKD and propose a modification that enables high reconciliation efficiency for the respective quantum channel. ## 2 Background In this section, we describe the general setting and channel model and introduce relevant figures of merit. We then continue to describe the two proposed methods in more detail. ### Information reconciliation The goal of the information reconciliation stage in QKD is to correct any discrepancies between the keys of the two parties while minimizing the information leaked to potential eavesdroppers. 
Generally, Alice sends a random string \(\mathbf{x}=(x_{0},...,x_{n-1})\), \(x_{i}=0,...,q-1\) of \(n\) qudits of dimension \(q\) to Bob, who measures them and obtains his version of the string \(\mathbf{y}=(y_{0},...,y_{n-1})\), \(y_{i}=0,...,q-1\). We assume that the quantum channel can be accurately represented by a substitute channel where \(\mathbf{x}\) and \(\mathbf{y}\) are correlated as a \(q\)-ary symmetric channel since errors are typically uncorrelated and symmetric. The transition probabilities of such a channel are as follows: \[\mathrm{P}(y_{i}|x_{i})=\begin{cases}1-p&y_{i}=x_{i},\\ \frac{p}{q-1}&\text{else}.\end{cases} \tag{1}\] Here, the parameter \(p\) represents the channel transition probability. We refer to the symbol error rate between \(\mathbf{x}\) and \(\mathbf{y}\) as the quantum bit error rate (QBER) in a slight abuse of notation but consistent with experimental works on HD-QKD. In our simulations, we assume the QBER to be an inherent channel property, making it equivalent to the channel parameter \(p\). In addition to the qudits, Alice also sends messages, e.g. syndromes or parity bits, which are assumed to be error-free. From a coding perspective, this is equal to asymmetric Slepian-Wolf coding with side information at the receiver, where the syndrome \(\mathbf{s}\) represents the compressed version of \(\mathbf{x}\), and \(\mathbf{y}\) is the side information. A more detailed explanation of this equivalence can be found in [16], while for an interpretation of Cascade in the context of linear block codes see [17]. Any information leaked to a potential eavesdropper at any point during the quantum key distribution must be subtracted from the final secret key during privacy amplification [18]. The information leaked during the information reconciliation stage will be denoted by \(\mathrm{leak}_{\mathrm{IR}}\). In the case of LDPC codes, assuming no rate adaptation, it can be upper-bounded by the syndrome length in bits, \(\mathrm{leak}_{\mathrm{IR}}\leq m\), with \(m\) being the syndrome length times \(\log_{2}(q)\). In the case of Cascade, it can be upper-bounded by the number of parity bits sent from Alice to Bob [19]. Using the Slepian-Wolf bound [20], the minimum amount of leaked information required to successfully reconcile with an arbitrarily low failure probability in the asymptotic limit of infinite length is given by the conditional entropy: \[\mathrm{leak}_{\mathrm{IR}}\geq n\mathrm{H}(X|Y). \tag{2}\] The conditional entropy (base \(q\)) of the \(q\)-ary symmetric channel, assuming independent and identically distributed input \(X\), can be expressed as \[\mathrm{H}(X|Y)=-((1-p)\mathrm{log}_{q}(1-p)-p\cdot\mathrm{log}_{q}(\frac{p}{ q-1})). \tag{3}\] A code's performance in terms of relative information leakage can be measured by its efficiency \(f\), given by \[f=\frac{\mathrm{leak}_{\mathrm{IR}}}{n\mathrm{H}(X|Y)}. \tag{4}\] It is important to note that an efficiency of \(f>1\) corresponds to leaking more bits than required by the theoretical minimum of \(f=1\), which represents the best possible performance according to the Slepian-Wolf bound. In practice, systems have \(f>1\) due to the difficulty of designing optimal codes, finite-size effects, and the inherit trade-off between efficiency and throughput. In the following sections, we restrict ourselves to \(q\) being a power of 2. 
Both approaches can function without this restriction, but it allows for more efficient implementation of the reconciliation and is commonly seen in physical implementations of the quantum stage due to symmetries. ### Nonbinary LDPC codes #### 2.2.1 Codes & Decoding We provide here a short overview over nonbinary LDPC codes and their decoding based on the concepts and formalism of binary LDPC codes. For a comprehensive review of those, we refer to [22]. Nonbinary LDPC codes can be described by their parity check matrix \(\mathbf{H}\), with \(m\) rows and \(n\) columns, containing elements in a Galois Field (GF) of order \(q\). To enhance clarity in this section, all variables representing a Galois field element will be marked with a hat, for instance, \(\hat{a}\). Moreover, let \(\oplus,\ominus,\otimes\), and \(\oslash\) denote the standard operations on Galois field elements. An LDPC code can be depicted as a bipartite graph, known as the Tanner graph. In this graph, the parity-check equations form one side, called check nodes, while the codeword symbols represent the other side, known as variable nodes. The Tanner graph of a nonbinary LDPC code also has weighted edges between check and variable nodes, where the weight corresponds to the respective entry of \(\mathbf{H}\). The syndrome \(\mathbf{s}\) of the \(q\)-ary string \(\mathbf{x}\) is computed as \(\mathbf{s}=\mathbf{H}\mathbf{x}\). For decoding, we employ a log-domain FFT-SPA [23, 24]. In-depth explanations of this algorithm can be found in [25, 26], but we provide a summary here for the sake of completeness. Let \(Z\) represent a random variable taking values in GF\((q)\), such that P\((Z_{i}=k)\) indicates the probability that qudit \(i\) has the value \(k=0,...,q-1\). The probability vector \(\mathbf{p}=(p_{0},...p_{q-1})\), \(p_{j}=\) P\((Z=j)\), can be converted into the log-domain using the generalized equivalent of the log-likelihood-ratio (LLR) in the binary case, \(\mathbf{m}=(m_{0},...,m_{q-1})\), \(m_{j}=\log\frac{\mathrm{P}(Z=0)}{\mathrm{P}(Z=j)}=\log(\frac{p_{0}}{p_{j}})\). Given the LLR representation, probabilities can be retrieved through \(p_{j}=\exp(-m_{j})/\sum_{k=0}^{q-1}\exp(-m_{k})\). We use \(p(\cdot)\) and \(m(\cdot)\) to denote these transforms. To further streamline notation, we define the multiplication and division of an element \(\hat{a}\) in GF\((q)\) and an LLR message as a permutation of the indices of the vector: \[\hat{a}\cdot\mathbf{m} := (m_{\hat{0}\oslash\hat{a}},...,m_{q^{-1}\oslash\hat{a}}) \tag{5}\] \[\mathbf{m}/\hat{a} := (m_{\hat{0}\otimes\hat{a}},...,m_{q^{-1}\otimes\hat{a}}), \tag{6}\] where the multiplication and division of the indices occur in the Galois Field. These permutations are necessary as we need to weigh messages according to their edge weight during decoding. We further define two transformations involved in the decoding, \[\bar{\mathcal{F}}(\mathbf{m},\hat{H}ij) = \mathcal{F}(p(\hat{H}ij\cdot\mathbf{m})) \tag{7}\] \[\bar{\mathcal{F}}(\mathbf{m},\hat{H}ij)^{-1} = m(\mathcal{F}^{-1}(\mathbf{m}))/\hat{H}ij, \tag{8}\] Figure 1: Example of a qudit using a time-bin implementation [21]. The dimension of the qudit is set by the number of bins grouped together, while the value is determined by the measured arrival time. where \(\mathcal{F}\) represents the discrete Fourier transform. Note that for \(q\) being a power of 2, the Fast Walsh Hadamard Transform can be utilized. 
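For \(q=2^{m}\), the discrete Fourier transform above reduces to a Fast Walsh-Hadamard Transform, and a unit-weight check-node combination becomes a convolution over \(\mathrm{GF}(2)^{m}\). The sketch below shows the probability/LLR conversions and this Hadamard-domain combination for a toy \(q=4\) example; the edge-weight permutations and the full message schedule of the decoder are omitted, so this illustrates only the transform step.

```python
import numpy as np

def llr_from_prob(p):
    """m_j = log(p_0 / p_j), the generalized LLR used above."""
    p = np.clip(np.asarray(p, dtype=float), 1e-12, None)
    return np.log(p[0]) - np.log(p)

def prob_from_llr(m):
    p = np.exp(-np.asarray(m, dtype=float))
    return p / p.sum()

def fwht(a):
    """Fast Walsh-Hadamard Transform of a length-2^m vector (unnormalized)."""
    a = np.array(a, dtype=float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

# Toy q = 4 example: belief on the GF(2)^2 (bitwise-XOR) sum of two symbols,
# obtained as a Hadamard-domain product -- the unit-weight check-node step.
p1 = np.array([0.7, 0.1, 0.1, 0.1])
p2 = np.array([0.6, 0.2, 0.1, 0.1])
p_sum = fwht(fwht(p1) * fwht(p2)) / len(p1)
print(p_sum, prob_from_llr(llr_from_prob(p_sum)))
```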
The decoding process then consists of two iterative message-passing phases, from check nodes to variable nodes and vice versa. The message update rule at iteration \(l\) for the check node corresponding to the parity check matrix entry at \((i,j)\) can be expressed as \[\mathbf{m}_{ij,\mathrm{CV}}^{(l)}=\mathcal{A}(\hat{s}_{i}^{\prime})\bar{ \mathcal{F}}^{-1}(\underset{j\in\mathcal{M}(i)/j}{\mathrm{II}}\bar{\mathcal{F }}(\mathbf{m}_{ij\prime}^{(l-1)},\hat{H}_{ij\prime}),\hat{H}_{ij}), \tag{9}\] where \(\mathcal{M}(i)\) denotes the set of all check nodes in row \(i\) of \(\mathbf{H}\). \(\mathcal{A}\), defined as \(\mathcal{A}_{kj}(\hat{a})=\delta(\hat{a}\oplus k\ominus j)-\delta(a\ominus j)\), accounts for the nonzero syndrome [26]. The weighted syndrome value is calculated as \(\hat{s}_{i}^{\prime}=\hat{s}_{i}\oslash\hat{H}_{ij}\). The a posteriori message of column \(j\) can be written as \[\tilde{\mathbf{m}}_{j}^{(l)}=\mathbf{m}^{(0)}(j)+\sum_{i^{\prime}\in \mathcal{N}(j)}\mathbf{m}_{i^{\prime}j,\mathrm{CV}}^{(l)}, \tag{10}\] where \(\mathcal{N}(j)\) is the set of all check nodes in column \(j\) of \(\mathbf{H}\). The best guess \(\tilde{\mathbf{x}}\) at each iteration \(l\) can be calculated as the minimum value of the a posteriori, \(\tilde{x}_{j}^{(l)}=\mathrm{argmin}(\tilde{\mathbf{m}}_{j}^{l})\). The second message passings, from variable to check nodes, are given by \[\mathbf{m}_{ij,\mathrm{VC}}^{(l)}=\tilde{\mathbf{m}}_{j}^{(l)}-\mathbf{m}_{ij,\mathrm{CV}}^{(l)}. \tag{11}\] The message passing continues until either \(\mathbf{H}\tilde{\mathbf{x}}=\mathbf{s}\) or the maximum number of iterations is reached. To allow for efficient reconciliation for different QBER values, a rate-adaptive scheme is required. We use the blind reconciliation protocol [27]. A fixed fraction \(\delta\) of symbols is chosen to be punctured or shortened. Puncturing refers to replacing a key bit with a random bit that is unknown to Bob, for shortening the value of the bit is additionally send to Bob over the public channel. Puncturing, therefore, increases the code rate, while shortening lowers it. The rate of a code with \(p\) punctured and \(s\) shortened bits is then given by \[R=\frac{n-m-s}{n-p-s}. \tag{12}\] To see how rate adaption influences the bounding of \(\mathrm{leak}_{\mathrm{IR}}\) see [28]. The blind scheme introduces interactivity into the LDPC reconciliation. Given a specific code, we start out with all bits being punctured and send the respective syndrome to Bob. Bob attempts to decode using the syndrome. If decoding fails, Alice transforms \(\lceil n(0.028-0.02R)\rceil\)[29] punctured bits into shortened bits, and resends the syndrome. This value is a heuristic expression and presents a trade-off between the number of communication rounds and the efficiency. Bob tries to decode again and requests more bits to be shortened in case of failure. If there are no punctured bits left to be turned into shortened bits, Alice reveals key bits instead. This continues until either decoding succeeds or the the whole key is revealed. #### 2.2.2 Density Evolution In the case of a uniform edge weight distribution, the asymptotic decoding performance of LDPC codes for infinite code length is entirely determined by two polynomials [30, 31]: \[\lambda(x)=\sum_{i=0}^{d_{\textrm{v, max}}}\lambda_{i}x^{i-1}\quad\rho(x)=\sum_{i=0 }^{d_{\textrm{c, max}}}\rho_{i}x^{i-1}. 
\tag{13}\] In these expressions, \(\lambda_{i}\) (\(\rho_{i}\)) represents the proportion of edges connected to variable (check) nodes with degree \(i\), while \(d_{\textrm{v, max}}\) (\(d_{\textrm{c, max}}\)) indicates the highest degree of the variable (check) nodes. Given these polynomials, we can then define the code ensemble \(\mathcal{E}(\lambda,\rho)\), which represents all codes of infinite length with degree distributions specified by \(\lambda\) and \(\rho\). The threshold \(p_{t}(\lambda,\rho)\) of the code ensemble \(\mathcal{E}(\lambda,\rho)\) is defined as the worst channel parameter (QBER) at which decoding remains possible with an arbitrarily small failure probability. This threshold can be estimated using Monte-Carlo Density Evolution (MC-DE), which is thoroughly described in [32]. This technique repeatedly samples node degrees according to \(\lambda\) and \(\rho\), and draws random connections between nodes for each iteration. With a sufficiently large sample size, this simulates the performance of a cycle-free code. Note that MC-DE is particularly well suited for nonbinary LDPC codes, as the distinct edge weights aid in decorrelating messages [32]. During the simulation, we track the average entropy of all messages. When it falls below a certain value, decoding is considered successful. If this does not occur after a maximum number of iterations, the evaluated channel parameter is above the threshold of \(\mathcal{E}(\lambda,\rho)\). Utilizing a concentrated check node distribution (which is favorable according to [33]) and a fixed code rate, we can further simplify to \(\mathcal{E}(\lambda)\). The threshold can then be employed as an objective function to optimize the code design, which is commonly achieved using the Differential Evolution algorithm [34]. ### Cascade #### 2.3.1 Binary Cascade Cascade [10] is one of the earliest schemes proposed for information reconciliation and has seen widespread use due to its simplicity and high efficiency. Cascade operates in several iterative steps. Alice and Bob divide their strings into top-level blocks of size \(k_{1}\) and calculate their parity, where the size \(k_{1}\) usually depends on the QBER and the specific version of Cascade. They send and compare their parities over a noiseless classical channel. If the parities for a single top-level block do not match, they perform a binary search on this block. There, the block is further divided into two, and parities are calculated and compared again. One of the two sub-blocks will have a different parity than the corresponding sub-block of Alice. We continue the binary search on this sub-block until we reach a sub-block that has size one, which allows us to locate and correct one error per mismatched top-level block. Alice and Bob then move on to the next iteration, where they shuffle their strings and choose new top-level blocks of size \(k_{2}\). They then repeat the binary search on those. After correcting a bit in iteration \(i\), except for \(i=1\), Bob can look for blocks in previous iterations that contain this specific bit. The parity of these blocks has now changed as the bit got flipped, mismatching now with Alice's parity. This allows Bob to perform another binary search on them and correct additional bits. He can then again look for these additional bits in all earlier iterations, allowing for detected errors to "cascade" back. 
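The core primitive of Cascade is the parity-guided binary search. In the sketch below, Alice's parities are modeled as an oracle, and every oracle call corresponds to one parity bit disclosed on the classical channel; block shuffling, the choice of top-level block sizes, and the cascading step are omitted.

```python
# Sketch of Cascade's binary search on a block whose parity disagrees.
# Each call to the parity oracle models one disclosed parity bit.

def parity(bits, idx):
    return sum(bits[i] for i in idx) % 2

def binary_search_error(bob, idx, alice_parity):
    """Return the index of one flipped bit in a block with mismatched parity."""
    while len(idx) > 1:
        half = idx[: len(idx) // 2]
        if parity(bob, half) != alice_parity(half):   # error lies in the first half
            idx = half
        else:                                          # otherwise in the second half
            idx = idx[len(idx) // 2:]
    return idx[0]

# toy example: Bob's key differs from Alice's in a single position
alice = [1, 0, 1, 1, 0, 0, 1, 0]
bob = alice.copy()
bob[5] ^= 1
block = list(range(len(alice)))
if parity(bob, block) != parity(alice, block):
    err = binary_search_error(bob, block, lambda idx: parity(alice, idx))
    bob[err] ^= 1
print(bob == alice)  # True
```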
Successive works on the original Cascade protocol have been trying to increase its performance by either substituting the parity exchange with error-correction methods [35], or by optimizing parameters like the top-level block sizes [36]. All our comparisons and modifications are applied to a high performing modifications [17] achieving efficiencies of up to \(f=1.025\). This version has also been the basis for a recent high-throughput implementation, reaching a throughput of up to 570 Mbps [14]. #### 2.3.2 High-Dimensional Cascade We propose the following modification to use Cascade for high-dimensional data, which we will denote by high-dimensional Cascade (HD-Cascade). We only highlight the differences compared to the best-performing approach[17] in terms of efficiency designed for binary QKD. 1. Initially, we map all symbols to an appropriate binary representation. Prior to the first iteration, we shuffle all bits while maintaining a record of which bits originate from the same symbol. This mapping effectively reduces the expected QBER used for block-size calculations to \(\mathrm{QBER}_{\mathrm{BIN}}=q/(2(q-1))\mathrm{QBER}_{\mathrm{SYM}}\). 2. Upon detecting an error, we immediately request the values of all bits originating from the same symbol, if not already known. The conditional probability to be a one given the values of all previously transmitted bits for these bits is close to \(1/2\). To be precise, it is equal to \(1/2\) for bits that have not yet participated in any parity checks and then varies with the length of the smallest block they participated in [17]. If any of these bits are erroneous, the blocks they have been participating in now have a mismatching parity. We can therefore immediately run the cascading step on those requested bits in all iterations including the current one, detecting more errors. Note that this allows for a cascading process in the first iteration already. 3. The fraction of errors corrected in the first iteration is significantly higher (often \(>95\%\) in our simulations) compared to the binary version. This is due to the possibility of running a cascading process in the first iteration already. Consequently, we need to increase the block sizes for the following iterations as the dimensionality increases, see Table 2. ## 3 Results ### Nonbinary LDPC codes While the code-design and decoding techniques described above are feasible for any dimension \(q\), we focus on \(q=4\), \(8\) as those are common in current implementations [37]. Nine codes were designed with code rates between \(0.50\) and \(0.90\) for \(q=4\) (\(q=8\)), corresponding to a QBER range between \(0\) and \(18\%\) (\(24.7\%\)). We used \(100000\) nodes with a maximum of \(150\) iterations for the MC-DE, the QBER was swept in \(20\) steps in a short range below the best possible threshold. In the Differential Evolution, population sizes between \(15\) and \(50\), a differential weight of \(0.85\), and a crossover probability of \(0.7\) were used. A sparsity of at most \(10\) nonzero coefficients in the polynomial was enforced, with the maximum node degree chosen as \(d_{\text{v, max}}=40\). The sparsity allowed for reasonable optimization complexity, the maximum node degree was chosen to avoid numerical instability which we observed for higher values. The results of the optimization can be found in Table 1 in form of the node degree distributions and their performance according to density evolution. The efficiency was evaluated for the highest sustainable QBER. 
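The ensemble efficiencies reported in this section can be recomputed from a code's rate and its threshold QBER. The sketch below is our reconstruction of the standard definitions for the q-ary symmetric channel (assuming a uniform q-ary key and a syndrome leak of \((1-R)\log_2 q\) bits per symbol), not code from the paper.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def cond_entropy_qsc(qber, q):
    """H(X|Y) of the q-ary symmetric channel, in bits per symbol."""
    return h2(qber) + qber * math.log2(q - 1)

def efficiency(rate, qber, q):
    """Reconciliation efficiency f of a rate-R code over GF(q):
    leak = (1 - R) * log2(q) bits per symbol, f = leak / H(X|Y)."""
    leak = (1 - rate) * math.log2(q)
    return leak / cond_entropy_qsc(qber, q)

# example: a rate-0.8 code over GF(8) operated at 5% QBER
print(round(efficiency(0.8, 0.05, 8), 3))   # approx. 1.41
```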
The all-zero codeword assumption was used for the optimization and evaluation, which holds for the given scenario of a symmetric channel [25]. For all rates, the designed thresholds are close to the theoretical bound. LDPC codes with a length of \(n=30000\) symbols were constructed using Progressive Edge Growth [38], and a log-FFT-SPA decoder was used to reconcile the messages. The simulated performance of the finite-size codes can be seen in Figure 4 for a span of different QBER values, each data point being the mean of \(100\) samples. We used the blind reconciliation scheme for rate-adaption. The mean number of decoding tries required for Bob to successfully reconcile is also shown. The valley pattern visible in the efficiency of the LDPC codes is due to the switching between codes of different rate, and a slight degradation in performance for high ratios of puncturing or shortening. The decoder used a maximum of \(100\) decoding iterations. As expected for finite-size codes, they do not reach the asymptotic ensemble threshold but show sub-optimal performance [26].

Table 1: Degree distributions for 4- and 8-dimensional nonbinary LDPC codes, listing for each code rate the threshold calculated by Density Evolution (DET), the corresponding ensemble efficiency (EEff), and the node degree distribution in edge view.

### High-dimensional Cascade

The performance of HD-Cascade was evaluated on the q-ary symmetric channel for dimensions \(q=4\), \(8\), \(32\), and for a QBER ranging from \(1\%\) to \(20\%\). The results are shown in Figure 2. For comparison, a direct application of the best-performing Cascade modification on a binary mapping is also included. The proposed high-dimensional Cascade uses the same base Cascade with the additional adaptations discussed earlier. For \(q=2\), HD-Cascade reduces to binary Cascade, resulting in equal performance. Both methods use the same block size of \(n=2^{16}\) bits for all cases. The used top-level block sizes \(k_{i}\) for each iteration \(i\) can be seen in Table 2, where \([\cdot]\) denotes rounding to the nearest integer. Additionally, the layered scheme is included as a reference. All data points have a frame error rate below 1% and show an average of 1000 samples. The wave pattern observable for the efficiency of Cascade in Figure 2 and Figure 4 is due to the integer rounding operation when calculating the block sizes. This seems to be unavoidable, as block sizes being a power of two have been shown to be optimal for the binary search in this setting [17]. The increase in both the range and secret key rate resulting from using HD-Cascade instead of directly applying binary Cascade is depicted in Figure 3. The improvement in the relative secret key rate \(r\) obtained using HD-Cascade is shown in Figure 5. This is calculated as \(r=\mathrm{skr_{HD\text{-}Cascade}/skr_{Cascade}}-1\). The used protocols are [39, 40], and experimental parameters for the simulation are derived from [41] for \(q=2\), \(4\), where a combination of polarization and path is used to encode the qudits. For \(q=4\), we also analyzed the performance of HD-Cascade on a subset of the actual experimental data, which confirms the simulated performance. For \(q=8\), \(32\) we used a generalization; additional losses might transpire due to increased experimental complexity, which are not considered in the simulation.
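For reference, the blind rate-adaptive procedure of Section 2.2.1 that produced the decoding-try counts above amounts to the following driver loop. The decoder and the classical messaging are abstracted into a single callable, and the step size \(\lceil n(0.028-0.02R)\rceil\) is the heuristic quoted earlier; everything else is a simplification for illustration.

```python
import math

def blind_reconcile(try_decode, n, delta, rate):
    """Blind rate-adaptive reconciliation driver (sketch).

    try_decode(punctured, shortened, revealed) -> bool stands in for one
    syndrome transmission plus Bob's decoding attempt.
    n     : code length in symbols
    delta : fraction of symbols reserved for puncturing/shortening
    rate  : mother-code rate, used only in the heuristic step size
    Returns the number of decoding attempts, or None if the whole key
    would have to be revealed."""
    d = int(delta * n)
    punctured, shortened, revealed = d, 0, 0        # start fully punctured
    step = math.ceil(n * (0.028 - 0.02 * rate))     # heuristic from the text
    attempts = 0
    while True:
        attempts += 1
        if try_decode(punctured, shortened, revealed):
            return attempts
        if punctured > 0:                           # punctured -> shortened
            move = min(step, punctured)
            punctured -= move
            shortened += move
        elif revealed < n - d:                      # no punctured bits left:
            revealed += min(step, n - d - revealed) # reveal key bits instead
        else:
            return None                             # whole key revealed

# toy usage: pretend decoding succeeds once at most 5% of the frame is punctured
ok = lambda p, s, r: p <= 0.05 * 30000
print(blind_reconcile(ok, n=30000, delta=0.1, rate=0.8))   # -> 6 attempts
```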
\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \(k_{1}\) & & & \multicolumn{3}{c|}{\(\min(2^{\left[\log_{2}(1/\mathrm{QBER_{BIN}})\right]},n/2)\)} \\ \hline \(k_{2}\) & & & & \multicolumn{3}{c|}{\(\min(2^{\left[\log_{2}(2q/\mathrm{QBER_{BIN}})\right]},n/2)\)} \\ \hline \(k_{3}\) & \(k_{4}\) & \(k_{5}\) & \(k_{6}\) & \(n/16\) & \(n/8\) & \(n/4\) & \(n/2\) \\ \hline \end{tabular} \end{table} Table 2: Block sizes used for HD-Cascade. Figure 2: Efficiency of different approaches evaluated on a \(q\)-ary symmetric channel. Layered refers to the layered scheme, Bin to direct application of binary Cascade, and HD to the high-dimensional Cascade proposed in this work. Data points represent the mean of 1000 samples and have a FER of less than 1%. Figure 4: Top: Efficiency of using nonbinary LDPC codes and HD-Cascade for different QBER values for \(q=8\). Bottom: Number of decoding tries in the blind scheme used for the LDPC codes. Figure 5: Relative improvement of the secret key rate for using HD-Cascade compared to binary Cascade. Experimental data provided by recent experiment[41]. ## 4 Discussion ### Nonbinary LDPC codes Nonbinary LDPC codes are a natural candidate for the information reconciliation stage of HD-QKD, as their order can be matched to the dimension of the used qudits, and they are known to have good decoding performance [9]. Although they typically come with increased decoding complexity, this drawback is less of a concern in this context, since the keys can be processed and stored before being employed in real-time applications, which reduces the significance of decoding latency. Nevertheless, less complex decoder algorithms like EMS [42] or TEMS [43] can be considered to allow the usage of longer codes and for increasing the throughput. The node degree distributions we constructed show ensemble efficiencies close to one, \(1.037-1.067\) for \(q=4\) and \(1.024-1.080\) for \(q=8\). Note, that to the best of our knowledge, there is no inherent reason for the efficiencies of \(q=8\) to be lower than for \(q=4\), it is rather just a heuristic result due to optimization parameters fitting better. Although the ensembles we found display thresholds near the Slepian-Wolf bound, we believe that even better results could be achieved by expanding the search of the hyperparameters involved in the optimization, such as the enforced sparsity and the highest degree of \(\lambda\), and by performing a finer sweep of the QBER during density evolution. The evaluated efficiency of finite-size codes shows them performing significantly worse than the thresholds computed with density evolution, with efficiencies ranging from 1.078 to 1.14 for QBER values in a medium range. This gap can be reduced by using longer codes and improving the code construction, e.g. using improved versions of the PEG algorithm [44, 45]. The dependency of the efficiency on the QBER can further be reduced, i.e. flatting the curve in Figure 4, by improving the position of punctured bits [46]. While working on this manuscript, the usage of nonbinary LDPC codes for information reconciliation has also been proposed in [47]. They suggest mapping symbols of high dimensionality to symbols of lower dimensionality but still higher than 2 if beneficial, in similarity to the layered scheme. This can further be used to decrease computational complexity if required. ### HD-Cascade HD-Cascade has improved performance on high-dimensional QKD setups compared to directly applying binary Cascade. 
We can see significant improvement in efficiency, with mean efficiencies of \(f_{\text{HD-Cascade}}=1.06\), 1.07, 1.12 compared to \(f_{\text{Cascade}}=1.22\), 1.36, 1.65 for \(q=4\), 8, 32, respectively. Using the parameters of a recent experimental implementation of 4-dimensional QKD[41], a resulting improvement in range and secret key rate can be observed, especially for higher dimensions. For \(q=32\), an increase of more than 10% in secret key rate and an additional 2.5 dB in tolerable channel loss is achievable according to our simulation results. Our approach demonstrates high efficiency across all QBER values but we noted that the time required for executing the correction increases significantly with higher error rates. Apart from the inherent scaling of Cascade with the QBER that is also present for binary implementations, this is additionally attributable to the immediate cascading of same-symbol bits. While the many rounds of communication required by Cascade have raised concerns about resulting limitations on throughput, recent research has shown that sophisticated software implementations can enable Cascade to achieve high throughput even with realistic latency on the classical channel [14, 15]. We expect HD-Cascade to reach similar rates as its classical counterpart, as we expect the main difference with respect to throughput being an increased difficulty to batch together parity requests for parallelization due to the additional serial cascading for the same symbol bits while keeping the resulting penalty to efficiency minimal. Moreover, we believe that significant improvements in efficiency can still be achieved by further optimizing the choice of block sizes. ### Comparison Before comparing HD-Cascade and nonbinary LDPC codes, we want to mention the layered scheme, a binary LDPC code based scheme introduced in 2013. The layered scheme is based on decoding bit layers separately using \(\lceil\log_{2}(q)\rceil\) binary LDPC codes. It is similar in concept to the multilevel coding and multistage decoding methods used in slice reconciliation for continuous-variable (CV) QKD [48]. While the layered scheme allows for reconciliation using binary LDPC codes only, it brings its own drawbacks, like error propagation, bit mapping, and interactive communication. Its performance can be seen for \(q=32\) in Figure 3, notably for a much smaller block length (data read off Figure 5 [8]). Later experimental implementations report efficiencies of 1.25 [49] (\(q=3\), \(n=1944\), \(p=8\%\)) and 1.17 [50] (\(q=1024\), \(n=4000\), \(p=39.6\%\)). These papers report their efficiencies in the \(\beta\)-notation. \(\beta\) is commonly used in the continuous-variable QKD community, whereas \(f\) is more widespread with respect to discrete-variable QKD. They can be related via \[\beta(\mathrm{H}(X)-\mathrm{H}(X|Y))=\mathrm{H}(X)-f\mathrm{H}(X|Y). \tag{14}\] Overall, HD-Cascade and nonbinary LDPC codes show good efficiency over all relevant QBER values, with HD-Cascade performing slightly better in terms of efficiency (see Figure 4). HD-Cascade shows a flat efficiency behavior over all ranges, compared to the LDPC codes, which have a bad performance for very low QBER values and an increase in performance with increasing QBER. This behavior can also be observed in LDPC codes used in binary QKD [51, 52, 53]. 
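Equation (14) rearranges into a direct conversion between the two conventions. A small helper (our own, assuming a uniform \(q\)-ary key over the \(q\)-ary symmetric channel):

```python
import math

def h2(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def entropies(qber, q):
    """H(X) and H(X|Y) for a uniform q-ary key over the q-ary symmetric channel."""
    hx = math.log2(q)
    hxy = h2(qber) + qber * math.log2(q - 1)
    return hx, hxy

def beta_to_f(beta, qber, q):
    """Convert beta-efficiency to f-efficiency via Eq. (14)."""
    hx, hxy = entropies(qber, q)
    return (hx - beta * (hx - hxy)) / hxy

def f_to_beta(f, qber, q):
    """Convert f-efficiency to beta-efficiency via Eq. (14)."""
    hx, hxy = entropies(qber, q)
    return (hx - f * hxy) / (hx - hxy)

# example: beta = 0.95 for q = 4 at 5% QBER corresponds to f of roughly 1.22
print(round(beta_to_f(0.95, 0.05, 4), 3))
```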
While the focus of this work lies in introducing new methods for high-dimensional information reconciliation with good efficiencies, the throughput is another important measure, especially with continuously improving input rates from advancing QKD hardware implementation. While an absolute and direct comparison of throughput strongly depends on the specific implementation and setup parameters, relative performances can be considered. Cascade has low computational complexity but high interactivity which can limit throughput in scenarios where the classical channel has a high latency. For a constant efficiency, as approximately observed for Cascade, the number of messages exchanged scales with the QBER as it is proportional to \(\mathrm{H}(X|Y)\). Nonbinary LDPC codes, on the other hand, have low requirements on interactivity (usually below 10 syndromes per frame using the blind scheme) but high computational costs at the decoder. Their decoding complexity scales with \(q\) but not with the QBER, as its main dependence is on the number of entries in its parity check matrix and the node degrees. It should be noted that the QBER is usually fairly stable until the loss approaches the maximum range of the setup, e.g. see Figure 3, and that higher dimensions tend to operate at higher QBER values. It should be emphasized that for QKD, latency is not a big issue as keys do not need to be available immediately but can be stored for usage. QKD systems are usually significantly bigger and more expensive than setups for classical communication. This allows for reconciliation schemes with comparatively high latency and high computational complexity, for example by extensive usage of pipelining [54, 14, 55]. ## 5 Conclusion We introduced two new methods for the information reconciliation stage of high dimensional Quantum Key Distribution. The nonbinary LDPC codes we designed specifically for the \(q\)-ary symmetric channel allow for reconciliation with good efficiency with low interactivity. High-dimensional Cascade on the other hand uses a highly interactive protocol with low computational complexity. It shows significant improvement compared to directly applying Cascade protocols designed for binary Quantum Key Distribution, e.g. more than 10% for a 32-dimensional system for all possible channel losses. The Center of Excellence SPOC (ref DNRF123). ## Competing Interests The authors declare no competing financial or non-financial interests. ## Data Availability All data used in this work are available from the corresponding author upon reasonable request.
2301.00964
e-Inu: Simulating A Quadruped Robot With Emotional Sentience
Quadruped robots are currently used in industrial robotics as mechanical aid to automate several routine tasks. However, presently, the usage of such a robot in a domestic setting is still very much a part of the research. This paper discusses the understanding and virtual simulation of such a robot capable of detecting and understanding human emotions, generating its gait, and responding via sounds and expression on a screen. To this end, we use a combination of reinforcement learning and software engineering concepts to simulate a quadruped robot that can understand emotions, navigate through various terrains and detect sound sources, and respond to emotions using audio-visual feedback. This paper aims to establish the framework of simulating a quadruped robot that is emotionally intelligent and can primarily respond to audio-visual stimuli using motor or audio response. The emotion detection from the speech was not as performant as ERANNs or Zeta Policy learning, still managing an accuracy of 63.5%. The video emotion detection system produced results that are almost at par with the state of the art, with an accuracy of 99.66%. Due to its "on-policy" learning process, the PPO algorithm was extremely rapid to learn, allowing the simulated dog to demonstrate a remarkably seamless gait across the different cadences and variations. This enabled the quadruped robot to respond to generated stimuli, allowing us to conclude that it functions as predicted and satisfies the aim of this work.
Abhiruph Chakravarty, Jatin Karthik Tripathy, Sibi Chakkaravarthy S, Aswani Kumar Cherukuri, S. Anitha, Firuz Kamalov, Annapurna Jonnalagadda
2023-01-03T06:28:45Z
http://arxiv.org/abs/2301.00964v1
e-Inu: Simulating A Quadruped Robot With Emotional Sentience ### Abstract Quadruped robots are currently used in industrial robotics as mechanical aid to automate several routine tasks. However, presently, the usage of such a robot in a domestic setting is still very much a part of the research. This paper discusses the understanding and virtual simulation of such a robot capable of detecting and understanding human emotions, generating its gait, and responding via sounds and expression on a screen. To this end, we use a combination of reinforcement learning and software engineering concepts to simulate a quadruped robot that can understand emotions, navigate through various terrains and detect sound sources, and respond to emotions using audio-visual feedback. This paper aims to establish the framework of simulating a quadruped robot that is emotionally intelligent and can primarily respond to audio-visual stimuli using motor or audio response. The emotion detection from the speech was not as performant as ERANNs or Zeta Policy learning, still managing an accuracy of 63.5%. The video emotion detection system produced results that are almost at par with the state of the art, with an accuracy of 99.66%. Due to its "on-policy" learning process, the PPO algorithm was extremely rapid to learn, allowing the simulated dog to demonstrate a remarkably seamless gait across the different cadences and variations. This enabled the quadruped robot to respond to generated stimuli, allowing us to conclude that it functions as predicted and satisfies the aim of this work. Keywords: Automated Gait Generation, Deep Learning, Emotion Recognition, Reinforcement Learning, Robot Simulation. ## 1 Introduction Development in robotics has been a fundamental stepping-stone in the progression of humanity. However, unlike most new technologies, robotics is still mostly limited to industrial or professional use, with the rather minuscule exceptions of home automation, smart home cleaners, etc. With that in mind, the need for pets, more specifically dogs, has grown profusely - both for emotional and safety needs. It can be quite difficult to care for an organic pet in a modern fast-paced life. To that end, we focus our aim on simulating an artificially intelligent quadruped robot [37] using PyBulet, modelled to serve the domestic needs of a sentient, albeit artificial, dog. Three fundamental problems accompany the development of the said robot, viz., emotion analysis, environmental awareness, and automated gait generation. The objectives are 1. To design a system that automates the detection of underlying emotions in speech, tonality, and facial expressions of humans present in the scene, 2. To detect the direction of sound sources and obstacles and, 3. To generate the necessary gait to travel towards the sound source 4. To generate the required audio-visual responses for the situation. This research discusses the existing ideas or models in the fields of Convoluted Neural Networks (CNN), Recurrent Neural Networks (RNN), Reinforcement Learning (RL), and some software engineering to meet the end goals, such as the use of TDoA. The primary goal is to focus on the emotional aspect of having a pet dog, not the guardian aspect for this paper. A point to note is that this paper does not develop the robotics of the state-of-the-art machines already established in the market. 
This paper discusses an alternate pathway in which these quadruped machines can be used in a domestic setting, developing the more utopian Neuromancer [1] version of the future. To this end, we endeavour to make a quadruped system capable of a certain degree of emotional intelligence, allowing the robot to feel much more natural. ## 2 Review of Previous Related Work A major inspiration in the research has been similar quadruped robots with more industrial use cases such as the ANYmal [2] by ANYbotics, Spot [3] by Boston Dynamics, and AlienGo [4] by Unitree Robotics. These quadrupeds were designed to be mainly robust and cover different scenarios that might crop up in an industrial scenario. However, e-Inu implements several key aspects common to all quadrupeds: gait generation and optimization, obstacle avoidance, and route planning. A similar project on the ground of quadruped bots was carried out by Marc Raibert, Blankespoor, Nelson et al.[5], Big Dog, whose goal was to make autonomous quadruped robots that resembled dogs. One of the recent papers that we explored was by Deng et al.[6] Sparse Autoencoder-based Feature Transfer Learning for Speech Emotion Recognition. According to their findings, training and test data utilized for system development in speech emotion recognition typically fit each other precisely, but additional'similar' data may be accessible. Transfer learning enables the use of such similar data for training despite underlying differences to improve the performance of a recognizer. Their research provided a sparse autoencoder technique for feature transfer learning for speech emotion identification, which learns a common emotion-specific mapping rule from a limited margin of labelled data in a target domain. Then, newly reconstructed data were obtained using this method on emotion-specific data from a different domain. The experimental findings on six typical databases demonstrated that their approach greatly outperforms learning each source domain independently. The basic idea behind the sparse autoencoder-based feature transfer learning method was to use a single-layer autoencoder to find a common structure in small target data. Then apply that structure to reconstruct source data to complete useful knowledge transfer from source data into a target task. They used the reconstructed data to create a speech emotion identification engine for a real-world problem presented by the Interspeech 2009 Emotion Challenge. The proposed technique effectively transfers knowledge and improves classification accuracy, according to experimental results with six publicly accessible corpora. In the work of Lu et al. [7], as a downstream task, they proposed using pre-trained features from end-to-end ASR models to perform speech sentiment analysis. End-to-end ASR features that use both acoustic and text information from speech produced encouraging results. As the sentiment classifier, an RNN with self-attention was used, which also gave an accessible visualization using attention weights to help comprehend model predictions. The IEMO-CAP dataset and a new large-scale speech sentiment dataset SWBD-sentiment were employed for evaluation. With over 49,500 utterances, they increased the state-of-the-art accuracy on IEMO-CAP from 66.6%to 71.7% and reached an accuracy of 70.10% on SWBD-sentiment. As a result, it was proved that pre-trained features from the end-to-end ASR model are useful for sentiment analysis. However, the work of Lu et al. 
[7] was the fundamental motivation for our model to analyze emotions from audio/ speech. Their work demonstrated a spoken emotion identification system based on a recurrent neural network (RNN) model taught by a fast learning algorithm. It considered the long-term context influence as well as the unpredictability of emotional label expressions. A robust learning method using a bidirectional long short-term memory (BiLSTM) model was used to extract a high-level representation of emotional states in terms of their temporal dynamics. To avoid the ambiguity of emotional labels, it was assumed that the label of each frame was viewed as a sequence of random variables so that all frames in the same speech were mapped into the same emotional label. The proposed learning algorithm was then used to train the sequences. When compared to the DNN-ELM-based emotion identification system utilized as a baseline, the suggested emotion recognition system improved its weighted accuracy by up to 12%. Their method revealed how recurrent neural networks and maximum-likelihood-based learning techniques might be used to improve emotion recognition. This is also when we learned about Mel Frequency Cepstral Coefficients and their application as a feature extractor from audio. We discovered the works of Mermelstein [8], Davis and Mermelstein [9], and Bridle and Brown [10] as we delved deeper. Maghilnan and Kumar [11] talked about a sentiment analysis algorithm that uses features taken from the voice stream to discern the moods of the speakers in the conversation. Their process included pre-processing with VAD, implementing their Speech Recognition System, implementing their Speaker Recognition System with MFCC, and ultimately implementing their Sentiment Analysis System. Their research offered a generalized model that takes an audio input including a discussion between two persons and examines the content and identity of the speakers by automatically translating the audio to text and performing speaker recognition. The system performed well with the artificially generated dataset; they were working on gathering a larger dataset and boosting the system's scalability. Though the system was accurate in understanding the sentiment of the speakers in a conversational discussion, it could only manage conversations between two speakers speaking one at a time. Moving on to the next part of the emotion recognition system, facial emotion detection, several works have explored this particular problem. One of the first works in detecting emotion from images was done by Gajarla and Gupta [12]. The approach used in this methodology relied upon the large pre-trained object detection model VGG16 [13] which allowed for the facial detection model to be quickly trained since using VGG16 meant that the model already knew how to pick up cues from an image. This transfer learning method allowed Gajarla and Gupta [12] to work very well by replacing the final layer of the VGG16 model for a different classification layer. The image features were once extracted using VGG16; we passed to an SVM that allowed the overall model to be trained on the task of detecting emotions. This work also explored different architectures such as using the One vs All SVM approach with different VGG16 weights as well as using ResNet50. Gajarla and Gupta [12] obtained impressive results of 73% using the resNet50 model on a dataset consisting of images scraped from "Flickr". 
One of the main issues faced during this work was the fact that since the images were scrapped from Flickr, the labelling of the images is quite subjective and the ambience and lighting of an image could very easily throw off the model. Another approach to facial emotion detection was carried out by Dachapally [14] using representational autoencoders units (RAU). The autoencoders allow for a unique representation to be formed for different emotions allowing the images to be easily differentiated. According to their findings, the use of autoencoders to form generalized encoded features of the different emotions since the model looks at different faces whilst training. This methodology also worked well since Dachapally [14] used the JAFFE dataset which has 215 images of 10 different female models posing for 7 emotions. All the images in the training set were of Japanese women, so, all the samples come from the same ethnicity and of the same gender. This fact allows the autoencoders to perform daily decently compared to the transfer learning methodologies discussed by Gajarla and Gupta [12]. The work done by Bargal et al.[15] used an approach very different from the aforementioned approach to the facial emotion recognition problem. This approach used a system that passed the image through three different pre-trained image models, namely VGG13, VGG16, and ResNet50. This parallelization during the extraction of the features from the image model allows the images to be broken down to different levels, resulting in much richer features when compared to using just one pre-trained model. The features from these three models are then individually passed through Signed Square Root (SSR) and L2 Norm before being combined again. This work used the EmotiW'16 Dataset and additional data collected by crawling the web for images tagged with emotion classes. Using this methodology of parallelizing the image feature extraction, Bargal et al. [15] achieved 56.66% on the test dataset, improving over the baseline by a little over 16%. This work also showcased the differences in using a single pre-trained model over parallelization during feature extraction, with the parallel model achieving around 2% more accuracy when using just the VGG16 model. Several methods have been implemented to optimize both the smoothness and the speed of a gait of a quadruped robot. The initial approach for this goal was heavily dependent on human observations to plan out the path of the leg that offered the best leg movement. Recently, however, the approaches have started implementing deep reinforcement learning to allow for the model to be created without much human intervention. One of the first approaches to gait analysis, Hengst et al. [16] was dependent on human observations to make a fixed path for the leg to move. This fixed path was configured using three aspects of the gait - movement, speed, and leg stance. These parameters allowed [16] to form the four corners of a rectangle, Figure 1, which laid out the gait's locus and the ability to align the rectangular locus with the robot's body. The robot's leg was moved about the locus using a series of waypoints that were generated per the movement parameters. These waypoints were divided into two equal groups - one for the groundstroke and the other for the rest of the rectangle. 
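To make the locus idea concrete, the sketch below generates such waypoints for an axis-aligned rectangle, with half of the points placed on the ground-stroke edge. This is only an illustration of the concept described in [16], not their implementation, and the rectangle dimensions are arbitrary example values.

```python
def rectangular_locus_waypoints(width, height, n_points):
    """Generate foot waypoints around a rectangle: the first half of the
    points sweep the bottom edge (ground stroke), the second half traces
    the remaining three sides (recovery)."""
    half = n_points // 2
    pts = []
    # ground stroke: left to right along the bottom edge
    for i in range(half):
        pts.append((width * i / (half - 1), 0.0))
    # recovery: up the right side, across the top, down the left side
    perimeter_rest = 2 * height + width
    for i in range(n_points - half):
        d = perimeter_rest * i / (n_points - half - 1)
        if d <= height:                      # right edge, moving up
            pts.append((width, d))
        elif d <= height + width:            # top edge, moving right to left
            pts.append((width - (d - height), height))
        else:                                # left edge, moving down
            pts.append((0.0, height - (d - height - width)))
    return pts

# e.g. a 60 mm x 20 mm locus sampled with 12 waypoints
waypoints = rectangular_locus_waypoints(width=60.0, height=20.0, n_points=12)
```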
Shortly after, Kim and Uther [17] improved on the Rectangular Locus method by creating a new quadrilateral walk locus described by four offsets from the original locus, Figure 2, with the speed of the robot's feet kept constant around the new locus. This new gait was inherently smoother due to the increase in the number of locus points, and the locus was optimized using Powell's (direction set) method for multidimensional minimization, as outlined in Press et al. [18]. The previous methods can produce satisfactory results, but require a lot of time and human resources for the path to be properly understood and configured to allow for smooth movement of the leg and the robot overall.

Figure 1: Rectangular Locus (adapted from [16])

Figure 2: Optimized Rectangular Locus (adapted from [17])

Clune, Beckmann, Ofria, et al. [19] came up with a new generative encoding methodology for evolving neural networks - HyperNEAT. HyperNEAT was especially impressive compared to the previous methods since it required absolutely no manual tuning: the neural networks eventually find a possible solution to the gait on their own. An added advantage was that generative encoding can more easily reuse phenotypic modules, resulting in better leg coordination. Owaki & Ishiguro [20] followed an entirely different ideology compared to the previous methods. Their approach was based on the fact that the movement of the limbs on a quadruped follows a repeating pattern that can be simulated using a "Central Pattern Generator" (CPG) model. Past experiments involving decerebrate cats indicate that cats also have a biological CPG in their spinal cord [21], [22]. This method also allowed for spontaneous gait transition, from walking to trotting to cantering to galloping, which could be achieved by changing one parameter related to speed while maintaining balance and keeping the movement cost low. Lodi, Shilnikov, and Storace [23] also worked on the fact that quadrupeds follow a CPG for interlimb communication and synchronization. They proposed a method for designing and analyzing CPGs, based on multi-parameter bifurcation theory, using a recently proposed software tool called CEPAGE [24]. The method is applied to two CPGs, one bio-inspired and one purely synthetic. In both cases, the analysis of the CPGs allows for a way to easily obtain different gait sequences by tweaking the bifurcation parameters. While the proposed method involves some level of human intervention, as it is mainly a rule-based design approach, CPGs produce rhythmic patterns that allow for much easier analysis of the gait and the possibility of spontaneous gait transitions.

This paper, in contrast, discusses the development of a domestic-savvy quadruped. The robot in discussion can understand emotions from visual and auditory stimuli and respond to the same with various motor and auditory responses, along with an LCD screen to make humans more receptive to it.

## 3 The proposed Architecture: e-Inu

The e-Inu architecture incorporates four discrete modules to achieve the desired goals. The Emotion detection module is responsible for all aspects of emotion detection, both audio and visual. The TDoA module gives the robot dog positional awareness. The gait generation module is responsible for generating the gait and mapping the robot's movement from one position to another. And finally, the audio-visual feedback module helps the robot communicate back with the human and the environment. The components are discussed in the following segments.
### Emotion Detection The emotion detection module considers both the audio and visual inputs as external stimuli and uses separate deep learning networks to compute the same. For the emotion analysis on audio features, we use Mel Frequency Cepstral Coefficients that are extracted from the generated audio and pass it through a pair of LSTM layers sandwiched by four dense layers, two on each side. As for the emotion analysis on video features, we use Haar Cascade to detect facial structures, OpenCV, and a no-top version of VGG16 trained on places365, whose inputs are flattened and passed through dense layers. MFCCs jocularly referred to in the academic circle as the "Most-Frequently Considered Coefficients", are usually the one-does-it-all when it comes to audio data processing. Any sound produced by humans is dictated by the form of their vocal tract, as also the tongue, teeth, etc. If this form is precisely identified, every sound generated by humans can be appropriately described. The envelope of the temporal power spectrum of a speech signal depicts the vocal tract, and MFCC represents just that. The first thirteen coefficients, or the lower dimensions of MFCC, are considered features representing the aforementioned spectral envelope. The higher dimensions express further details about the spectral features. For various phonemes, envelopes are sufficient to express the difference, allowing us to recognize phonemes using MFCC. But in our case, given that more data is good data, we experimentally arrived at the inference that the forty-dimensional features of MFCCs do the best job. MFCC is a time series, as data is represented sequentially with the y-axis representing frequency and the x-axis representing time. And it is widely recognized that LSTM does a very good job of drawing statistical inferences from the said time series. A transformer can also fit the job description very well, but it will be an overhaul of computational resources, and simply unnecessary. Transformers are only economically ideal when it comes to extremely long sequences. But in our case, the sequence length per time step is only forty. Convolutional Networks (CNNs) show the most promise as with most computer vision-related tasks. The performance of CNNs in general computer vision tasks improves when the architecture is made deeper or wider. This trend also translates to the task of facial emotion recognition. The VGG-16 is a CNN with a depth of 16 layers that won ILSVRC (Imagenet), 2014. One may load the pre-trained instance of the network that has been trained on over a million photos from the ImageNet database. The pre-trained network can categorize photos into thousands of different categories. As a consequence, the network has learnt rich feature representations for a diverse set of pictures. The network's picture input size is 224 by 224. However, experimentally we found that the Places-365 version of VGG16 does a better job of facial emotion recognition than the Imagenet variant. This can primarily be attributed to the fact that the Places365 database (which is the latest version of the Places2 Database) does a better job of training CNNs for scene recognition. That said, since we are extracting the deep scene features from the higher-level layers of VGG-16-Places-365, they augment the performance in identifying generic features for facial feature recognition useful for emotion identification. 
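To make the audio front end concrete: the 40-dimensional MFCC features described above can be extracted with librosa. The file name below is an illustrative RAVDESS-style name, and whether the coefficients are kept as a full time series or averaged over time is a design choice not spelled out here; the sketch returns both forms.

```python
import numpy as np
import librosa

def extract_mfcc(path, n_mfcc=40):
    """Load an audio file and return its MFCC matrix (n_mfcc x frames)
    together with a time-averaged 40-dimensional summary vector."""
    y, sr = librosa.load(path, sr=None)                  # keep the native sample rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc, np.mean(mfcc, axis=1)

# illustrative RAVDESS-style file name, not a specific file used in this work
mfcc_matrix, mfcc_vector = extract_mfcc("03-01-05-01-01-01-01.wav")
```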
#### 3.1.1 Dataset For this purpose, we used a combination of four datasets for emotion recognition in audio files, and one for emotion recognition from video (processed as still photo frames in practicality). All combinations of these fundamentally had the same 7 cardinal emotions: anger, neutral/contempt, disgust, fear, sadness, happiness, and surprise. The datasets used for audio were RAVDESS [25], CREMA-D [26], SAVEE [27] and TESS [28]. For video (still photographs), we used CK+ [29]. #### 3.1.2 Model The audio input is processed and Mel frequency cepstral coefficients (MFCCs) are extracted with 40 dimensions. These are stored in a NumPy data array as input variables. The labels are stored similarly to the output variables. The model consists of eight layers. The first one is a standard input layer of dimensionality [1, 40]. The next is a dense layer of 64 nodes, followed by another dense layer with 128 nodes. To increase the feature space, thus allowing for more efficient backpropagation and better learning, we tend to use dense layers that connect the input layer to our next and probably the most important segment of the model - the LSTMs. Now, as to why we use different sizes of the layers, if we directly scale up from 40 to 128 just to increase the feature space, we noticed that using two fully connected layers with 128 nodes between the input node and the LSTMs led to significant instability and a drop in the accuracy metric. Thus, we take a more measured and graded approach. As discussed before, the inputs from the dense layer with 128 nodes are fed to two LSTM layers each with 128 nodes and sent forward to two other dense layers with 128 and 64 nodes respectively. This approach towards using two LSTMs stacked up on one another is called a hierarchical or stacked LSTM model. This allows for the hidden states from the first LSTM to propagate into the second. Stacking the LSTM layers deepens the model, more correctly defining it as a deep learning Figure 3: Model Pipeline for Emotion Detection from Audio Features approach. The depth of neural networks is often ascribed to the approach's performance on a wide range of difficult prediction tasks. And in this case, although it makes the model more computationally expensive, it also gives us the required accuracy boost. As for the usage of the dense layers after the stacked LSTMs, it is because the output of an LSTM cannot directly be passed into a softmax activation function. They usually also output the hidden internal state \(h\), equating the number of units to the dimensionality of this output. This is usually not the desired dimensionality of the required output layer, which is seven in our case. When we specify a dense layer(s) after the LSTM(s), it corresponds to \[\text{y(t)}=\text{W * h(t)} \tag{1}\] where y(t) are the logits one needs to pass to a softmax layer, and W is simply the weight connection matrix of this last layer. As before, the graded approach in the reduction of node size in the dense layers allows us to stem instability in the model, and gradually bring the node size down to the required cardinality of the output layer. The final layer is another dense layer with 7 nodes, which constitutes our output layer. For the output layer, softmax is used, and for the rest of the dense layers, ReLU is used as the activation function. The standard dropout used is 20%. This is depicted pictorially above in Figure 3. As for the emotion detection from video, we make use of the VGG16 architecture. 
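Before moving on to the video model, the audio branch just described can be written down in a few lines of Keras. The layer sizes, ReLU/softmax activations, Adam optimizer, and categorical cross-entropy loss are taken from the text; the exact placement of the 20% dropout is not fully specified, so its position below is an assumption.

```python
from tensorflow.keras import layers, models

def build_audio_emotion_model(n_mfcc=40, n_classes=7):
    model = models.Sequential([
        layers.Input(shape=(1, n_mfcc)),          # input of dimensionality [1, 40]
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.2),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.2),
        layers.LSTM(128, return_sequences=True),  # stacked ("hierarchical") LSTMs
        layers.LSTM(128),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.2),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.2),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_audio_emotion_model()
model.summary()
```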
In our work, we used the VGG16-places365 model with no top layers (using the first 13 layers of the model). After using a Haar Cascade for facial feature detection, the images were processed as 224x224 in pixel Figure 4: Model Pipeline for Emotion Detection from Video Features dimensionality. The output of the VGG16 layers was flattened and then fed to a dense layer with 512 nodes, ReLU activation, and a drop out of 20%. Then it was again fed to a dense layer of ReLU activation, but consisting of 256 nodes. This is then finally passed to our output layer with 7 nodes and softmax activation, Figure 4. The dense layers before the final output layer function in a similar way to the audio emotion detection model described above. The relatively gradual decrease in layer cardinality of the nodes helps in reducing instability. These also aid the VGG-16 flattened output by enhancing the classification power of the model as a whole. To address the situation where we may get output registered from both the emotional detection models, we use a ranking system to choose the emotion with the highest priority as the final registered output, as seen in Table 1. ### Time Difference of Arrival (TDOA) Detection Sound localization is imperative in this work as to be able to close the gap between a real dog and a robotic construct, the robot must be able to intuitively be able to respond to the vocal prompts by turning around to face the direction of the speaker. Most of the work done in this regard works on the principle that sound localization can be achieved by using an array of microphones that pick up the sound waves that bounce back off the walls and objects. By capturing the sound using different microphones, the Time Difference of Arrival (TDoA) [30], [31] can be calculated to find the direction and distance from the sound source. TDoA itself is quite a simple concept which uses the time difference at which the different microphones in the microphone array pick up the same sound. Since the distance and angles between the microphones themselves are known, it is possible to calculate the distance and the direction of the point of the origination of the sound, Figure 5. \begin{table} \begin{tabular}{|c|c|} \hline Rank & Emotion \\ \hline 1 & anger \\ \hline 2 & disgust \\ \hline 3 & fear \\ \hline 4 & sadness \\ \hline 5 & surprises \\ \hline 6 & happiness \\ \hline 7 & neutral/contempt \\ \hline \end{tabular} \end{table} Table 1: Priority rank for emotions The calculation of TDoA can be done with only two microphones, however more are necessary to determine whether the sound source is originating from in front of the array or from the back. To this end, we use four microphones arranged in a cross-like manner, Figure 6, which allows the simulated robot to successfully identify the distance and direction of the sound originating anywhere. TDoA is implemented by starting a timer as soon as any one of the microphones picks up a sound signal and the timer stops when the microphone directly opposite to the first microphone picks up the same sound. The resulting time is the TDoA between the pair of microphones, this can then be used in the following formulae to calculate the angle from the first microphone from which the sound is originating. 
\[\theta=\arccos(\frac{\Delta t\ast\nu}{d}) \tag{2}\] where \(\theta\) is the angle of elevation from mic 1 to the source of sound (considering, mic 1 and mic 2 lie on the x-axis), \(\Delta t\) is the difference of arrival times (for sound) between the two microphones, \(\nu\)is the velocity of sound, and d is the shortest distance between them. Figure 5: TDOA for two microphones Figure 6: Placement of microphones (Seen from top view) ### Gait Generation In this work, we use a two-joint leg structure along with the implementation of Proximal Policy Optimization (PPO)[32] for the actual gait generation. As seen from the works done by Hengst, Ibbotson, Pham, et al.[16] and Kim & Uther [17], one can calculate and decide on a locus for the path of the robots' legs in the simulation, however, such practice is quite impractical. This is because a locus will not be able to adjust for the variance in terrain surfaces smoothly. For this reason, this work uses the PPO algorithm, a popular Reinforcement Learning method that trains the model by learning the experience from small batches of the training simulation. The learnt experience is then used to update the decision-making policy. Once the model has gone over the entire train set, the learnt mini-batch experiences are discarded and a new batch is trained upon using the updated policy. This "on-policy learning" ensures that information from bad training batches does not propagate Forward too much, making an awkward or unusable gait for the quadruped. While this does mean that there is a lot less variance in the training, the final result will be a much smoother training process and also ensure that the model does not develop any habits in the gait that cause senseless actions. In model-based RL a model of the environment is learned, which enables the agent to plan. The transition probability distribution and the reward function constitute the model. Model-free RL manages to infer without such a model. Model-based RL enables agents to weigh available actions and their implications explicitly. This allows an agent to plan and leads to sample efficiency during training. It was successfully used in AlphaZero, a program that mastered e.g. Go or Shogi. However, in many application cases, it seems to overfit to the point where models completely fail in a real environment. While being a less sample-efficient model-free RL is easier to implement and less prone to overfitting. Model-free RL is further divided into the families of Policy Optimization and Q-Learning. With Q-Learning an Optimal Q-function is estimated and optimized. While being more sample-efficient, performance stability is dependent on how well the Optimal Q-function can be estimated. Policy Optimization on the other hand optimizes the agent performance directly, resulting in more stable and reliable performance. For Policy Optimization a policy is explicitly represented and optimized to maximize return. This paper uses the Proximal Policy Optimization (PPO) algorithm with a hybrid policy defined as a(t) - user policy, \(\pi\)(o) - feedback. This hybrid policy allows the model to be changed easily from being fully user-specified to completely dependent on the gait learnt whilst training. For example, for a completely user-defined model, setting the feedback component's upper and lower bounds to 0, we ensure that no information that is learnt while training is propagated to the next train batch. 
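Returning briefly to the TDoA module, equation (2) above reduces to a one-line computation per microphone pair. A minimal sketch (the 10 cm microphone spacing and the 150 microsecond delay are illustrative values, not parameters taken from the paper):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

def tdoa_angle(delta_t, mic_distance, v=SPEED_OF_SOUND):
    """Angle of the sound source relative to the microphone-pair axis,
    computed from the time difference of arrival as in Eq. (2)."""
    ratio = (delta_t * v) / mic_distance
    ratio = max(-1.0, min(1.0, ratio))   # clamp against measurement noise
    return math.degrees(math.acos(ratio))

# e.g. a 150 microsecond delay between two mics placed 10 cm apart
print(round(tdoa_angle(150e-6, 0.10), 1))   # approx. 59 degrees
```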
An OpenAI gym environment, is used to learn how to gracefully assume a pose avoiding too fast transactions. OpenAI's Gym allows us to easily create and use a physics environment based on the simulation. While Gym does have many pre-existing environments for many different use-cases, we needed to modify the environment till we stripped everything but the physics. We then make a new actor to represent the dog, which our PPO model then controls. It uses a one-dimensional action space with a feedback component \(\pi\)(o) with bounds [-0.1, 0.1]. The feedback is applied to a sigmoid function to orchestrate the movement. When the -playground flag is used, it's possible to use the pyBullet UI to manually set a specific pose altering the robot base position (x,y,z) and orientation (roll, pitch, jaw). ### Audio-Visual Feedback The feedback is provided through rather elementary software engineering. An LCD and a speaker output feedback mapped to each emotion. Although rather trivial, we believed it to be a necessary touch to offer a more natural final product. ## 4 Experimental Setup As the primary goal of this work is to create a simulation of a dog-like quadruped robot, the standard environment for gait simulation had to allow for inputs that are usually not needed. For the emotion recognition, we used a fairly simple set-up - a webcam for the video emotion recognition and a microphone for the audio emotion recognition modules. These two inputs were passed through to a modified gym environment to allow for the simulation to run inside a single system. The audio passthrough also lets us implement TDoA inside the simulation, and we implemented a point-and-click method of introducing the sound source in the environment to test the same. The terrain is simulated in the gym environment to be random, to allow for accurate testing of the robot. The TDoA also allows us to orient the robot in terms of autonomous gait generation through the multi-terrain system in the simulation. However, we did not use a particular metric to judge the quality of locomotion in the random terrain environment. It was a question of whether can or cannot. As for the audio and visual feedback, these were obtained by outputting a.wav track recording of dog sounds corresponding to the emotion inferred and a Tkinter GUI screen to show a loop of translated LCD controller code to recreate the images we made to show the robot's emotion (in response to the emotion inferred) in an LCD screen set on what analogically would be its face. A four-microphone system is used to get spatial audio. Alongside the audio emotion recognition module, the inputs are fed parallelly to the TDOA module to get an estimate of the direction of the source of the sound. An RGB camera is used to take video stills, and Haar Cascade is used to extract facial features. These are then passed on to the emotion analysis module to get the video and audio emotion inference. As discussed prior, we use a hardcoded table assigning each of the seven cardinal emotions to a priority-based ranking system, in terms of how important it is to react to the emotion. Now the emotional inferences are processed and the emotion with a higher priority is acted upon (between the two most probable emotions inferred from the pair of emotion recognition modules). For non-urgent emotions, the squat position is triggered and further feedback is provided through the LCD and speaker. 
For a more urgent emotion inference in the simulation like anger or sadness, the robot locomotes towards the source of the sound while providing feedback through the LCD and speaker ensemble. ## 5 Results and Analysis The emotion detection module for audio input achieved an overall test accuracy of 63.47%, Figure 7, over the compiled dataset including RAVDESS, CREMA-D, SAVEE, and TESS. The module for the emotion detection from video input did surprisingly well with an accuracy of 99.66%, Figure 8, in the CK+ dataset, beating the previous 3rd highest (in 7 emotions accuracy), and lagging behind the 2nd highest accuracy score just by roughly.04% as per the benchmark in paperswithcode.com [32], not considering approximation. We would like to make a note here, that although the FN2EN [33] tops the dataset leaderboard in paperswithcode.com, this is attributed to their highest accuracy in the 8 emotion classes category with an accuracy of 96.8%, while it has an accuracy of 98.6% in the 6 emotions category. They do not have available test scores in the 7 emotions category. Because our fundamental end-goal is to perform the task of classification, Categorical Cross-Entropy works best for our loss function in both types of emotion recognition, viz., from facial features and speech. Figure 7 shows that the model converges around the 48th epoch. There it stabilizes around the 60% mark of validation accuracy. But the model tends to overfit a little, shooting towards the 68% mark (in the training accuracy metric). The same is reflected in the loss function graph. The fact that the validation loss and accuracy start decoupling from the train loss and accuracy suggests that the bias-variance tradeoff starts getting skewed. After the 48th epoch, the model tends to get biassed towards the training dataset. With that said, however, experimentally the results showcased in, Figure 7, were the best results that we obtained from several different variations of the model, Figure 3, not considering the training epoch length. Figure 8 shows spikes forming between the 20th and the 27th epochs. These anomalies are an inevitable side effect of Adam's Mini-Batch Gradient Descent (we use a batch size of 32). Some mini-batches have accidental "unlucky" tuples for the optimization, causing these and affecting the cost function and the accuracy metric. When we implemented Stochastic Gradient Descent (the same as when the batch size is one), we noticed that the cost function has even more anomalies. This does not occur, however, if we use (Full) Batch Gradient Descent. This is because it uses the training dataset entirely during each optimization epoch. However, Adam optimization does do the job better than most when the end justifies the means. Figure 7. Model loss and accuracy for emotion recognition from audio features. The simulation for gait generation worked as expected, Figure 9, achieving locomotion in all the simulated terrains. This gym environment is used to learn how to gracefully start the different actions and then stop them after reaching the target position. Walking uses a two-dimensional action space with a feedback component \(\pi(\text{o})\) with bounds [-0.4, 0.4], while for galloping we use a two-dimensional action space with a feedback component \(\pi(\text{o})\) with bounds [-0.3, 0.3]. For both walking and galloping a correct start contributes to void the drift effect generated by the gait in the resulting learned policy. 
For standing up the action space is equal to 1 with a feedback component \(\pi(\text{o})\) with bounds [-0.1, 0.1] used to optimize the signal timing. The signal function applies a 'brake', forcing the robot to assume a halfway position before completing the movement.

Figure 8: Model loss and accuracy for emotion recognition from video features

## 6 Conclusions and Future Scope

In this section, we discuss more practical features that can be added to this quadruped if one possesses the requisite resources and time. The primary point to focus on should be to avoid what we like to call "trip-fall-die"; to explain, we make clear that presently we do not have any dynamic obstacle avoidance implemented in the gait generation model. In actual fabrication, one must also consider adding a gyroscopic stabilization module to the robot. On the more human-centric side, one might consider gesture recognition using skeleton structure analysis of humans around the bot and provide the necessary feedback. Responding to a name (a name assigned to the robot) and following the source of sound to the caller may also be implemented. If the video emotion inference produces an urgent emotion, the robot also needs to move towards the detected human. In case of medical or other emergencies, simple modules can be integrated to call emergency services as necessary. Online training can be introduced to train for patterns of behaviour, or even something as simple as learning tricks that a dog could perform. A support dog module can also be included to comfort the owner in times of emotional need. Another interesting feature that we would love to see would be a sentinel mode, where the quadruped walks the perimeter of an assigned area, barks at faces unknown to its memory, and maybe even gives a warning before stunning any potential criminal attempting to break into the said premises.

As discussed above, this paper aims to establish the framework for simulating a quadruped robot that is emotionally intelligent and can primarily respond to audio-visual stimuli using motor or audio response. With that said, this work was mainly conceived as a proof of concept to showcase that, with the existing work in the literature, we are able to put together a system that can mimic a real dog. In our simulation, we show that the use of age-old techniques such as the MFCC algorithm can be brought back to life using newer architectures in an ensemble to complement it. This audio emotion detection performs relatively well, at approximately 63.47%, while needing minimal computational resources. However, it is important to note that this does not perform at par with the current works on the related datasets, viz., the ERANNs on the RAVDESS dataset with a top accuracy of 73.4% [34], or Zeta Policy training on the SAVEE dataset, with an accuracy of \(68.90\pm 0.61\%\) [34].

Figure 9: Quadruped Simulation

On the other hand, the video emotion detection system is a novel architecture that was created for this work and produces results that are almost at par with the state of the art. The accuracy attained (99.66%) is only topped by the models FAN [35] and Vit + SE [36], with respective accuracies of 99.7% and 99.8%. The other main component is locomotion (automated gait generation), which is simulated in this work by using the PPO algorithm and the TDoA algorithm.
The PPO algorithm is especially quick in learning due to its "on-policy" learning methodology allowing for the simulated dog to exhibit a very smooth gait through the various cadences and changes. This then allows for the simulated quadruped robot to be reactive to the simulated stimuli, thus allowing us to say that it performs as expected and meets the goal of this paper.
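A minimal sketch of this kind of PPO-based gait training, assuming a stable-baselines3-style API: the environment id, action wrapper, and hyperparameters below are illustrative placeholders, not the configuration actually used in this work.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class BoundedFeedback(gym.ActionWrapper):
    """Clip the learned feedback component pi(o) to a per-gait bound,
    e.g. 0.4 for walking, 0.3 for galloping, 0.1 for standing up."""
    def __init__(self, env, bound):
        super().__init__(env)
        self.bound = bound
        self.action_space = spaces.Box(-bound, bound,
                                       shape=env.action_space.shape,
                                       dtype=np.float32)

    def action(self, act):
        return np.clip(act, -self.bound, self.bound)

# "QuadrupedGait-v0" is a placeholder id for the custom simulation environment.
env = BoundedFeedback(gym.make("QuadrupedGait-v0"), bound=0.4)
model = PPO("MlpPolicy", env, n_steps=2048, batch_size=64, verbose=1)
model.learn(total_timesteps=1_000_000)
model.save("quadruped_walk_ppo")
```

In such a setup, PPO's on-policy updates are applied to trajectories collected with the clipped feedback action, which is one simple way to keep the learned policy inside the gait bounds quoted above.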
2307.15759
Lessons in Reproducibility: Insights from NLP Studies in Materials Science
Natural Language Processing (NLP), a cornerstone field within artificial intelligence, has been increasingly utilized in the field of materials science literature. Our study conducts a reproducibility analysis of two pioneering works within this domain: "Machine-learned and codified synthesis parameters of oxide materials" by Kim et al., and "Unsupervised word embeddings capture latent knowledge from materials science literature" by Tshitoyan et al. We aim to comprehend these studies from a reproducibility perspective, acknowledging their significant influence on the field of materials informatics, rather than critiquing them. Our study indicates that both papers offered thorough workflows, tidy and well-documented codebases, and clear guidance for model evaluation. This makes it easier to replicate their results successfully and partially reproduce their findings. In doing so, they set commendable standards for future materials science publications to aspire to. However, our analysis also highlights areas for improvement such as to provide access to training data where copyright restrictions permit, more transparency on model architecture and the training process, and specifications of software dependency versions. We also cross-compare the word embedding models between papers, and find that some key differences in reproducibility and cross-compatibility are attributable to design choices outside the bounds of the models themselves. In summary, our study appreciates the benchmark set by these seminal papers while advocating for further enhancements in research reproducibility practices in the field of NLP for materials science. This balance of understanding and continuous improvement will ultimately propel the intersecting domains of NLP and materials science literature into a future of exciting discoveries.
Xiangyun Lei, Edward Kim, Viktoriia Baibakova, Shijing Sun
2023-07-28T18:36:42Z
http://arxiv.org/abs/2307.15759v1
# Lessons in Reproducibility: Insights from NLP Studies in Materials Science ###### Abstract Natural Language Processing (NLP), a cornerstone field within artificial intelligence, has been increasingly utilized in the field of materials science literature. Our study conducts a reproducibility analysis of two pioneering works within this domain: "Machine-learned and codified synthesis parameters of oxide materials" by Kim et al., and "Unsupervised word embeddings capture latent knowledge from materials science literature" by Tshitoyan et al. We aim to comprehend these studies from a reproducibility perspective, acknowledging their significant influence on the field of materials informatics, rather than critiquing them. Our study indicates that both papers offered thorough workflows, tidy and well-documented codebases, and clear guidance for model evaluation. This makes it easier to replicate their results successfully and partially reproduce their findings. In doing so, they set commendable standards for future materials science publications to aspire to. However, our analysis also highlights areas for improvement such as to provide access to training data where copyright restrictions permit, more transparency on model architecture and the training process, and specifications of software dependency versions. We also cross-compare the word embedding models between papers, and find that some key differences in reproducibility and cross-compatibility are attributable to design choices outside the bounds of the models themselves. In summary, our study appreciates the benchmark set by these seminal papers while advocating for further enhancements in research reproducibility practices in the field of NLP for materials science. This balance of understanding and continuous improvement will ultimately propel the intersecting domains of NLP and materials science literature into a future of exciting discoveries. ## 1 Introduction Natural Language Processing (NLP), a dynamic subfield of artificial intelligence, has revolutionized how computers understand, interpret, and generate human language. This powerful technology, encompassing models ranging from the simplistic Bag of Words to the more advanced architectures such as Bidirectional Encoder Representations from Transformers (BERT) [1] and Generative Pre-trained Transformer (GPT) [2, 3], has found successful deployments across a broad range of applications, from search engines and voice assistants to machine translation and knowledge extraction. Recently, these NLP models have been harnessed to streamline research in the realm of materials science literature. [4]. Researchers have custom-trained these models to delve into the vast corpus of materials science literature, thereby facilitating the extraction of hidden patterns and accelerating scientific innovation. Through the automation of data extraction and analysis, these NLP models catalyze interdisciplinary collaboration, promote the discovery of novel materials [5], and provide valuable insights for efficient resource allocation. The intersection of NLP and materials science holds the potential not only to expedite the discovery process but also to usher novel materials and technologies swiftly to market [6]. Despite the burgeoning growth of this field, underscored by the proliferation of research articles, a critical aspect often overlooked is the reproducibility of these studies. 
To address this concern, our work focuses on the reproducibility analysis of two seminal publications in the field of NLP applied to materials science literature. The first publication, "Machine-learned and codified synthesis parameters of oxide materials" by Kim _et al._[7], published in _Scientific Data_ in 2017, employed NLP tools to extract synthesis parameters from over 76,000 relevant research articles. The methodologies and datasets introduced in this work have significantly contributed to the community, as evidenced by approximately 100 citations in various studies. The second publication is "Unsupervised word embeddings capture latent knowledge from materials science literature" by Tshitoyan _et al._[8], published in _Nature_ in 2019. This groundbreaking work applies NLP to distill materials science-related knowledge from the abstracts of more than three million research articles, resulting in a pioneering model, Mat2Vec. The impact of this research is unmistakable, having received over 400 citations since its publication. In our analysis, we reviewed the information and codebases provided in both papers and endeavored to replicate and reproduce the reported results to the greatest extent possible. Replication in this context refers to using the provided machine learning models, testing them against the same test cases referenced in the papers, and ensuring the results align. Reproduction, however, necessitates retraining and possibly rebuilding the reported models from scratch. In this study, we document each step of our procedure, offering feedback and recommendations along the way, thereby contributing to the ongoing conversation on research reproducibility in the field. We also evaluate a cross-comparison of the NLP models in the two studies, as they both use Word2Vec models via the same underlying software library [9]. The cross-comparison efforts serve as a benchmark for assessing the generalizability of the reported models to applications beyond the original dataset used for their model development. ## 2 Paper 1: Kim et al., 2017 In their study published in 2017, Kim _et al._ addressed a significant gap in materials science by devising a comprehensive, autonomously compiled database for oxide material synthesis planning, and it has since been used for developing data-informed synthesis strategies. The authors achieved this by harnessing the capabilities of NLP, particularly the Word2Vec model[10], to extract synthesis information from over 76,000 previously published research articles with oxide material synthesis information. Word2Vec is an NLP model that can transform words into numerical vectors when trained, capturing semantic relationships between words. Consequently, it facilitates the understanding of context and the detection of synonyms and antonyms, thereby enabling more advanced operations like "king" - "man" + "woman" = "queen". This capability has proven instrumental in the task of autonomous information extraction. For greater precision in material synthesis information extraction, the authors adopted a two-stage training process. Initially, the Word2Vec model was pre-trained on 640,000 unlabeled full-text articles on materials synthesis to learn accurate vector representations of domain-specific terms. Then another supervised model was trained using the Word2Vec model as a featurization step, and this combination of models was used to categorize words in articles into "material", "condition", "operation", etc. using 20 manually annotated articles. 
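A minimal sketch of the two-stage scheme just described (pre-trained Word2Vec embeddings used as features for a supervised word classifier), assuming the gensim and scikit-learn APIs; the model path, tokens, and labels are placeholders rather than the authors' released code or data.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Load a previously trained Word2Vec model; the path is a placeholder.
w2v = Word2Vec.load("synthesis_w2v.model")

def featurize(word):
    # Fall back to a zero vector for out-of-vocabulary tokens.
    try:
        return w2v.wv[word]
    except KeyError:
        return np.zeros(w2v.vector_size)

# Hypothetical hand-labelled tokens from annotated articles.
tokens = ["TiO2", "800", "annealed", "nitrate"]
labels = ["material", "condition", "operation", "material"]

X = np.vstack([featurize(t) for t in tokens])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict([featurize("sintered")]))
```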
The synthesis information was subsequently extracted using a heuristic-rule-guided analysis of the higher-level relationships between words. This analysis employed a grammatical parser's outputs, with relationships between chunks determined from parse-tree dependencies, and word-order proximity serving as a secondary measure. The extracted synthesis parameters were then filtered and compiled into the dataset, offering a comprehensive, robust resource for the synthesis of oxide materials. The authors included two GitHub repositories. The first, referred to in this study as the Model repository, hosts the pre-trained Word2Vec model implemented with the gensim Python package [9]. The second repository, which we refer to as the Plot repository, contains a tutorial Jupyter notebook explaining how the figures in the paper were generated. Here, we run the scripts provided in both repositories in an attempt to reproduce the work presented in this study. The goals are to better understand how the results were obtained and to test the model developed in this study on other materials systems beyond the examples given in the manuscript. In both the Model and Plot repositories, the authors provided explanations and instructions in the README.md files. The installation procedure runs successfully with no bugs (tested on both Ubuntu and macOS machines), and we are able to load the pre-trained Word2Vec model shared by the authors. A sample script for testing the pre-trained model is also provided in the Model repository. With the script, we are able to replicate the results presented by the authors. Notably, the database identified \(Li_{4}Ti_{5}O_{12}\) and six other lithium-containing oxides as the most similar materials to \(LiFePO_{4}\) (Table 1). It is worth mentioning that 'LFP', a commonly used abbreviation for lithium iron phosphate, was also recognized as one of the most similar materials to \(LiFePO_{4}\), with a similarity score of 0.667. This finding suggests that the model is capable of identifying the similarity between \(LiFePO_{4}\) and LFP, as one would expect from a Word2Vec model, which captures context-based similarity and thus tends to work especially well for capturing synonyms. However, the model was not trained to tell that LFP and \(LiFePO_{4}\) refer to the same compound. We also tested the code with a new word, \(CsPbI_{3}\), which was not included in the example script, and the output was scientifically meaningful (Table 1). Additionally, we were able to reproduce other functionalities of the model, such as detecting outlier processing conditions and determining the similarity metric between two known materials by providing their names in letters instead of chemical formulas (Table 2). Interestingly, we observed that whether the first letter is capitalized makes a difference in the similarity prediction, an effect that was not discussed in the original study. This is likely due to choices in the text preprocessing for the model, as it is reasonable to preserve upper and lower casing in text that contains chemical formulas, where the casing carries semantic meaning (e.g., \(LiCo_{3}\) is not the same as \(LiCO_{3}\)). Although the scripts and the pre-trained model snapshot offer adequate information to evaluate some example applications of the model, the code and details for the Word2Vec model's architecture and training process are not provided. 
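For concreteness, the similarity queries replicated above can be expressed against a gensim Word2Vec snapshot roughly as follows; the model path is a placeholder for the snapshot shipped in the Model repository.

```python
from gensim.models import Word2Vec

# Placeholder path for the pre-trained snapshot from the Model repository.
model = Word2Vec.load("oxide_synthesis_w2v.model")

# Nearest neighbours of a formula string, as in Table 1.
print(model.wv.most_similar("LiFePO4", topn=10))

# Pairwise similarity between material names, as in Table 2;
# casing matters because the vocabulary preserves case.
print(model.wv.similarity("titania", "zirconia"))
print(model.wv.similarity("Titania", "Zirconia"))
```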
Moreover, the training data used to train the model is not openly available, likely (and understandably) due to publisher policies from which the data was obtained. Therefore, we were not able to reproduce the Word2Vec model from scratch. Further, implementations of the rest of the workflow are not provided in the repository, so we don't really know how they are coded and applied, hence not able to replicate or reproduce the complete workflow. In the Plot repository, the authors included a notebook that presents a guide on how to generate the figures presented in their study. Although the code was originally written in Python 2, we were able to execute it in Python 3 with minor changes. This underscores the importance of explicitly indicating the programming language used in the repository. The authors made the data available in JSON format through an online data repository (figshare), and we were able to replicate all the figures presented in the notebook. However, it appears that the data required to generate the plots were hardcoded within the script, meaning that we could only reproduce the process of generating the figures but not the results themselves. Therefore, we cannot verify the accuracy of the results presented in the paper. Overall, the substantial contribution of this work to the field is unquestionable. The primary focus is to construct the database of synthesis parameters for oxide materials, and the authors have made a commendable effort to ensure \begin{table} \begin{tabular}{|c|c c|c c|c c|} \hline **Categories** & \multicolumn{2}{c|}{Example provided:LiFeO4} & \multicolumn{2}{c|}{Example replicated: LiFeO4} & \multicolumn{2}{c|}{New example: CsPbI3} \\ \hline **No.** & Materials & Similarity & Materials & Similarity & Materials & Similarity \\ **1** & Li4Ti5O12 & 0.768 & Li4Ti5O12 & 0.768 & CaVO3 & 0.703 \\ **2** & LiMn2O4 & 0.756 & LiMn2O4 & 0.756 & CH3NH3PbBr3 & 0.703 \\ **3** & LTO & 0.714 & LTO & 0.714 & SrVO3 & 0.670 \\ **4** & LiCoO2 & 0.707 & LiCoO2 & 0.707 & BiCuOSe & 0.656 \\ **5** & LiMnPO4 & 0.696 & LiMnPO4 & 0.696 & CSPbBr3 & 0.655 \\ **6** & FePO4 & 0.682 & FePO4 & 0.682 & wurzite & 0.654 \\ **7** & LFP & 0.667 & LFP & 0.667 & CH3NH3PbI3 & 0.649 \\ **8** & LiNi0.5Mn1.504 & 0.662 & LiNi0.5Mn1.504 & 0.662 & CZGS & 0.647 \\ **9** & FeF3 & 0.658 & FeF3 & 0.658 & Cu2ZnSnSe4 & 0.644 \\ **10** & LiV3O8 & 0.658 & LiV3O8 & 0.658 & In4Se3 & 0.642 \\ \hline \end{tabular} \end{table} Table 1: Reproduced results using the Word2Vec model for identifying the most similar materials given reference material. The first column is the provided sample output using the reference material “\(LiFePO_{4}\)”. The second column is the reproduced results using the same material. The third column is the result with \(CsPbI_{3}\) as the reference material, which was not included in the original work \begin{table} \begin{tabular}{|c|c c c c|} \hline & Material 1 & Material 2 & Similarity & Reference similarity \\ \hline **1** & titania & zirconia & 0.599 & 0.599 \\ **2** & anatase & rutile & 0.847 & \\ **3** & magnesia & lime & 0.577 & \\ **4** & hermatite & corundum & 0.524 & \\ \hline **5** & Titania & Zirconia & 0.668 & \\ **6** & Anatase & Rutile & 0.798 & \\ **7** & Magnesia & Lime & 0.635 & \\ **8** & Hermatite & Corundum & 0.626 & \\ \hline \end{tabular} \end{table} Table 2: Reproduced results using the Word2Vec model for calculating the similarity between two materials. 
(the reference similarity is provided by the original work) that their workflow can be replicated and built upon by other researchers. This is evident from their provision of the workflow structure, the pre-trained Word2Vec model, scripts for figure generation, and detailed README files in the repositories. Using the information at hand, we were able to replicate a portion of the results, which speaks to the value of the authors' approach and their commitment to reproducibility. From the perspective of enhancing reproducibility in future work, there are nevertheless areas that could be further clarified. For one, the paper discusses multiple models, but more explicit descriptions of these models and their hyperparameters would be beneficial. For instance, the authors employed a binary logistic regression classifier from the Scikit-learn library, and while the model is described, the inclusion of details such as the decision boundary and other hyperparameters could aid understanding. Similarly, the use of a pre-trained Word2Vec model followed by the training of a baseline and a human-trained neural network is mentioned, but the absence of detailed descriptions of the network layers and the training hyperparameters leaves room for further clarification. The inclusion of such details would promote a deeper understanding of the methods employed and facilitate smoother transitions for those building upon this work. In addition, it would be beneficial if the scripts provided for reproducing the plots in the paper could start directly from the database, rather than from hard-coded values. ## 3 Paper 2: Tshitoyan et al., 2019 Although the second paper, by Tshitoyan et al. [8], uses the same tool, namely the Word2Vec model, the goal of the work was different. In this work, the authors harnessed the power of the Word2Vec model to encode comprehensive knowledge about the various materials reported in the scientific literature and showed that the embeddings can be used to make useful predictions. To achieve this, the model was trained on the abstracts of over three million materials science-related research articles. Due to the nature of the model, carefully designed use of domain knowledge is needed to apply it and extract meaningful information. The authors conducted three key studies: (1) they plotted and performed simple arithmetic operations on the Word2Vec embeddings, thus demonstrating that physical knowledge was indeed encoded within the model; (2) they leveraged similarity scores to identify potential thermoelectric materials, demonstrating a novel method for obtaining valuable insights from the model; and (3) they trained and tested models on articles published up to various points in time, to underscore that the model, informed by existing literature, could reliably guide future research. Hence, the authors demonstrated that the Mat2Vec model can efficiently encapsulate materials science knowledge from the literature without the need for human supervision or labeling. The unsupervised model can anticipate functional materials years before their actual discovery, suggesting that a wealth of latent knowledge about future discoveries is nestled within past publications. The paper offers evidence for these claims by detailing multiple use cases of the Mat2Vec model through illustrative figures and text. Here, we try to replicate and reproduce the studies mentioned in the paper, and also extend them to other materials. 
We started by trying to install the Mat2Vec package, which is provided as supplemental material for the paper and open-sourced on GitHub along with pre-trained models (the Word2Vec model trained on abstracts of research articles). The codebase is well-documented and, in our opinion, straightforward to follow. A tutorial for installing and testing the package is also provided in the README file, and we found no issues installing the package following the steps (on both Ubuntu and macOS machines). Further, the pre-trained models can be loaded with no errors, and the test results of the similarity study given in the paper and tutorial can be reproduced with only minor numerical noise (Table 3). The figure demonstrating word relationships based on the embedding (Figure 1b of the original paper) (Figure 1) and the figure for the elemental embedding (Figure S1a of the original paper) (Figure 2) are also reproduced with the pre-trained model. In these cases, we also tried to extend the study to other materials and obtained reasonable results throughout, so the claims in the paper are further confirmed. Interestingly, for the elemental embedding study, we also tried using elemental symbols instead of names and got slightly different results (Figure 2c); the cluster boundaries appear not as clean as those based on the element-name embedding (Figure 2b). However, when we tried to train a new model with the package, we encountered an unexpected error and the model could not be trained. Further investigation showed that this was because one of Mat2Vec's dependencies, gensim [9], had been updated and the latest version is not compatible with the infrastructure of Mat2Vec. Downgrading gensim to version 3.7.1 solved this issue. Moreover, the original dataset used to train the models is not provided, which prevented us from reproducing the Mat2Vec model from scratch, so we were unable to reproduce the last study of the work. Presumably, this is because of potential IP concerns or publisher agreements. The authors did, however, provide a comprehensive explanation of how they acquired and processed the training data, as well as a training script with default training parameters included. Overall, this paper offers an impressive example of reproducibility, a practice from which the materials science community could learn. The authors have packaged their work in a tidy, comprehensive way that encourages further exploration and extension of their ideas. The code is not only well-documented but also supported with tutorials that guide users through installation and model testing. We commend the authors for their attention to detail, as evidenced by the ease with which we were able to reproduce and expand upon the results and figures in the paper. However, like any exploratory work, there are areas that offer room for improvement. During our investigation, we encountered two minor issues: the first pertained to the versioning of Mat2Vec's dependencies, specifically gensim, and the second related to the absence of the training data. Reflecting upon this experience, we propose that: 1) it could be beneficial to pin the versions of dependencies rather than linking them to their most recent iterations, thereby circumventing potential compatibility issues; 2) although the authors have justifiably omitted their training data, providing more detailed information about its acquisition and processing could bolster the interpretability and reliability of the results. 
A greater degree of transparency in this area, potentially extending to the sharing of processing scripts, would be advantageous. Delving deeper into the second point, the concern here is not necessarily model accuracy but also the potential for bias in the trained model that might be unnoticed without access to the original data. The inability to peer-review the dataset and the training process means the community may inadvertently overlook biases. These could take the form Figure 1: Reproducibility study of word relationships based on the predicted word embedding using the Mat2Vec model. The original plot (a) is taken from the original paper [8] (Figure 0(b)). Word embeddings for Zr, Cr, and Ni, their principal oxides, and crystal symmetries (at standard conditions) projected onto two dimensions using principal component analysis and represented as points in space. The relative positioning of the words encodes materials science relationships, such that consistent vector operations exist between words that represent concepts such as ‘oxide of \({}^{*}\) and ‘structure of’. Plot (b) is our reproduction but with more examples. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Word** & \multicolumn{3}{c|}{thermoelectric} & \multicolumn{2}{c|}{band\_gap} \\ \hline & \multicolumn{2}{c|}{Example provided} & \multicolumn{2}{c|}{Example replicated} & \multicolumn{2}{c|}{New examples} \\ \hline **No.** & Output & Similarity & Output & Similarity & Output & Similarity \\ **1** & thermoelectrics & 0.844 & thermoelectrics & 0.844 & bandgap & 0.935 \\ **2** & thermoelectric\_properties & 0.834 & thermoelectric\_properties & 0.834 & band\_,\_gap & 0.933 \\ **3** & thermoelectric\_power\_generation & 0.793 & thermoelectric\_power\_generation & 0.793 & band\_gaps & 0.861 \\ **4** & thermoelectric\_figure\_of\_merit & 0.792 & thermoelectric\_figure\_of\_merit & 0.792 & direct\_band\_gap & 0.851 \\ **5** & seebek\_coefficient & 0.775 & seebek\_coefficient & 0.775 & bandgaps & 0.819 \\ **6** & thermoelectric\_generators & 0.7649 & thermoelectric\_generators & 0.764 & optical\_band\_gap & 0.814 \\ **7** & figure\_of\_merit\_ZT & 0.759 & figure\_of\_merit\_ZT & 0.759 & optical\_band\_gap & 0.813 \\ **8** & thermoelectric\_ & 0.752 & thermoelectric\_ & 0.752 & band\_gap\_energies & 0.800 \\ **9** & Bi2Te3 & 0.748 & Bi2Te3 & 0.748 & direct\_bandgap & 0.788 \\ **10** & thermoelectric\_modules & 0.743 & thermoelectric\_modules & 0.743 & eg & 0.787 \\ \hline \end{tabular} \end{table} Table 3: Reproduced results using the Word2Vec model for finding the most similar words given a reference word. The first column is the provided sample output using the word “thermoelectric”. The second column is the reproduced results using the same word. The third column is the result with \(band_{g}ap\) as the reference phrase, which was not included in the original work of exclusion or selection bias, stemming from the exclusion of foreign language abstracts or certain types of articles. Similarly, a focus on abstracts related to inorganic materials and a high recall classifier may unintentionally introduce a confirmation bias. Furthermore, the tokenization process the authors adopted, which reportedly improved their results, could inadvertently result in algorithmic bias. Thus, the community could benefit from a broader discussion about the fairness, interpretability, and potential biases in machine learning models in materials science. 
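A minimal sketch of the kind of two-dimensional projection behind Figures 1 and 2 above, assuming the pre-trained snapshot loads as a gensim Word2Vec model; the model path and word list are illustrative placeholders, not the authors' plotting code.

```python
import matplotlib.pyplot as plt
from gensim.models import Word2Vec
from sklearn.decomposition import PCA

model = Word2Vec.load("mat2vec_pretrained.model")  # placeholder path

# Elements, their principal oxides, and crystal symmetries, as in Figure 1.
words = ["Zr", "ZrO2", "tetragonal", "Cr", "Cr2O3", "Ni", "NiO"]
vectors = [model.wv[w] for w in words]

# Project the high-dimensional embeddings onto two principal components.
coords = PCA(n_components=2).fit_transform(vectors)

for (x, y), w in zip(coords, words):
    plt.scatter(x, y)
    plt.annotate(w, (x, y))
plt.show()
```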
## 4 Discussion In this section, we expanded our reproducibility study by conducting a comparison of the output generated by the two NLP models mentioned in the respective papers. The aim of this comparative analysis is to gain a better understanding of the application domains of each model, despite the absence of fully open-sourced details for either model's hyperparameters and training datasets. Table 4 listed the most similar compounds outputted using Model 2 (NLP model Figure 2: Reproducibility study of elemental embedding using the Mat2Vec model. Two reproducibility studies are conducted: one based on the chemical element names (b), and one based on the chemical symbols (c). of paper 2) for "LiFePO4", an example chemical formula provided in Paper 1, Additionally, it lists the most similar words generated by Model 1 (NLP model from Paper 1) for the term "thermoelectric," which serves as an example word from Paper 2. We found that both Model 1 and Model 2 yielded reasonable output for the tested words from the other model, indicating some evident similarities. When considering the overlap between the top 10 sets of both models, there appears to be a significant degree of commonality, which is expected given that both models belong to the same family and are trained on corpora within the same domain. It is worth noting that the preprocessing stage, particularly the treatment of underscore-joined phrases, has a clear impact on "application" words like "thermoelectric." This provides strong evidence that text processing, even if it involves simple "cleaning up" techniques, can greatly influence end-to-end reproducibility. The choice of preprocessing method may depend on the specific use case. Furthermore, the results obtained for chemical formulas are also interesting. Based on the similarity scores listed in Table 1 and Table 4, it is likely that the results from Paper 2 are more precise at identifying the most similar chemical formula to LiFePO4, in terms of the properties of the materials. While it is acknowledged that some differences may arise due to variations in text sources (such as full text versus abstract) and the specific list of papers used, the normalization of formulas in Mat2Vec removes one source of variance. In Paper 1, potential noise might have been introduced by up-weighting formulas written in a more "canonical" form, which may result in missing some of the less common instances. It is difficult to determine if this truly impacts the disparities in the lists, but it is noteworthy that preprocessing choices, such as formula normalization, can influence the reproducibility and the compatibility of models across related works. We have also tried to reproduce other plots in the second paper with the model provided by the first paper, and the results are given in SI (Section S1.1). Although they were not the intended applications of the model, the model performed reasonably well in giving similar embeddings to similar elements (Figure S1), as well as predicting similar relationships between similar concepts (Figure S2). Since the advent of the works reviewed, the field of Natural Language Processing has seen rapid advancements, with the integration of increasingly sophisticated NLP algorithms such as large language models (LLM) into materials science. This progression has, however, resulted in a diminishing ratio of studies that are readily reproducible. There are several challenges to reproducibility. 
One such challenge arises when these complex algorithms are trained on extensive datasets, which are often inaccessible due to publisher agreements. An example is the MatBERT, a transformer model specifically for the materials science domain, proposed by Trewartha et al. (2022), trained on a corpus of 2 million papers from materials science journals that are not publicly available [11]. Another challenge is the reliance on models or tools that may be proprietary or lack permissions for industrial use. As an illustration, Bran et al. (2023) developed an LLM tool to address reasoning-intensive chemical tasks. They acknowledged that the released package contains a limited set of tools, and the results it produces differ from those reported in the paper [12]. Furthermore, training large models, especially LLMs, often requires substantial computational resources, which poses yet another obstacle to reproducibility. The emerging trend of lab automation has also added an additional layer of complexity to this issue, involving intricate lab equipment setups. For instance, Skreta et al. (2023) employed iterative prompting of LLM to operate lab robots [13]. Reproducing such setups effectively demands that researchers be well-versed in NLP and possess comparable resources. These limitations in resource access, combined with algorithmic complexity and the absence of open data, raise a crucial question: What aspects of the research can be reproduced under these constraints? Understandably, the more complex an algorithm, the less likely it is to be replicated. Nonetheless, the cornerstone of scientific integrity is the ability to reproduce and verify results. To promote positive progress in the material science NLP domain, it is our recommendation that future publications make their methods section as transparent as possible. No details should be left out, especially when a complex algorithm is used. Emphasizing reproducibility in upcoming studies will help to build a more robust and reliable scientific community. \begin{table} \begin{tabular}{|c|c c|c c|} \hline **Word** & \multicolumn{2}{c|}{thermoelectric} & \multicolumn{2}{c|}{LiFePO4} \\ \hline **Model** & \multicolumn{2}{c|}{Model 1} & \multicolumn{2}{c|}{Model 2} \\ \hline **No.** & Output & Similarity & Output & Similarity \\ **1** & Thermoelectric & 0.727 & LFP & 0.874 \\ **2** & pyroelectric & 0.605 & Li3012P3V2 & 0.8594 \\ **3** & thermoelectrical & 0.602 & LiMn04P & 0.843 \\ **4** & photovoltaic & 0.595 & Li4012Ti5 & 0.828 \\ **5** & optoelectronic & 0.587 & FeLi2O4Si & 0.828 \\ **6** & multiferroic & 0.580 & CoLiQ4P & 0.816 \\ **7** & electrical & 0.567 & Li2Mn04Si & 0.804 \\ **8** & TCO & 0.567 & LiMn2O4 & 0.793 \\ **9** & NTC & 0.558 & LVP & 0.772 \\ **10** & piezoelectric & 0.555 & Fe04P & 0.769 \\ \hline \end{tabular} \end{table} Table 4: Results from running the other paper’s NLP model ## 5 Conclusion Both studies provided well-written manuscripts describing their workflows in adequate detail and clean and well-documented codebases. Their repositories also have clear guidance from installing the packages, loading pre-trained models, and evaluating the outcome of the models to reproducing the results in the papers. With the information provided, we could replicate and reproduce most of the results shown in both articles. Further, we were also able to extend the analysis to other materials and further confirmed the validity of the models and claims in the papers. 
Therefore, future research publications in materials science and related fields could take inspiration from their practices for improved reproducibility. However, despite the earnest efforts of the authors of both papers, we also encountered a few general challenges in our study. First, the training data from the published scientific articles were not open-sourced due to copyright restrictions. Second, the model training process remains a black box to the readers as the models were not released. Also, some details of the model's architecture and training procedure are missing. Both factors prevented future researchers from reproducing the works from scratch or fine-tuning the model for other application domains. However, these two observations have been common in NLP research beyond materials science. Finally, we have observed some backward compatibility issues due to updates in the dependencies, which could be circumvented by better specifying the versions of all underlying dependencies. Natural Language Processing has seen rapid advancements, integrating increasingly complex NLP algorithms into materials science. This progression, however, has led to a decrease in the ratio of studies that can be easily reproduced. Challenges include large datasets often inaccessible due to publisher agreements, models or tools that may be proprietary or lack permissions for industrial use, and substantial computational resources required for training large models. Reproducing complex lab automation setups, which include intricate lab equipment, also poses an additional challenge. In light of these considerations, we recommend that future publications prioritize transparency, especially when complex algorithms are used. Every detail in the methods section should be disclosed and explained as clearly as possible. This emphasis on reproducibility will be instrumental in fostering a more robust and reliable scientific community. Finally, we extend our acknowledgment to the authors of the two papers analyzed in this study for their considerable contributions to the field of materials science. Despite the challenges, their work stands as a testament to the power of meticulous research and transparent reporting. We encourage the research community to continue striving for transparency and reproducibility, strengthening the collective scientific enterprise. ## 6 Author Contributions XL and SS conducted the experiments. All authors contributed to writing and editing the paper. EK was not involved in the direct execution of any reproducibility experiments, as he was an author on the first paper discussed in this work.
2310.16847
Maximal Mass Neutron Star as a Key to Superdense Matter Physics
We propose a universal approximation of the equation of state of superdense matter in neutron star (NS) interiors. It contains only two parameters, the pressure and the density at the center of the maximally massive neutron star. We demonstrate the validity of this approximation for a wide range of different types of equations of state, including both baryonic and hybrid models. Combined with recently discovered correlations of internal (density, pressure, and speed of sound at the center) and external (mass, radius) properties of a maximally massive neutron star, this approximation turns out to be an effective tool for determining the equation of state of superdense matter using astrophysical observations.
D. D. Ofengeim, P. S. Shternin, T. Piran
2023-10-23T03:23:07Z
http://arxiv.org/abs/2310.16847v3
# Maximal Mass Neutron Star as a Key to Superdense Matter Physics # Maximal Mass Neutron Star as a Key to Superdense Matter Physics **D. D. Ofengeim\({}^{1}\)1, P. S. Shternin\({}^{2}\), T. Piran\({}^{1}\)** \({}^{1}\)_The Herbew University of Jerusalem, Jerusalem, Israel_ \({}^{2}\)_Ioffe Institute, Saint Petersburg, Russia_ Footnote 1: E-mail: [email protected] **Abstract**--We propose a universal approximation of the equation of state of superdense matter in neutron star (NS) interiors. It contains only two parameters, the pressure and the density at the center of the maximally massive neutron star. We demonstrate the validity of this approximation for a wide range of different types of equations of state, including both baryonic and hybrid models. Combined with recently discovered correlations of internal (density, pressure, and speed of sound at the center) and external (mass, radius) properties of a maximally massive neutron star, this approximation turns out to be an effective tool for determining the equation of state of superdense matter using astrophysical observations. **DOI:** 10.31857/S... Keywords: _neutron stars, superdense matter, equation of state._ ## 1 Introduction Determining the equation of state of superdense matter is one of the central problems of neutron star astrophysics (Haensel et al. 2007). A natural approach to this problem is to infer observational constraints on the NS physical characteristics like mass \(M\), radius \(R\), moment of inertia \(I\), etc., and then compare these with the predictions of NS structure theory (Lattimer, 2021). On this path, one usually takes a certain microphysical model, which allows one to calculate the equation of state, and by comparing the results of modeling the NS structure with observations, one then constrains the parameters of the original model. However, the properties of matter at densities significantly larger than the saturated nuclear matter density (\(\rho_{0}=2.8\times 10^{14}\) g/cm\({}^{3}\)) are practically impossible to study in terrestrial laboratories. When constructing models of such matter, one has to rely on extrapolations of various methods and approaches that have proven themselves under less extreme conditions. These extrapolations are based on a variety of ideas about the microphysics of superdense nuclear matter. As a result, the "astrophysical market" offers a huge number of different equations of state, which at first glance are completely different from each other (see section 2 below). It is, therefore, interesting to identify universal relations between NS parameters that weakly depend on specific microphysical models (for example, Lattimer and Prakash 2001; Bejger and Haensel 2002; Yagi and Yunes 2013a,b; Zhang and Yagi 2020; Ofengeim 2020; Cai et al., 2023). Such relations are intended to describe in a unified way the combinations of NS characteristics encountered within the framework of various approaches to modeling superdense matter. They are convenient to use for interpreting observations as restrictions on possible combinations of stellar parameters become largely model-independent. This work is devoted to the development of such an approach. Many observational properties of NSs (\(M\), \(R\), etc.) are determined by the solution of the Tolman-Oppenheimer-Volkoff equations (Tolman, 1939; Oppenheimer and Volkoff, 1939). To close this system of equations, it is necessary to specify the relation between pressure and density, \(P=P(\rho)\). 
This rela tion, the equation of state (Haensel et al. 2007), is in the one-to-one correspondence to the \(M-R\) curve of NSs (Lindblom 1992). That is if we manage to accurately determine the \(M-R\) curve, the problem of finding the equation of state \(P(\rho)\) will also be solved. In reality, we can only measure masses and radii of individual NSs, and this only with a finite accuracy, which significantly complicates the situation. Whatever the equation of state is, the \(M-R\) curve always have a global maximum, \(M_{\rm TOV}\), along the mass axis (e.g. Shapiro, Teukolsky, 1985). However, the properties of the maximally massive neutron star (MMNS) are different for each equation of state model. If the latter predicts \(M_{\rm TOV}\) less than the mass of any of the observed NS masses, then the model is incorrect. Therefore, any observation of a sufficiently massive NS imposes significant constraints on the equation of state. The value of \(M_{\rm TOV}\) specifies the natural scale of NS masses for a given equation of state. Similar characteristics are the radius of the MMNS, \(R_{\rm TOV}\), the density at its center, \(\rho_{\rm TOV}\), and the corresponding pressure, \(P_{\rm TOV}\). Note that \(\rho_{\rm TOV}\) and \(P_{\rm TOV}\) (for the true equation of state realized in nature) are the maximum possible density and pressure of matter in a stationary object in the contemporary Universe. In what follows, we will also need the speed of sound at the center of the MMNS, \(c_{\rm 5TOV}\). Ofengeim (2020) showed that among the parameters characterizing the MMNS (\(M_{\rm TOV}\), \(R_{\rm TOV}\), \(\rho_{\rm TOV}\), \(P_{\rm TOV}\), \(c_{\rm 8TOV}\)), only two are independent. This was confirmed on a set of 50 equations of state for nucleonic and hyperonic compositions. Recently, a possible explanation for these findings was given in Cai et al. (2023) based on a perturbative analysis of the dimensionless Tolman-Oppenheimer-Volkoff equations. In the present work, the existence of correlations between these parameters is confirmed on an extended sample of 162 equations of state, including nucleon, hyperon, and hybrid models (i.e., with a quark inner core), and new compact approximation formulas are proposed to describe these correlations. In addition, we constructed a universal approximation of the \(P(\rho)\) relations for \(\rho\gtrsim 3\rho_{0}\) which is based on only two parameters, namely \(\rho_{\rm TOV}\) and \(P_{\rm TOV}\). Taking into account the one-to-one correspondence between the pair \(\rho_{\rm TOV},P_{\rm TOV}\) and the pair \(M_{\rm TOV},R_{\rm TOV}\), this provides a model-independent tool for a direct transformation of observational constrains on the properties of MMNS into constraints on the equation of state of superdense matter at densities that are most difficult to reach in laboratory research. ## 2 The Zoo of Equations of State We consider 162 NS equations of state. Among them, 97 have nucleonic composition, 32 allow for the appearance of hyperons and \(\Delta\)-isobars (but do not allow for free quarks) in NS interiors, and 33 equations of state predict the presence of internal quark core of the star. Main sources for equations of state are the CompOSE database1(Typel et al., 2015), the collection of models from Read et al. (2009) and models used earlier by Ofengeim (2020). A complete list of the equations of state used containing their main characteristics and references is given in Appendix. Footnote 1: [https://compose.obspm.fr](https://compose.obspm.fr). 
The considered models are based on a variety of approaches to modeling nuclear interactions and the microphysics of superdense matter. These include obsolete models of a free degenerate neutron gas and a free \(npe\) gas; models based on effective energy density functionals, including purely phenomenological ones (the PAL, PAPAL, and BGN families), non-relativistic Skyrme-type (SLy, BSk, SkI, etc.) or Gogny (D1M*) functionals, and numerous relativistic mean-field functionals; models derived from microscopic baryon interaction potentials using many-body methods (APR, WFF, BBB); and some other models. Among the hybrid equations of state, both those in which a first-order phase transition occurs between hadronic and quark matter (for example, CMF, VQCD) and those in which there is a quark-hadron crossover (QHC) between these phases are present. Figure 1 shows how this sample of models is distributed over \(M_{\rm TOV}\) (panel a), over \(c_{\rm s_{TOV}}\) (panel b), and over the ratio of the radius \(R_{1.4}\) of a "canonical" NS with a mass of \(1.4M_{\odot}\) to the radius of the most massive star, \(R_{\rm TOV}\) (panel c). Notice that all 162 equations of state satisfy the condition \(R_{\rm TOV}<R_{1.4}\). Realistic equations of state should presumably satisfy the causality condition \(c_{\rm s_{TOV}}<c\), where \(c\) is the speed of light in vacuum (Haensel et al. 2007), and explain present observations of massive radio pulsars (Demorest et al., 2010; Antoniadis et al., 2013; Fonseca et al., 2021). About one third of the equations of state of our sample violate the second condition, and about 10% violate the first one. We believe, however, that it is important to include both realistic and non-realistic models in order to comprehensively explore the generality of the detected correlations and the constructed approximations. We also note that the speed of sound is not always a monotonic function of the density (especially for non-nucleonic models), and \(c_{\rm s_{TOV}}\) is not necessarily the highest possible speed of sound within a star. However, it turns out that the more stringent condition, \(c_{\rm s}<c\) in the entire volume of the star, is violated by the same number of equations of state as the condition \(c_{\rm s_{TOV}}<c\). ## 3 Correlations of MMNS Characteristics There are strong correlations between the values of \(M_{\rm TOV}\), \(R_{\rm TOV}\), \(\rho_{\rm TOV}\), \(P_{\rm TOV}\), and \(c_{\rm s_{TOV}}\) calculated within different models (Ofengeim, 2020; Cai et al., 2023), for which we propose here new fitting formulae: \[M_{\rm TOV}=\frac{\rho_{\rm TOV}{\cal R}^{3}}{f_{M}(P_{\rm TOV},\rho_{\rm TOV})}, \tag{1a}\] \[R_{\rm TOV}=\frac{{\cal R}}{f_{R}(P_{\rm TOV},\rho_{\rm TOV})}, \tag{1b}\] \[c_{\rm s_{TOV}}=\sqrt{G\rho_{\rm TOV}}\ {\cal R}\ \frac{f_{c}(P_{\rm TOV},\rho_{\rm TOV})}{f_{R}(P_{\rm TOV},\rho_{\rm TOV})}. \tag{1c}\] These formulas differ from those proposed earlier by Ofengeim (2020) and Cai et al. (2023) and are based on dimensional analysis. The key to their construction is the introduction of the characteristic "Jeans" scale for the radius \[{\cal R}=\sqrt{\frac{P_{\rm TOV}}{G\rho_{\rm TOV}^{2}}} \tag{1d}\] and the corresponding scale for the mass \(\rho_{\rm TOV}{\cal R}^{3}\). 
The dimensionless functions \(f_{M}\), \(f_{R}\) and \(f_{c}\) have the same structure \[f_{i}=c_{i}\left(\frac{P_{\rm TOV}}{\rho_{\rm TOV}c^{2}}\right)^{p_{i}}\left( \frac{\rho_{\rm TOV}}{\rho_{0}}\right)^{q_{i}}+d_{i}, \tag{1e}\] where the optimal values of the fitting parameters \(c_{i}\), \(d_{i}\), \(p_{i}\) and \(q_{i}\) are given in table 1. Figure 2 clearly demonstrates the accuracy of the obtained approximations. It can be seen that the correlations of \(M_{\rm TOV}\) and \(R_{\rm TOV}\) with \(P_{\rm TOV}\), \(\rho_{\rm TOV}\) are described with good accuracy, while the approximation formula for \(c_{\rm s_{TOV}}\) shows a larger scatter. Among the 162 models considered, there are two that stand out significantly from these correlations. These are the JJ(VQCD)soft and JJ(VQCD)intermediate models, for which the center of the most massive NS is at the verge of the region of the first-order phase transition. In this case, the maximum of the \(M-R\) curve turns out to be non-smooth, which affects the correlation \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \(i\) & \(p_{i}\) & \(q_{i}\) & \(c_{i}\) & \(d_{i}\) & rrms & max \\ \hline \(M\) & 1.41 & 0.0177 & 5.86 & 0.273 & 0.86\% & 5.4\% \\ \(R\) & 0.518 & \(-\)0.0755 & 2.74 & \(-\)0.0741 & 2.0\% & 9.3\% \\ \(c\) & 0.76 & \(-\)0.016 & 3.27 & 0.19 & 4.9\% & 22\% \\ \hline \hline \end{tabular} \end{table} Table 1: Parameters of the approximations given in equation (1). The last two columns show the relative root mean squared fit error (rrms) and the maximal relative fit error (max). Models-outliers JJ(VQCD)soft and JJ(VQCD)intermediate are not included in the calculation of these errors. Figure 1: Distributions of (a) NS maximal mass, (b) speed of sound in the center of the star, (c) \(R_{1.4}/R_{\rm TOV}\) ratio for the sample of equations of state used. Different hatching patterns corresponds to different model types (nucleonic, hyperonic, hybrid). \(M_{\mbox{\tiny TOV}}(\rho_{\mbox{\tiny TOV}},P_{\mbox{\tiny TOV}})\). These models are not shown in the bottom panels of Figure 2. In all other cases, when the maximum of the \(M-R\) curve is smooth, no significant deviations from correlations (1) are observed. ## 4 The Universal Approximation \(P(\rho)\) If we express the dependencies \(P(\rho)\) in dimensionless variables \(P/P_{\mbox{\tiny TOV}}\), \(\rho/\rho_{\mbox{\tiny TOV}}\), the behavior of all 162 models will be very similar, especially when approaching the center of the MMNS. 
A good universal approximation at \(\rho>3\rho_{0}\) is
\[P=P_{\rm TOV}\,g_{P}\!\left(\rho/\rho_{\rm TOV},\,c^{\rm(fit)}_{s\,\rm TOV}\right), \tag{2}\]
where \(g_{P}\) is a dimensionless fitting function of the density expressed in units of \(\rho_{\rm TOV}\). Typical examples of the application of this approximation to nucleonic, hyperonic, and hybrid equations of state are shown in Figure 3, and the distribution of the fit errors over the whole sample of equations of state is shown in Figure 4.
The largest error, on the contrary, characterizes the fit quality at low densities. Only one model, RSGMT(QMC700), gives a large, \(\sim 20\%\), root mean square error, i.e. it is unsatisfactorily described by the proposed approximations at any density. However, it has extreme values even for those properties that are subject to laboratory testing, and therefore seems unrealistic.

## 5 Applications to Observational Constraints on MMNSs

We turn now to demonstrate how the proposed universal correlations enable us to impose model-independent constraints on the equation of state of superdense matter based on observational data. Figure 5(a) shows the MMNS positions on the \(M,R\) plane for all considered equations of state, i.e., each symbol is a point \((R_{\rm TOV},M_{\rm TOV})\) corresponding to a specific equation of state. Different types of symbols correspond to different types of equations of state (nucleonic, hyperonic, hybrid). In addition, open symbols indicate "superluminal" models for which \(c_{\mathrm{s}_{\mathrm{TOV}}}\geqslant c\). The shaded area in the upper right corner shows the so-called maximum compactness limit for possible values of NS masses and radii (for example, Lattimer and Prakash, 2016). Figure 5(b) shows the same data, but on the \(P,\rho\) plane (each symbol is a point \((\rho_{\rm TOV},P_{\rm TOV})\)); however, for readability, a scale was chosen at which some of the models fall outside the boundaries of the figure. There are various constraints on \(M_{\rm TOV}\) in the literature with varying degrees of certainty. The maximal mass is limited from below by observations of the most massive NSs. As an example of such a restriction, we chose the condition from Kandel and Romani (2023), namely \(M_{\rm TOV}>2.19M_{\odot}\) at the \(2\sigma\) significance level. It is based on measurements of the masses of pulsars in binary systems. This limit is shown in Figure 5(a) with a long-dashed line. In addition, Rezzolla et al. (2018), based on an analysis of the neutron star merger event GW170817 (Abbott et al., 2017), proposed an upper limit on the mass of the MMNS, \(M_{\rm TOV}<2.33M_{\odot}\) at the \(2\sigma\) significance level. This limit is shown in Figure 5(a) with a short-dashed line.
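As a purely illustrative aside (not part of the original analysis), the sketch below shows how such a mass window can be applied to a table of maximum-mass configurations. It is written in Python, and the model names, masses and radii are hypothetical placeholders rather than values taken from this work.

```python
# Illustrative only: select hypothetical EoS models whose maximum mass M_TOV
# falls in the window 2.19 M_sun < M_TOV < 2.33 M_sun discussed in the text.
M_TOV_LOWER = 2.19  # Kandel & Romani (2023), 2-sigma lower bound [M_sun]
M_TOV_UPPER = 2.33  # Rezzolla et al. (2018), 2-sigma upper bound [M_sun]

# Hypothetical (name, M_TOV [M_sun], R_TOV [km]) triples, not real models.
models = [
    ("EoS-A", 2.05, 10.8),
    ("EoS-B", 2.25, 11.2),
    ("EoS-C", 2.40, 12.5),
]

allowed = [(name, m, r) for name, m, r in models if M_TOV_LOWER < m < M_TOV_UPPER]
print(allowed)  # -> [('EoS-B', 2.25, 11.2)]
```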
In the following we use these constraints for illustrative purposes and demonstrate how observational constraints can be used to put limits on the equation of state. Direct inference of the MMNS radius from observations is difficult. We will resort to the following considerations. As shown in Figure 1(c), for all 162 equations of state \(R_{\rm TOV}\) turns out to be less than the radius \(R_{1.4}\) of the "canonical" NS with \(M=1.4M_{\odot}\). Inferences of the constraints on the radii of medium-mass NSs are simpler due to their greater accessibility for various types of observations (for example, Degenaar and Suleimanov, 2018).

Figure 3: Typical examples of the application of the approximation (2) (lines) to nucleonic (BSk24, circles), hyperonic (CMF-1, triangles) and hybrid (QHC21_A, squares) equations of state (symbols). Each curve is plotted in the region \(3\rho_{0}\leqslant\rho\leqslant\rho_{\rm TOV}\). The lower panel shows the relative approximation errors.

Figure 4: The distribution of the errors of the fit (2) for the whole sample of equations of state.

Conservatively, we employ the results of the analysis of the gravitational wave event GW170817 from the work of Annala et al. (2018), in which \(R_{1.4}<13.6\,\mathrm{km}\) was obtained at the 90% significance level3. We set the same condition on \(R_{\mathrm{TOV}}\), which is shown in Figure 5(a) with the dash-dotted line. Footnote 3: Stronger limits can be obtained from other observations. These three constraints can be mapped onto the \(P,\rho\) plane by considering the fits (1a) and (1b) as equations for determining \(\rho_{\mathrm{TOV}}\) and \(P_{\mathrm{TOV}}\). Their solutions are depicted in Figure 5(b) with lines of the same styles as the corresponding constraints in Figure 5(a). Finally, realistic equations of state must satisfy the theoretical restriction \(c_{\mathrm{s}_{\mathrm{TOV}}}<c\). Using formulas (1), this boundary is shown on both panels of Figure 5 with solid lines. The true MMNS should be located below these lines. Notice that due to the approximate nature of the fitting formulas, a small number of superluminal models (empty symbols) fall below, and a small number of subluminal ones (filled symbols) fall above, the solid lines. This uncertainty, as well as the uncertainties in transferring observational constraints from \(M_{\mathrm{TOV}}\) and \(R_{\mathrm{TOV}}\) to \(P_{\mathrm{TOV}}\) and \(\rho_{\mathrm{TOV}}\), must be taken into account in a thorough analysis, but here we neglect them in our illustrative analysis. Combining these constraints, we obtain the allowed regions of \((M_{\mathrm{TOV}},\,R_{\mathrm{TOV}})\) and \((P_{\mathrm{TOV}},\,\rho_{\mathrm{TOV}})\), which are shown in Figure 5 as doubly hatched regions. Using approximation (2), from each point of this region one can draw a curve towards lower densities. The set of these curves will then occupy the area shown in Figure 5(b) as a single-hatched region. This area represents the final constraint on the equation of state that is constructed using this method. It appears to be at least as stringent as the limits proposed in other recent work (Raaijmakers et al., 2021; Zhang et al., 2023). But note that this result is purely illustrative. A more thorough analysis requires taking into account uncertainties in the approximations (1) and (2) and analyzing the systematic errors of the employed observations, as well as accounting for other observations that we have not used here.
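To illustrate the mapping step described above (treating the fits as equations for \(\rho_{\mathrm{TOV}}\) and \(P_{\mathrm{TOV}}\)), the following hedged sketch numerically inverts a mass fit of the generic form \(M_{\rm TOV}=f_M(\rho_{\rm TOV},P_{\rm TOV})\). The function `f_M` below is a made-up placeholder with a plausible monotonic shape and is not the actual fit (1a) of the paper; the bracketing interval is likewise only a guess.

```python
# Illustrative inversion of a mass fit M_TOV = f_M(rho_TOV, P_TOV).
# f_M is a placeholder, NOT Eq. (1a) of the paper.
import numpy as np
from scipy.optimize import brentq

def f_M(rho, P):
    """Hypothetical fit: M_TOV [M_sun] as a function of rho [g/cm^3], P [dyn/cm^2]."""
    return 2.0 * (P / 1.0e36) ** 0.4 * (rho / 1.0e15) ** (-0.2)

M_bound = 2.19                              # lower mass bound [M_sun]
rho_grid = np.linspace(1.0e15, 3.0e15, 30)  # central densities [g/cm^3]

# For each density, find the pressure at which the fit reaches the bound;
# the resulting (rho, P) pairs trace the bound on the P-rho plane.
boundary = []
for rho in rho_grid:
    g = lambda P, rho=rho: f_M(rho, P) - M_bound
    P_star = brentq(g, 1.0e34, 1.0e38)      # bracketing interval is a guess
    boundary.append((rho, P_star))
```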
Figure 5: MMNSs positions on the \(M,R\) (a) and \(P,\rho\) (b) planes. Filled symbols show subluminal equations of state (\(c_{\mathrm{s}_{\mathrm{TOV}}}<c\)), open symbols correspond to superluminal ones (\(c_{\mathrm{s}_{\mathrm{TOV}}}\geq c\)). Solid lines show the boundaries \(c_{\mathrm{s}_{\mathrm{TOV}}}=c\), obtained using approximations (1). Long-dashed lines show the constraint from Kandel and Romani (2023), \(M_{\mathrm{TOV}}>2.19M_{\odot}\), short-dashed lines show the constraint \(M_{\mathrm{TOV}}<2.33M_{\odot}\) from Rezzolla et al. (2018). The dash-dotted line shows the constraint \(R_{\mathrm{TOV}}<R_{1.4}<13.6\,\mathrm{km}\) (Annala et al., 2018). Regions which fulfill all these constraints are doubly hatched. The single-hatched region in panel (b) shows the region occupied by all \(P(\rho)\) curves whose MMNS points \((\rho_{\mathrm{TOV}},P_{\mathrm{TOV}})\) are anywhere in the allowed (double-hatched) region. This region describes the allowed equations of state under the constraints imposed here. Note that for readability the scale of (b) was chosen such that some of the models are out of the boundaries of the figure.

## 6 Conclusions

We elaborated and improved the method for determining the equation of state of superdense matter from the properties of the MMNS, previously proposed by Ofengeim (2020). The correlations between the quantities \(M_{{}_{\rm TOV}}\), \(R_{{}_{\rm TOV}}\), \(P_{{}_{\rm TOV}}\), \(\rho_{{}_{\rm TOV}}\) and \(c_{{}_{\rm STOV}}\) discovered in that work are confirmed on a much wider set of equations of state, which now includes both baryonic and hybrid models. Using dimensional analysis, we present new approximate formulae (1) that describe these correlations. In addition, it is shown that the \(P(\rho)\) relations for a variety of equation of state models can be described by a single approximation (2), which has only two parameters, namely the pressure \(P_{{}_{\rm TOV}}\) and density \(\rho_{{}_{\rm TOV}}\) at the center of the MMNS. This fit has a low accuracy at small densities \(\rho\lesssim 3\rho_{0}\), but it works well in the more interesting denser region. By applying formulas (1) to a number of theoretical and observational constraints on the characteristics of the MMNS, we significantly limit the possible values of \(M_{{}_{\rm TOV}}\), \(R_{{}_{\rm TOV}}\), \(P_{{}_{\rm TOV}}\) and \(\rho_{{}_{\rm TOV}}\). Using the formula (2), the constraints on the last pair are converted into constraints on the entire equation of state in the region \(\rho\gtrsim 3\rho_{0}\). The nature of the universal properties studied here remains unclear. In the work by Ofengeim (2020), the existence of correlations between the properties of the MMNS was interpreted as indirect confirmation of the approximate two-parametric nature of realistic equations of state identified by Lindblom (2010). The universality of the \(P(\rho)\) curves reported in the present work, at first glance, directly extends Lindblom's findings to almost all existing models and proposes a pair of quantities \(P_{{}_{\rm TOV}}\) and \(\rho_{{}_{\rm TOV}}\) (or, taking into account correlations (1), \(M_{{}_{\rm TOV}}\) and \(R_{{}_{\rm TOV}}\)) as these two parameters. However, Lindblom's two-parametric description had an "anchor" point at the crust-core boundary, i.e. at a density of \(\sim 0.5\rho_{0}\), while the expression (2) works above \(3\rho_{0}\).
Therefore, the parametrization of the equations of state with two parameters used in this work is somewhat different from that discovered by Lindblom (2010). In addition, both the Lindblom parametrization and the parametrizations of the \(P(\rho)\) curves proposed here are completely phenomenological and have no microscopic justification. Finally, Cai et al. (2023) recently proposed an explanation for the correlations between the characteristics of the MMNS, which does not rely at all on the properties of the equation of state. So perhaps the correlations (1) and the approximations (2) have different physical origins. This work can be extended in three natural directions. First, one can extrapolate the approximation of \(P(\rho)\) curves to the region \(\rho<3\rho_{0}\). In order to do so, it probably will be necessary to increase the number of independent parameters in the fitting formula. Second, due to the one-to-one correspondence between the relations \(P(\rho)\) and \(M,R\) (Lindblom, 1992), the latter set of curves should also be describable by a small number of real parameters. Accordingly, one can try to propose a universal approximation for the \(M,R\) curves. Such approximation will be useful in almost all studies related to observations of NS masses and radii. Finally, it is necessary to determine which of the explanations for the correlations of MMNS properties, those of Ofengeim (2020), Cai et al. (2023) or some other is correct. Regardless of these open questions, we expect that the method presented here can be used to infer new limits from observations on the equation of state of matter at very high densities. The work was supported by RSF # 19-12-00133 (PS) and by an Advanced ERC grant MultiJets (DO, TP).
2305.12897
On the edge-Erdős-Pósa property of walls
We show that walls of size at least $6 \times 4$ do not have the edge-Erd\H{o}s-P\'{o}sa property.
Henning Bruhn, Raphael Steck
2023-05-22T10:33:58Z
http://arxiv.org/abs/2305.12897v2
# On the edge-Erdos-Posa property of walls ###### Abstract We show that walls of size at least \(6\times 4\) do not have the edge-Erdos-Posa property. ## 1 Introduction The Erdos-Posa property provides a duality between packing and covering in graphs. We say that a class \(\mathcal{F}\) has the _edge-Erdos-Posa property_ if there exists a function \(f:\mathbb{N}\to\mathbb{R}\) such that for every graph \(G\) and every integer \(k\), there are \(k\) edge-disjoint subgraphs of \(G\) each isomorphic to some graph in \(\mathcal{F}\) or there is an edge set \(X\subseteq E(G)\) of size at most \(f(k)\) meeting all subgraphs of \(G\) isomorphic to some graph in \(\mathcal{F}\). In this article, we focus on graph classes that arise from taking minors. For a graph \(H\), we define the set of _\(H\)-expansions_ as \(\mathcal{F}_{H}=\{G\,|\,H\text{ is a minor of }G\}\). While for all non-planar graphs \(H\), \(\mathcal{F}_{H}\) does not have the edge-Erdos-Posa property (see for example [4]), there are only some very simple planar graphs for which \(\mathcal{F}_{H}\) is known to have the edge-Erdos-Posa property such as long cycles [2] or \(K_{4}\)[1]. For most planar graphs, it is open whether they have the edge-Erdos-Posa property or not. In this article, we show that **Theorem 1**.: _For every wall \(B\) of size at least \(6\times 4\), the class of \(B\)-expansions does not have the edge-Erdos-Posa property._ ### Condensed Wall The main gadget used for proving that walls do not have the edge-Erdos-Posa property is a wall-like structure called _condensed wall_ introduced by Bruhn et. al. [3], see Figure 1. A condensed wall \(W\) of size \(r\in\mathbb{N}\) is the graph consisting of the following: * For every \(j\in[r]\), let \(P^{j}=u^{j}{}_{1},\dots,u^{j}{}_{2r}\) be a path of length \(2r-1\) and for \(j\in\{0\}\cup[r]\), let \(z_{j}\) be a vertex. Moreover, let \(a\), \(b\) be two further vertices. * For every \(i,j\in[r]\), add the edges \(z_{j-1}u^{j}{}_{2i-1},z_{j}u^{j}{}_{2i},z_{i-1}z_{i},au^{j}{}_{1}\) and \(bu^{j}{}_{2r}\). We define \(c=z_{0}\) and \(d=z_{r}\) and refer to \[W_{j}=W[\{u^{j}{}_{1},\dots,u^{j}{}_{2r},z_{j-1},z_{j}\}]\] as the _j-th layer of W_. Note that the layers of \(W\) are precisely the blocks of \(W-\{a,b\}\). We will refer to the vertices \(a,b\) quite often, and whenever we write \(a\) or \(b\) in this article, we refer to those vertices in a condensed wall. The vertices connecting the layers of \(W\) are \(z_{i},i\in\{0\}\cup[r]\), and we will call those _bottleneck vertices_. This includes the vertices \(c\) and \(d\). The edges \(z_{i-1}z_{i},i\in[r]\) are called _jump-edges_. For vertices \(a,b,c,d\), an _(\(a\)-\(b\), \(c\)-\(d\))-linkage_ is the vertex-disjoint union of an \(a\)-\(b\)-path with a \(c\)-\(d\)-path. **Lemma 2** (Bruhn et. al. [3]).: _There are no two edge-disjoint (\(a\)-\(b\), \(c\)-\(d\))-linkages in a condensed wall._ ### Modification of the condensed wall A _modified condensed wall_ is a condensed wall without jump-edges \(z_{i-1}z_{i}\) for all \(i\in[r]\). Throughout, let \(W\) be a condensed wall and let \(W^{-}\) be a modified condensed wall that does not contain jump-edges \(z_{i-1}z_{i}\). ### Definition of Walls For \(m,n\in\mathbb{N}\), an _elementary grid_ of size \(m\times n\) is a graph with vertices \(v_{i,j}\) for all \(i\in[m],j\in[n]\) and edges \(v_{i,j}v_{i+1,j}\;\forall i\in[m-1],j\in[n]\) as well as \(v_{i,j}v_{i,j+1}\;\forall i\in[m],j\in[n-1]\). A _grid_ is a subdivision of an elementary grid. 
Figure 1: A condensed wall of size \(5\).

A wall is the subcubic variant of a grid. We define an _elementary wall_ as an elementary grid with every second vertical edge removed. That is, an elementary wall of size \(m\times n\) is an elementary grid of size \((m+1)\times(2n+2)\) with every edge \(v_{i,2j}v_{i+1,2j}\,,i\in[m],i\text{ is odd},j\in[n+1]\) and every edge \(v_{i,2j-1}v_{i+1,2j-1}\,,i\in[m],i\text{ is even},j\in[n+1]\) being removed. Additionally, we remove all vertices of degree \(1\) and their incident edge. The \(i^{th}\) _row_ of an elementary wall is the induced subgraph on \(v_{i,1},\dots,v_{i,2n+2}\) for \(i\in[m+1]\) (ignore the vertices that have been removed); this is a path. There are exactly \(n+1\) disjoint paths between the first row and the \((m+1)^{\text{th}}\) row. These are the _columns_ of an elementary wall. The _boundary_ of an elementary wall is the union of the first and last row together with the first and last column. The _bricks_ of an elementary wall are its \(6\)-cycles (see Figure 2).

Figure 2: An elementary wall of size \(8\times 8\).

A _wall_ is defined as the subdivision of an elementary wall. The definition of rows, columns, boundary and bricks in an elementary wall carries over to a wall in a natural way (with some truncation of the first and last row and column). For brevity of notation, we define an _\(n\)-wall_ as a wall of size at least \(n\times n\). When analyzing small parts of walls, we will not only count the number of bricks they contain, but also specify how the bricks are connected. To this end, we define \(B_{1},B_{2},\ldots,B_{10}\) to be subdivisions of the graphs shown in Figure 4.

Figure 3: Three bricks that might be part of a wall. Left, you see a plain drawing with bricks in the form of rectangles. Right, you see the same part of a wall but with bricks in the form of honeycombs. In this article, we will draw walls as on the right side to underline symmetry.

## 2 Construction

For every graph \(G\) and every \(r\in\mathbb{N}\), an _\(r\)-fold_ \(G\) is a graph \(G^{\prime}\) where every edge in \(G\) is replaced by \(r\) edge-disjoint paths of length \(2\). Given a size \(f(2)\) of some hypothetical hitting set, we choose \(r\in\mathbb{N}\) with \(r>f(2)\). Let \(B\) be a wall of size at least \(6\times 4\). Note that for every wall, there are exactly two bricks which are adjacent to only two other bricks, while all others are adjacent to at least three bricks. We define the _body_ of \(B\) to be the minimal subgraph of \(B\) that contains all bricks with at least three adjacent bricks in \(B\). In other words, the body of \(B\) contains everything but the two less connected bricks in the corners of \(B\). In the body of \(B\), we pick two edges \(e_{1},e_{2}\) on the outer face of \(B\) such that every \(e_{1}\)-\(e_{2}\) path \(P\) in \(B\) is incident with at least \(7\) bricks of \(B\) aside from the ones containing \(e_{1}\) or \(e_{2}\). This is possible since \(B\) has size at least \(6\times 4\), so there are at least \(16\) bricks adjacent to the outer face of \(B\). Now we define \(G^{*}\) to be the graph consisting of the union of an \(r\)-fold \(B-\{e_{1},e_{2}\}\) and a condensed wall \(W\) of size \(r\) whose terminals \(a,b,c\) and \(d\) are identified with the endvertices of \(e_{1}\) and \(e_{2}\) such that \(a\) and \(b\) are incident with the same edge and every \(ac\)-path in \(G^{*}-(W-\{a,b,c,d\})\) disconnects \(b\) from \(d\) in \(G^{*}-(W-\{a,b,c,d\})\).
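Purely as a reading aid, the following sketch builds the two gadgets used in the construction of \(G^{*}\): the condensed wall from Section 1.1 and the \(r\)-fold operation. It is written in Python with the networkx library; the library choice and the vertex naming scheme are ours and are not part of the paper.

```python
import networkx as nx

def condensed_wall(r, jump_edges=True):
    """Build a condensed wall of size r following the definition in Section 1.1.

    Vertices: ('u', j, k) for j in 1..r, k in 1..2r; ('z', j) for j in 0..r;
    plus 'a' and 'b'.  Here c = ('z', 0) and d = ('z', r).
    Set jump_edges=False to obtain the modified condensed wall W^-.
    """
    G = nx.Graph()
    for j in range(1, r + 1):
        # horizontal path P^j = u^j_1 ... u^j_{2r}
        for k in range(1, 2 * r):
            G.add_edge(('u', j, k), ('u', j, k + 1))
        # a and b are joined to the endpoints of every path
        G.add_edge('a', ('u', j, 1))
        G.add_edge('b', ('u', j, 2 * r))
        # bottleneck vertices of layer W_j
        for i in range(1, r + 1):
            G.add_edge(('z', j - 1), ('u', j, 2 * i - 1))
            G.add_edge(('z', j), ('u', j, 2 * i))
    if jump_edges:
        for i in range(1, r + 1):
            G.add_edge(('z', i - 1), ('z', i))
    return G

def r_fold(G, r):
    """Replace every edge uv of G by r edge-disjoint u-v paths of length 2."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    for u, v in G.edges:
        for t in range(r):
            H.add_edge(u, ('mid', u, v, t))
            H.add_edge(('mid', u, v, t), v)
    return H
```

Under these assumptions, `condensed_wall(5)` should correspond to the graph shown in Figure 1, and applying `r_fold` to a wall \(B\) minus \(e_{1},e_{2}\) gives the first half of the construction of \(G^{*}\).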
## 3 Results

### Strategy

First we check that the construction of \(G^{*}\) does not allow for an edge hitting set of size less than \(r\). **Lemma 3**.: _For every set \(F\subseteq E(G^{*})\) with \(|F|\leq r-1\), \(G^{*}-F\) contains a \(B\)-expansion._ Proof.: By construction, \(G^{*}-W^{0}\) contains an \(r\)-fold \(B-\{e_{1},e_{2}\}\), so \(G^{*}-W^{0}-F\) still contains an embedding of \(B-\{e_{1},e_{2}\}\). Furthermore, \(W-F\) contains an (\(a\)-\(b\), \(c\)-\(d\))-linkage. Together, this yields a \(B\)-expansion in \(G^{*}-F\). This allows us to prove the first half of Theorem 1. Proof.: Let \(B\) be a wall of size \(6\times 4\), and let \(G^{*}\) be as described above. By Lemma 3, there is no edge hitting set of size at most \(r-1\). We thus only have to show that there can be no two edge-disjoint embeddings of \(B\) in \(G^{*}\).

Figure 4: Up to isomorphism, these graphs are defined to be the only elementary \(B_{n}\) for \(n\leq 10\).

Now let \(U\) be an arbitrary embedding of \(B\) in \(G^{*}\). Suppose we were able to show that \(U\) must contain an (\(a\)-\(b\), \(c\)-\(d\))-linkage in \(W\). By Lemma 2 there can be no two edge-disjoint linkages in \(W\), implying there can be no two edge-disjoint embeddings \(U_{1},U_{2}\) of \(B\) in \(G^{*}\). This implies that \(B\) does not have the edge-Erdos-Posa property, which finishes our proof. For the remainder of this chapter, let \(U\) be a fixed embedding of \(B\) in \(G^{*}\). It remains to show that: **Lemma 4**.: \(U\) _contains an (\(a\)-\(b\), \(c\)-\(d\))-linkage in \(W\)._

### Embedding Walls in a Heinlein Wall

To be able to control the embedding of \(U\), we need to make sure that no large part of it can be embedded in \(W\). To this end, we study which small walls are too large to fit in \(W\). For our first lemmas, we will label some parts of a \(B_{3}\) (see Figure 5). Let \(x\) be the unique vertex in \(B_{3}\) whose neighbors (in an elementary wall) have degree \(3\). Let the other vertices with degree three be called \(w,y\) and \(z\). Let the brick that is not incident with \(w\) be called \(C_{1}\). Let \(P_{1},P_{2}\) and \(P_{3}\) be the \(yz\)-, \(zw\)- and \(wy\)-paths not crossing \(x\). **Lemma 5**.: _For every \(B_{3}\) in \(W-\{a,b\}\), \(x\) is a bottleneck vertex._ Proof.: Suppose \(x\) were not a bottleneck vertex. Then \(x\) must be on the row \(R_{i}\) in some layer \(W_{i}\), and thus \(x\) has only degree \(3\) in \(W\). We observe that \(x\) is adjacent to a bottleneck vertex, say \(z_{i}\). Therefore, one of the edges incident with \(x\) is also incident with \(z_{i}\). Therefore, we can assume that one of \(w,y\) and \(z\) (say \(w\)) is identical to \(z_{i}\) or \(z_{i}\) lies on the \(wx\)-path not crossing \(y\) or \(z\). In any case, \(z_{i}\) is not part of the brick \(C_{1}\). As \(C_{1}\) contains \(x\) and is a cycle, it must also contain \(z_{i-1}\). Additionally, \(y\) and \(z\) lie on \(C_{1}\) but cannot both be incident with \(z_{i}\) as otherwise there would be only one instead of three (or more) vertices between them. Say \(y\) is not incident with \(z_{i}\). Then both neighbors of \(y\) on \(R_{i}\) lie on \(C_{1}\). But \(y\) also has a connection \(P_{3}\) of length at least three to \(w\) that is internally disjoint from \(C_{1}\). However, \(P_{3}\) may only contain the edge \(yz_{i}\), which is too short.
We could directly apply Lemma 5 to see that a 5-brick wall \(B_{5}\) cannot be embedded in \(W-\{a,b\}\), because it would contain three different vertices that are a central vertex \(x\) in a \(B_{3}\), but there are only two bottleneck vertices in a layer of \(W\). We will later see that we can even exclude a 4-brick wall \(B_{4}\), but for this we will need to see how a 3-brick wall \(B_{3}\) can be embedded in \(W-\{a,b\}\).

Figure 5: A \(B_{3}\) with labels.

**Lemma 6**.: _For all integers \(r\) and \(n\), a condensed wall without jump-edges \(W^{-}\) of size \(r+n\) contains \(n\) edge-disjoint embeddings of \(B_{3}\) in \(W^{-}-\{a,b\}\), even with \(r\) edges being deleted._ Proof.: Obviously, there still exist \(n\) complete layers of \(W^{-}\). In each, we will place one \(B_{3}\) as in Figure 6, so they are all edge-disjoint because the layers of \(W^{-}\) are. **Lemma 7**.: _There exists no \(B_{4}\) in \(W-\{a,b\}\)._ Proof.: First, we focus on a fixed \(B_{3}\) which is a subgraph of every \(B_{4}\). By Lemma 5, we know that \(x\) must be a bottleneck vertex, say \(z_{i-1}\). As a wall is \(2\)-connected, we conclude that the rest of \(B_{3}\) must be in the same layer \(W_{i}\) of \(W\) as \(x\). Where are \(w,y\) and \(z\)? \(w,y\) and \(z\) could all lie on the row \(R_{i}\) of \(W_{i}\). The outer two (say \(w\) and \(y\)) must then be connected via the path \(P_{3}\) that uses \(z_{i}\). This implies that \(P_{1}\) and \(P_{2}\) must be entirely contained in \(R_{i}\). Now there is no connection between any two of \(P_{1},P_{2}\) and \(P_{3}\) in \(W_{i}-B_{3}\), which implies there can be no \(B_{4}\) in \(W-\{a,b\}\) which contains a \(B_{3}\) that is embedded as above. The only alternative for a \(B_{3}\) would be to use \(z_{i}\) for one of \(w,y\) and \(z\), say \(w\). But then \(z\) and \(y\) are still on \(R_{i}\). As both bottleneck vertices are blocked, \(P_{1}\) must be entirely contained in \(R_{i}\), too. Now \(y\) and \(z\) separate \(P_{1},P_{2}\) and \(P_{3}\) in \(R_{i}\), so there cannot be any path connecting any two of them in \(W-\{a,b\}\) without using \(z_{i-1}\) or \(z_{i}\), which are blocked by \(x\) and \(w\). Again, we conclude that there cannot be a \(B_{4}\). **Lemma 8**.: _There exists no \(B_{4}\) in \(W^{-}\) that contains a \(B_{3}\) in \(W-\{a,b\}\). The same holds true for a \(B_{5}\) in \(W\)._ Proof.: In the proof of Lemma 7, we have seen that there are only two possibilities to embed a \(B_{3}\) in \(W-\{a,b\}\). Let us start with the first one, i.e. \(w,y\) and \(z\) are all on the same row \(R_{i}\), and \(P_{3}\) connects \(w\) and \(y\) via \(z_{i}\). Then there is no path between \(P_{1}\) or \(P_{2}\) and a vertex outside of \(R_{i}\) that does not cross \(B_{3}\), which implies \(W\) cannot contain a \(B_{4}\). So let us assume \(w\) is identical to \(z_{i}\) instead. In \(W^{-}-\{a,b\}\), the subdivision of edge \(1\) crosses \(R_{i}\), so both \(y\) and \(z\) must be on the same side of it. But then \(B_{3}\) can have only one path connecting it to a vertex outside of \(W_{i}\) via either \(u_{1}^{i}\) or \(u_{2r}^{i}\), but two such connections would be needed to form an additional cycle and thus a \(B_{4}\). In \(W\) (instead of \(W^{-}\)), there can be both connections, but this only enables a \(B_{4}\) and not a \(B_{5}\).

Figure 6: A \(B_{3}\) in a layer of a modified condensed wall \(W^{-}\) without jump-edges.

The last lemma shows that no large wall can contain a \(B_{3}\) in \(W-\{a,b\}\).
Next, we will see what happens for a \(B_{2}\). **Lemma 9**.: _Every \(B_{2}\) in \(W-\{a,b\}\) that is part of a \(B_{3}\) in \(W\) contains a bottleneck vertex as a vertex of degree \(3\)._ Proof.: As \(B_{2}\) is \(2\)-connected, \(B_{2}\) must be contained in a single layer \(W_{i}\) of \(W\). We start by labeling our \(B_{2}\) as in Figure 7. Let \(C_{1}\) and \(C_{2}\) be the two bricks of \(B_{2}\), and let \(x\) and \(y\) be the vertices of degree \(3\). Let \(Q\) be the \(xy\)-path belonging to both \(C_{1}\) and \(C_{2}\), and let \(P_{1}\) be the \(xy\)-path which is unique to \(C_{1}\), while \(P_{2}\) is the \(xy\)-path that lies only in \(C_{2}\). Finally, let \(P_{3}\) be the path in \(B_{3}-B_{2}\) connecting \(P_{1}\) and \(P_{2}\). Suppose \(x\) and \(y\) would both not be a bottleneck vertex. Then \(x\) and \(y\) lie both on the row \(R_{i}\). Additionally, by Lemma 5, the \(B_{3}\) cannot be entirely contained in \(W-\{a,b\}\). **Case 1: \(Q\) contains no bottleneck vertex** Suppose \(Q\) would not contain a bottleneck vertex. Then \(Q\) is entirely contained in \(R_{i}\). As \(x\) and \(y\) have degree \(3\) in \(B_{2}\), as is the maximum degree in \(R_{i}\), every edge incident with \(x\) or \(y\) in \(W\) must also be used for \(B_{2}\). In particular, the paths \(P_{1}\) and \(P_{2}\) use the edges connecting \(x\) and \(y\) with \(z_{i-1}\) and \(z_{i}\). From those bottleneck vertices, \(P_{1}\) and \(P_{2}\) must use an edge to \(R_{i}\), and from there on they can only contain a path connecting them to \(x\) or \(y\) on \(R_{i}\). Now the only possibilities to connect \(P_{1}\) or \(P_{2}\) to a vertex outside of \(W_{i}\) would be to use the bottleneck vertices, which are the first vertex on \(P_{1}\) after \(x\) and \(y\), or the very next vertex which they hit on the row, which comes second on \(P_{1}\) and \(P_{2}\). However, none of \(P_{1}\) and \(P_{2}\) can have a connection via its third (or a later) vertex. Note that we counted the vertices for both \(P_{1}\) and \(P_{2}\) in the same order (e. g. clockwise). When looking at how a \(B_{2}\) can be extended to a \(B_{3}\), we notice that the first branch vertex of \(P_{1}\) must be connected with the third branch vertex of \(P_{2}\), or vice versa. This is impossible as we observed above, a contradiction. **Case 2: \(Q\) contains a bottleneck vertex** Suppose \(Q\) would contain at least one bottleneck vertex, say \(z_{i}\). As we required \(x\) and \(y\) not to be a bottleneck vertex, we conclude that \(z_{i}\) is an interior vertex of \(Q\). Now \(P_{1}\) and \(P_{2}\) must connect \(x\) and \(y\) in \(W_{i}\) without using \(z_{i}\). Therefore, one of them (say \(P_{1}\)) must be entirely contained in \(R_{i}\). Furthermore, \(P_{2}\) and \(Q\) separate \(P_{1}\) from \(W-W_{i}\). This implies \(B_{2}\) cannot be extended to a \(B_{3}\), a contradiction. Figure 7: A \(B_{2}\) with labels. The grey path \(P_{3}\) belongs to the \(B_{3}\). **Lemma 10**.: _Let \(G\) be a subcubic subgraph of a condensed wall \(W\) that contains a \(B_{2}\) in \(W-\{a,b\}\) (which implies \(B_{2}\) is contained in a single layer \(W_{i}\) of \(W\)). Then there are at most \(3\) disjoint paths in \(G\) connecting \(B_{2}\) to \(W-W_{i}\), and at most one of them may use a bottleneck vertex of \(W_{i}\)._ Proof.: By Lemma 9, we know that \(B_{2}\) must already contain one of the bottleneck vertex as a vertex of degree \(3\). As \(G\) is subcubic, this vertex cannot be used for a connection of \(B_{2}\) to \(W-W_{i}\). 
This leaves only one bottleneck vertex and \(a\) and \(b\) for such connections. **Definition 11**.: _A \(B_{1}^{2}\) is the union of two disjoint \(B_{1}\) (\(H_{1}\), \(H_{2}\)), and two disjoint paths (\(P_{1},P_{2}\)) each connecting \(H_{1}\) with \(H_{2}\), i.e. \(B_{1}^{2}:=H_{1}\cup H_{2}\cup P_{1}\cup P_{2}\)._ **Lemma 12**.: _In a \(B_{7}\), let \(Q\) be a disjoint (and possibly empty) union of paths \(P\) in \(B_{7}\), such that all interior vertices of \(P\) have degree at most \(2\) in \(B_{7}\), and let \(P\) have only bottleneck vertices and at least one of \(a\) and \(b\) as endvertices. Then there is no embedding of \(B_{7}-Q\) in a condensed wall \(W\)._ Proof.: Suppose \(B_{7}-Q\) would be in \(W\). First, we notice that every \(B_{2}\) in \(B_{7}-Q\) has at least four disjoint connections to \(B_{7}-B_{2}\). Therefore, we conclude with Lemma 10 that no \(B_{2}\) can be contained in \(W-\{a,b\}\). Next, we observe that each of \(a\) and \(b\) can be only situated on at most two of the six outer bricks of \(B_{7}\). This implies \(B_{7}-\{a,b\}-Q\) contains at least two disjoint \(B_{1}\), call them \(H_{1}\) and \(H_{2}\). Additionally, we can conclude that \(H_{1}\) and \(H_{2}\) were on opposite sides of \(B_{7}\), otherwise we would get a \(B_{2}\) in \(W-\{a,b\}\). Furthermore, \(H_{1}\) and \(H_{2}\) were connected by four disjoint paths in \(B_{7}\), meaning there are still at least two disjoint paths \(P_{1}\), \(P_{2}\) connecting \(H_{1}\) and \(H_{2}\) in \(B_{7}-\{a,b\}-Q\). (Note that all paths in \(Q\) begin or end in either \(a\) or \(b\).) Now \(U=H_{1}\cup H_{2}\cup P_{1}\cup P_{2}\) is a \(B_{1}^{2}\) in \(W-\{a,b\}\). First, we notice that \(U\) is \(2\)-connected, which implies that \(U\) is contained in a single layer \(W_{i}\) of \(W\). Next, we observe that there are \(6\) connections of \(U\) to \(a\) and \(b\) in \(B_{7}\). Therefore, \(U\) must have \(6\) connecting vertices (i.e. bottleneck vertices or vertices adjacent to \(a\) or \(b\)). However, there can be at most \(4\) connecting vertices, a contradiction. **Lemma 13**.: _For every \(n,r\in\mathbb{N}\), in every condensed wall without jump-edges \(W^{-}\) of size \(3\cdot(n+r)\), there are \(n\) edge-disjoint \(B_{6}\) in \(W^{-}\), even with \(r\) edges of \(W^{-}\) being deleted._ Proof.: Even with \(r\) edges of \(W^{-}\) being deleted, there are still at least \(n\) untouched edge-disjoint chunks of \(W^{-}\) that each contain three consecutive layers of \(W^{-}\) and which all have an edge to both \(a\) and \(b\). In each of them, we can find a \(B_{6}\) as in Figure 9. **Lemma 14**.: _For every \(n,r\in\mathbb{N}\), let \(G\) be the graph consisting of a condensed wall without jump-edges \(W^{-}\) of size \(5\cdot(n+r)\) and \(n+r\) edge-disjoint paths \(P\) (all internally disjoint from \(W\)) connecting \(c\) with \(d\). Then there are \(n\) edge-disjoint \(B_{7}\) in \(G\), even with \(r\) edges of \(G\) being deleted._ Proof.: Even with \(r\) edges of \(W^{-}\) being deleted, there are still at least \(n\) untouched edge-disjoint chunks of \(W^{-}\) that each contain five consecutive layers of \(W^{-}\) and which all have an edge to both \(a\) and \(b\). In each of them, we can find a \(B_{7}^{-}\) (that only misses one \(z_{i}\)-\(z_{j+1}\)-path) as in Figures 10 and 11. Clearly, we can find edge-disjoint paths from each chunk to \(c\) and \(d\) in \(W-\{a,b\}\), which together with a path \(P\) outside of \(W\) yields the desired \(z_{i}\)-\(z_{j+1}\)-path. 
Together, they form \(n\) edge-disjoint \(B_{7}\). **Proposition 15**.: _A condensed wall \(W\) (or its modified counterpart \(W^{-}\)) with some connections of its terminals cannot serve as a counterexample to prove that a \(B_{7}\) does not have the edge-Erdos-Posa property._ Proof.: Let \(G\) be our counterexample graph, i.e. a condensed wall \(W\) (or its modified counterpart \(W^{-}\)) with some connections of its terminals. Let \(U\) be a \(B_{7}\) in \(G\). Let \(U_{out}\) be the subgraph of \(U\) in \(G-(W-\{a,b\})\), while \(U_{in}\) is the subgraph of \(U\) in \(W\). In Lemma 12, we have seen that a \(B_{7}\) cannot be found in any condensed wall \(W\). As a \(B_{7}\) is 2-connected, we thus need at least two terminals of \(W\) to be connected outside of \(W\). However, Lemma 14 shows that \(U_{out}\) is connected.

Figure 8: Left: A \(B_{6}\) as we would like to embed it in a condensed wall, with positions of \(a\) and \(b\). Right: An isomorphic graph that reflects our actual embedding of \(B_{6}\) in \(W^{-}\) as in Figure 9.

Figure 9: A \(B_{6}\) in three layers of a modified condensed wall \(W^{-}\) (i.e. without jump-edges) of size 6. For more details on the embedding, see Figure 8.

If \(U_{out}\) has only \(2\) connections to \(W\), then \(U_{out}\) only contains a single path with at least one of \(a\) and \(b\) as an endvertex. We can again conclude with Lemma 12 that we cannot find a \(B_{7}\) in \(G\). It remains to check what happens if \(U_{out}\) uses three connections, namely \(a,b\) and exactly one of \(c\) and \(d\). Then \(U_{out}\) contains a single vertex \(v\) of degree \(3\) in \(B_{7}\) and possibly some paths incident with it. This implies that there is no vertex of degree \(3\) in \(B_{7}\) between \(a,b\) and \(v\). Therefore, \(U_{out}\) is only incident with at most \(4\) of the outer bricks of \(B_{7}\), and those are adjacent to each other. This implies that there is a \(B_{2}\) in \(W-\{a,b\}\), a contradiction to Lemma 10.

### Main Proof

Let us now come back to proving our main lemma. **Lemma 4**.: \(U\) _contains an (\(a\)-\(b\), \(c\)-\(d\))-linkage in \(W\)._ Proof.: \(U\) cannot be entirely contained in \(W\) due to Lemma 12. Clearly, it cannot be entirely contained in \(G^{*}-(W-\{a,b,c,d\})\) (the graph which contains all edges of \(G^{*}\) that are not in \(W\)) either, as there are two edges missing there. Therefore, \(U\) must have edges in both \(W\) and \(G^{*}-(W-\{a,b,c,d\})\). This motivates the following definitions: Let \(U_{in}\) be the subgraph of \(U\) in \(W\), and let \(U_{out}\) be the subgraph of \(U\) in \(G^{*}-(W-\{a,b,c,d\})\). Note that \(U_{in}\) and \(U_{out}\) are edge-disjoint, \(U_{in}+U_{out}=U\) and the only vertices that can be shared by \(U_{in}\) and \(U_{out}\) are \(a,b,c\) and \(d\). Therefore, we define a _connecting vertex_ as a vertex of \(U\) that is incident with edges of both \(U_{in}\) and \(U_{out}\). **Case 1**: Every component of \(U_{in}\) contains at most three connecting vertices. As \(U\) is \(2\)-connected, every component of \(U_{in}\) must contain at least two connecting vertices. Of the body of \(U\), one of \(U_{in}\) or \(U_{out}\) contains all but at most one vertex of degree \(3\) (the _large component_), while the other may only contain either one or two subgraphs of maximum degree \(2\) (i.e. paths) or a single subgraph with one single vertex of degree \(3\) and possibly some incident paths. Suppose the large component is in \(W\).
Since \(B\) is a wall of size at least \(6\times 4\), the body of \(B\) contains a \(B_{10}\). (Actually, it is considerably larger, but a \(B_{10}\) will suffice to obtain a contradiction in this case.) If \(U_{out}\) only contains paths, we know by Lemma 12 that if every one of those paths has \(a\) or \(b\) as one endvertex, we would not be able to embed a \(B_{7}\) in \(W\cup U_{out}\), so there cannot be a \(B_{10}\) or the entire body of \(U\) in \(G^{*}\). Therefore, we conclude that \(U_{out}\) must either contain a vertex of degree 3 or a \(cd\)-path (and possibly also an \(ab\)-path).

Figure 10: A \(B_{7}\) as we embed it in Figure 11.

Next, we note that the eight outer bricks of the \(B_{10}\) are arranged in a cycle, which we call _grand cycle_. This grand cycle contains two disjoint cycles that are incident with all eight bricks of the grand cycle, which we call _large cycles_. \(U_{in}\) still contains most of this grand cycle, except for one or two small subgraphs as described above. Each vertex can be part of at most two bricks in the grand cycle. Additionally, it can be part of at most one of the large cycles. If there is a vertex \(v\) of degree 3 or a \(cd\)-path \(P\) in \(U_{out}\), \(W-\{a,b\}\) still contains at least all but three vertices of degree 3 in the grand cycle and all the paths between them that are not incident with a missing vertex. This implies \(W-\{a,b\}\) contains at least two bricks. If those are adjacent, there is a \(B_{2}\) in \(W-\{a,b\}\), a contradiction to Lemma 10. If no two bricks in \(W-\{a,b\}\) are adjacent, we have a look at how many bricks in \(W-\{a,b\}\) are left. If there are exactly two, each of \(a,b\) and \(v\) (or \(P\)) must be incident with two bricks, and those bricks are pairwise different for each vertex. We conclude that we can still find a cycle in \(W-\{a,b\}\) that is incident with all bricks of the grand cycle. Together with the two bricks in \(W-\{a,b\}\), this yields a \(B_{1}^{2}\) in \(W-\{a,b\}\). As a \(B_{1}^{2}\) is 2-connected, it is contained in a single layer of \(W\). However, it has two disjoint connections to each of \(a,b\) and \(v\) (or \(P\)) in \(U\), which means there must be six pairwise disjoint paths connecting \(B_{1}^{2}\) with vertices outside of its layer, which is clearly not possible.

Figure 11: A \(B_{7}\) of which all but one edge fit into a condensed wall of size 5. A schematic drawing of what is embedded here can be found in Figure 10.

What happens if there are more than two bricks left in \(W-\{a,b\}\)? Then, as no two of them are adjacent, there are exactly three bricks in \(W-\{a,b\}\) and exactly one of \(a,b\) and \(v\) (or \(P\)) lies between every two of them. Clearly, we can again find a \(B_{1}^{2}\) in \(W-\{a,b\}\) and arrive at a contradiction as we did above. It remains to check what happens if the large component \(C\) is contained in \(U_{out}\). We claim that if there is an embedding of \(C\) in \(G^{*}-(W-\{a,b,c,d\})\), then there is also an embedding of \(C\) in the body of \(G^{*}-(W-\{a,b,c,d\})\). We observe that no brick of \(C\) can be entirely contained in an arm of \(G^{*}-(W-\{a,b,c,d\})\) by definition of the body. If there is a brick \(B_{1}\) of \(C\) that is only partly contained in an arm of \(G^{*}-(W-\{a,b,c,d\})\), then the other part of \(B_{1}\) is contained in the body of \(G^{*}-(W-\{a,b,c,d\})\). In particular, there is a path \(P\) (the border between body and arm) such that \(B_{1}\cup P\) is a \(B_{2}\).
But of this \(B_{2}\) exactly one brick is entirely contained in the arm while the other is entirely contained in the body of \(G^{*}-(W-\{a,b,c,d\})\). Replacing \(B_{1}\) with the latter and doing the same for all bricks of \(C\) that are not entirely contained in \(G^{*}-(W-\{a,b,c,d\})\) yields an embedding of \(C\) in \(G^{*}-(W-\{a,b,c,d\})\). The body of \(G^{*}-(W-\{a,b,c,d\})\) is missing two edges (\(e_{1}\) and \(e_{2}\)), so the body of \(U\) cannot be entirely contained there. To see that connecting \(a\) or \(b\) with \(c\) or \(d\) would not help to embed the body of \(U\), we count bricks. \(G^{*}\) misses \(e_{1},e_{2}\), so it contains two bricks less than \(B\). A cycle that uses exactly one new \(\{a,b\}\)-\(\{c,d\}\) path would be incident with at least seven bricks, meaning the cycle cannot constitute a new brick since every brick can only be incident with at most 6 other bricks. Now let \(K\) be a cycle that uses both new \(\{a,b\}\)-\(\{c,d\}\) paths and is incident with at most 6 bricks. Then it is incident with all three bricks that are adjacent to the brick containing \(e_{1}\), and also incident with the three bricks adjacent to the brick containing \(e_{2}\). But those two groups of three bricks each are not adjacent to each other, which is impossible for the neighbourhood of a brick. We conclude that \(K\) may not constitute a new brick, either. We conclude that connecting \(a\) or \(b\) with \(c\) or \(d\) does not allow for an embedding of the body of \(U\). Therefore, the only possibility is to add an \(a\)-\(b\) path and a \(c\)-\(d\) path in \(W\). Those paths must be disjoint, as otherwise the two bricks repaired by using them would be adjacent, which is impossible since their neighbourhood is not. Thus \(U\) contains an (\(a\)-\(b\), \(c\)-\(d\))-linkage, which was what we wanted. **Case 2**: There are exactly four connecting vertices in one component of \(U_{in}\). By definition, there can be at most four connecting vertices in total, so this case covers everything not handled in Case 1. In particular, both \(U_{in}\) and \(U_{out}\) are connected. If \(U_{in}\) does not contain a single brick, it contains no cycle and is therefore a tree with exactly four leaves and exactly two vertices of degree 3. If \(U_{in}\) contains an (\(a\)-\(b\), \(c\)-\(d\))-linkage, there is nothing left to show. Otherwise, it looks like the red part in Figure 12. To show that \(G^{*}-W^{0}+U_{in}\) does not contain an embedding of \(B\) in this case (see Figure 12), we count the bricks in the body of \(B\) again. Since \(G^{*}\) is missing \(e_{1}\) and \(e_{2}\), its body is missing two bricks. We have seen before that no cycle that contains exactly one \(\{a,b\}\)-\(\{c,d\}\) path may constitute a new brick since it is incident with at least seven other bricks. \(G^{*}-W^{0}+U_{in}\) may contain exactly one new brick that contains an \(a\)-\(b\) path or a \(c\)-\(d\) path in \(U_{in}\). However, if there were two such bricks, they would be adjacent to each other, which is impossible since their other neighbours are those bricks that were adjacent to the bricks containing \(e_{1}\) and \(e_{2}\), respectively. This would imply that each of the two new bricks' neighbourhood would be disconnected, which is impossible. We conclude that the body of \(G^{*}-W^{0}+U_{in}\) contains one brick less than the body of \(B\), a contradiction. As \(U_{in}\) contains at least one brick, there is a cycle \(C\) in \(U_{in}\) that contains all bricks of \(U_{in}\) on its inner face.
In particular, the four paths connecting \(a,b,c\) and \(d\) all end in \(C\) in pairwise different vertices \(v_{a},v_{b},v_{c}\) and \(v_{d}\). If there is a \(v_{a}v_{b}\)-path in \(C\) disjoint from \(v_{c}\) and \(v_{d}\), then there is also a (disjoint) \(v_{c}v_{d}\)-path in \(C\) disjoint from \(v_{a}\) and \(v_{b}\). Together, they yield an (\(a\)-\(b\), \(c\)-\(d\))-linkage in \(W\). We may therefore assume that every \(v_{a}v_{b}\)-path on \(C\) is incident with \(v_{c}\) or \(v_{d}\). This implies that out of the four paths connecting \(U_{in}\) and \(U_{out}\), the two containing \(a\) and \(b\) do not lie next to each other, but instead alternate with the ones containing \(c\) and \(d\). Now we can study \(U_{out}\). As \(U_{in}\) cannot contain a \(B_{7}\), there are some bricks left in \(U_{out}\) and four paths connecting it to \(U_{in}\). The border to the outer face of the bricks in \(U_{out}\) forms a cycle \(C\) that is incident with all paths that connect \(U_{in}\) with \(U_{out}\). Therefore, \(C\) must be incident with \(a,b,c\) and \(d\) in \(U_{out}\). We have seen that the connecting paths containing \(a\) and \(b\) alternate with those containing \(c\) and \(d\). Therefore, there must be an \(ac\)-path and a disjoint \(bd\)-path in \(U_{out}\). This is impossible by construction.

Figure 12: \(G^{*}-W^{0}+U_{in}\) when \(U_{in}\) contains neither a brick nor a linkage.

## 4 Discussion

In Theorem 1, we have seen that the expansions of every wall of size at least \(6\times 4\) do not have the edge-Erdos-Posa property. Is \(6\times 4\) optimal? When studying the proof of Lemma 4, one will notice that a size of \(6\times 4\) was only needed to get an easier argument for why adding an \(\{a,b\}\)-\(\{c,d\}\) path (that is, a path connecting vertices that are far away from each other) does not help to create new bricks. However, I believe that with a more careful argument (for example counting the neighbours of every brick), the same result can be obtained for smaller walls such as a \(B_{10}\). In particular, you may have noticed that those parts of the proof of Lemma 4 that referred to embedding a large part of a wall into the Heinlein Wall already made do with a \(B_{10}\). Can we do even better? In Proposition 15, we showed that a condensed wall cannot be used to prove that a \(B_{7}\) does not have the edge-Erdos-Posa property, even though it is not a minor of it as seen in Lemma 12. What about a \(B_{8}\) or \(B_{9}\)? I believe that a Heinlein Wall cannot be used to prove that they do not have the edge-Erdos-Posa property, either. I did not prove that, but I found evidence in the form of constructions that place most bricks of a \(B_{8}\) or \(B_{9}\) in a (modified) Heinlein Wall \(W^{-}\) while needing only one or two bricks outside of it with three or four connections to \(W^{-}\). **Lemma 16**.: _For every \(n,r\in\mathbb{N}\), let \(G\) be the graph consisting of a Heinlein Wall without jump-edges \(W^{-}\) of size \(3\cdot(n+r)\) and an \(n+r\)-fold brick \(B_{1}\) with three adjacent branch vertices being connected to \(d,b\) and \(c\) (in this order when counting clockwise) by \(n+r\)-fold paths (all internally disjoint from \(W\), \(B_{1}\) and each other).
Then there are \(n\) edge-disjoint \(B_{8}\) in \(G\), even with \(r\) edges of \(G\) being deleted._ Proof.: Even with \(r\) edges of \(G\) being deleted, there are still at least \(n\) untouched edge-disjoint chunks of \(W^{-}\) that each contain three consecutive layers of \(W^{-}\) and which all have an edge to both \(a\) and \(b\). In each of them, we can find a \(B_{8}^{-}\) (that only misses one brick and three connecting paths) as in Figures 13 and 14. Clearly, we can find edge-disjoint paths from each chunk to \(c\) and \(d\) in \(W^{-}-\{a,b\}\), which together with a brick and three connections outside of \(W^{-}\) yield the desired \(B_{8}\). Together, they form \(n\) edge-disjoint \(B_{8}\). **Lemma 17**.: _For every \(n,r\in\mathbb{N}\), let \(G\) be the graph consisting of a Heinlein Wall without jump-edges \(W^{-}\) of size \(3\cdot(n+r)\) and a \(n+r\)-fold \(B_{2}\) with four branch vertices being connected to \(a,b,c\) and \(d\) by \(n-r\)-fold paths (all internally disjoint from \(W\), \(B_{2}\) and each other and such that a \(B_{9}\) can be formed, see Figure 16). Then there are \(n\) edge-disjoint \(B_{9}\) in \(G\), even with \(r\) edges of \(G\) being deleted._ Proof.: Even with \(r\) edges of \(G\) being deleted, there are still at least \(n\) untouched edge-disjoint chunks of \(W^{-}\) that each contain three consecutive layers of \(W^{-}\) and which all have an edge to both \(a\) and \(b\). In each of them, we can find a \(B_{9}^{-}\) (that only misses two bricks and four connecting paths) as in Figures 15 and 16. Clearly, we can find edge-disjoint paths from each chunk to \(c\) and \(d\) in \(W^{-}-\{a,b\}\), which together with two bricks and four connections outside of \(W^{-}\) yield the desired \(B_{9}\). Together, they form \(n\) edge-disjoint \(B_{9}\) Figure 13: A \(B_{8}\) as we embed it in Figure 14.
2304.14117
A sensemaking system for grouping and suggesting stories from multiple affective viewpoints in museums
This article presents an affective-based sensemaking system for grouping and suggesting stories created by the users about the items of a museum. By relying on the TCL commonsense reasoning framework1, the system exploits the spatial structure of the Plutchik's wheel of emotions to organize the stories according to their extracted emotions. The process of emotion extraction, reasoning and suggestion is triggered by an app, called GAMGame, and integrated with the sensemaking engine. Following the framework of Citizen Curation, the system allows classifying and suggesting stories encompassing cultural items able to evoke not only the very same emotions of already experienced or preferred museum objects, but also novel items sharing different emotional stances and, therefore, able to break the filter bubble effect and open the users' view towards more inclusive and empathy-based interpretations of cultural content. The system has been designed and tested, in the context of the H2020 EU SPICE project (Social cohesion, Participation, and Inclusion through Cultural Engagement), in cooperation with the community of the d/Deaf and on the collection of the Gallery of Modern Art (GAM) in Turin. We describe the user-centered design process of the web app and of its components and we report the results concerning the effectiveness of the diversity-seeking, affective-driven recommendations of stories.
Antonio Lieto, Manuel Striani, Cristina Gena, Enrico Dolza, Anna Maria Marras, Gian Luca Pozzato, Rossana Damiano
2023-04-27T11:59:13Z
http://arxiv.org/abs/2304.14117v1
A sensemaking system for grouping and suggesting stories from multiple affective viewpoints in museums ###### Abstract This article presents an affective-based sensemaking system for grouping and suggesting stories created by the users about the items of a museum. By relying on the \(\mathbf{T}^{\rm CL}\) commonsense reasoning framework1, the system exploits the spatial structure of the Plutchik's 'wheel of emotions' to organize the stories according to their extracted emotions. The process of emotion extraction, reasoning and suggestion is triggered by an app, called GAMGame, and integrated with the sensemaking engine. Following the framework of Citizen Curation, the system allows classifying and suggesting stories encompassing cultural items able to evoke not only the very same emotions of already experienced or preferred museum objects, but also novel items sharing different emotional stances and, therefore, able to break the filter bubble effect and open the users' view towards more inclusive and empathy-based interpretations of cultural content. The system has been designed and tested, in the context of the H2020 EU SPICE project (Social cohesion, Participation, and Inclusion through Cultural Engagement)2, in cooperation with the community of the d/Deaf and on the collection of the Gallery of Modern Art (GAM)3 in Turin. We describe the user-centered design process of the web app and of its components and we report the results concerning the effectiveness of the diversity-seeking, affective-driven recommendations of stories. Footnote 1: \(\mathbf{T}^{\rm CL}\) is an acronym for Typicality-based Compositional Logic: the reasoning framework driving the behavior of the sensemaking system. The framework is described in Section 4.1 Footnote 2: [https://spice-h2020.eu/](https://spice-h2020.eu/) Footnote 3: [https://www.gamtorino.it/en](https://www.gamtorino.it/en) Story-Based Recommendations; Diversity-seeking emotional recommendations; Commonsense Reasoning; Affective Computing;

## 1 Introduction

In the last two decades, the awareness of the role of cultural heritage in promoting and enforcing social inclusion has progressively grown, as clearly witnessed by the statements made by institutional actors in various public settings and documents. This trend, which started with the FARO Convention (Council of Europe, 2005)4, culminated with the new museum definition5 released by the Extraordinary General Assembly of the International Council of Museums (ICOM) on 24 August 2022. Signed in 2005, the FARO convention sees cultural heritage institutions as drivers of reflection and inclusion in society, putting "people's values, aspirations and needs first" and celebrating "the diversity and plurality of their views and values" (Fairclough, Dragicevic-Sesic, Rogac-Mijatovic, Auclair, & Soini, 2014). The new definition of museums by ICOM puts a strong emphasis on inclusion, diversity and sustainability ("Open to the public, accessible and inclusive, museums foster diversity and sustainability"), stressing at the same time the involvement of communities in various activity types, which include education and reflection ("they operate [...] with the participation of communities, offering varied experiences for education, enjoyment, reflection and knowledge sharing").
Footnote 4: [http://conventions.coe.int/Treaty/EN/Treaties/Html/199.htm](http://conventions.coe.int/Treaty/EN/Treaties/Html/199.htm) Footnote 5: [https://icom.museum/en/resources/standards-guidelines/museum-definition/](https://icom.museum/en/resources/standards-guidelines/museum-definition/) In parallel with the revision of the role of heritage mentioned above, the advent of new technological paradigms, such as mobile technologies, has deeply affected communication and dissemination of cultural heritage (Marras, Messina, Mureddu, & Romoli, 2016). Technologies can contribute to overcoming the physical barriers interposed between collections and visitors; in addition, today's lower costs of technologies work in favour of their adoption in museums. The COVID-19 pandemic, then, has pushed forward the adoption of technologies to reach an increasing number of audiences with the help of virtual environments. In the face of these opportunities, changes in audience involvement cannot be limited to the adoption of technologies: on the contrary, technologies call for novel ways to involve and engage visitors and for inclusive design solutions that actually leverage the potential of cultural participation to tackle exclusion (Campagnaro & Porcellana, 2016; Giglitto, Ciolfi, Lockley, & Kaldeli, 2023). In this paper, we describe a case study that aims at supporting reflection on cultural items through an affective lens, leveraging diversity-seeking recommendations of collections of user-generated artwork interpretations. The case study revolves around an app, called GAMGame, for creating and sharing stories about museum collections. The stories created by the museum visitors are classified from an emotional perspective, and this information is used to generate diversity-seeking recommendations, aimed at breaking the well-known filter bubble effect. By relying on the DEGARI \(2.0^{6}\) affective-based reasoner (Lieto, Pozzato, Striani, Zoia, & Damiano, 2022), the GAMGame recommends to museum visitors stories encompassing cultural items able to evoke not only the same emotions as the already experienced (or preferred) objects, but also novel items triggering similar or opposite emotions. The ultimate goal of this approach is to open the users' perspective towards a more empathetic and inclusive approach to others' perspectives in cultural heritage. Developed in cooperation with the Turin Gallery of Modern Art (Galleria d'Arte Moderna, GAM) within the EU H2020 SPICE Project, the GAMGame is targeted at the inclusion of the community of the d/Deaf. According to the World Health Organization (WHO)7, deafness will become a major emergency in the next decades: the WHO report points out that 432 million adults currently experience a form of disabling hearing loss, and these impairments are expected to involve nearly 2.5 billion people by 2050. The project puts museums at the center of social inclusion processes by leveraging the novel paradigm of Citizen Curation (Bruni et al., 2020; Daga et al., 2022). Citizen Curation reverses the traditional paradigm of curation, where art interpretation is exclusively entrusted to curators, art historians and critics: citizens are put at the center of the interpretation process, thanks to curation methods which prompt personal responses to art, and promote their sharing across people and communities.
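As a purely illustrative aside on the "similar or opposite emotions" idea mentioned above, the toy snippet below uses the fixed opposition pairs of Plutchik's eight basic emotions (joy-sadness, trust-disgust, fear-anger, surprise-anticipation) to pick stories with the same or the opposite affective stance. It is written in Python, it is not the actual DEGARI 2.0 implementation, and the story data are invented placeholders.

```python
# Simplified illustration of diversity-seeking suggestions on Plutchik's wheel.
# Opposition pairs follow Plutchik's eight basic emotions; the story data are
# hypothetical placeholders, not records from the GAMGame.
OPPOSITE = {
    "joy": "sadness", "sadness": "joy",
    "trust": "disgust", "disgust": "trust",
    "fear": "anger", "anger": "fear",
    "surprise": "anticipation", "anticipation": "surprise",
}

def suggest(stories, liked_emotion, mode="opposite"):
    """Return stories tagged with the same or the opposite basic emotion."""
    target = OPPOSITE[liked_emotion] if mode == "opposite" else liked_emotion
    return [s for s, emotion in stories if emotion == target]

stories = [("story-1", "joy"), ("story-2", "sadness"), ("story-3", "fear")]
print(suggest(stories, "joy"))          # -> ['story-2']
print(suggest(stories, "joy", "same"))  # -> ['story-1']
```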
In the SPICE project, Citizen Curation methods - such as creating personal collections of artworks and attaching personal responses to them - are supported by a socio-technical infrastructure which allows visitors to create and share their own interpretations of artworks, and to reflect on other visitors' interpretations. The paper is organized as follows. After surveying the main issues concerning museum accessibility and deafness in Section 2, we describe the iterative design and evaluation of the environment for the creation of stories (i.e. the GAMGame) in Section 3. Section 4 describes the logic and architecture of the sensemaking system representing the inference engine of the GAMGame environment, whose evaluation is presented and discussed in Section 5. Conclusions and future work end the paper.

## 2 Background and Motivations

In this section, we review the relevant notions about museum accessibility and deafness, and we illustrate the paradigm of Citizen Curation that constitutes the overarching conceptual framework of the case study.

### Accessibility in museums

The concept of accessibility comes in varying degrees and forms, and for some time now it has been associated with the concept of inclusiveness, because, though personal, the visit must be experienced without barriers and differences, allowing everyone to access the available content and information. According to the World Health Organization definition, disability is not only a health problem, but a complex phenomenon, reflecting the interaction between features of a person's body and features of the society in which he or she lives8. Overcoming the difficulties faced by people with disabilities requires interventions to remove environmental and social barriers. According to Addis, art, in all its manifestations, is a language and therefore a form of communication, and as such it should be affordable and accessible to all (Addis, 2005). Technologies are fundamental tools for involving visitors in museums and overcoming any kind of barrier: physical, sensory, cognitive, economic, cultural, and social (Marras et al., 2016). To do so, museums are implementing solutions and tools to be increasingly inclusive, thanks to the lower costs of technology. However, it is important to highlight that technologies can also be barriers if not implemented in an inclusive way (Marras, 2020): to realize inclusive technologies, some institutions have elaborated their own accessibility guidelines (such as the Smithsonian Institution9); most guidelines refer to the "design for all" principles10, as exemplified by (Timpson, 2015).

Footnote 8: [https://www.who.int/health-topics/disability](https://www.who.int/health-topics/disability)

Footnote 9: [https://access.si.edu/](https://access.si.edu/)

Footnote 10: [https://universaldesign.ie/What-is-Universal-Design/The-7-Principles/](https://universaldesign.ie/What-is-Universal-Design/The-7-Principles/)

It is understood that any service offered by the museum, such as apps and audio guides, must be inclusive and integrated with regard to d/Deaf audiences. The key aspect is that d/Deaf people have a strong visual culture "resulting from the importance of their visual sense to interact with the world" (Martins, 2016). The main services offered by museums are tours in Sign Language and multimedia guides with Sign Language translations of the museum collection for the d/Deaf audience.
Another aspect on which the methods of engagement are developing is that of social communication, following recent research (Alnfiai & Sampali, 2017) that has analyzed 55 social and communication apps. The authors found that only 6 had been designed specifically for d/Deaf people. The main function of these apps is to allow users to send video messages and make live video calls; they also provide a variety of technical methods enabling users to communicate, including text, emojis, speech-to-text, video, real-time communication, privacy, sign language and large text size. In this case too, it is important to underline that, regardless of the technology used, the appearance of the contents and their comprehensibility is a fundamental aspect. The proliferation of devices such as tablets and smartphones in many people's daily lives pushes towards Bring Your Own Device (BYOD) approaches, with access tools increasingly available on a variety of devices: everyone can make adjustments to suit their own needs, and users have increasing expectations that technology should work for them (Jankowska et al., 2017).

### Deafness in Digital Heritage

Hearing loss is a physiological condition that is generally estimated to be present in ten percent of the general population. However, only proper deafness has a direct impact on language acquisition and, in western countries, affects about 1 in 1,000 newborns. The communication modality, skills and attitudes of deaf and hard of hearing people are very diverse, due to several individual and context-based factors. Among these are the age of diagnosis, the level of hearing loss, the non-verbal IQ, the use of cochlear implants and/or hearing aids, the age of implant, the presence of additional disabilities, a migrant background, the quality and quantity of the linguistic input received, the socio-economic status, and whether they come from a Deaf family or a hearing family. Diversity in the communication of the deaf arises from several different factors; however, it is possible to outline clearly distinct profiles. Some are Sign Language users (Spencer & Marschark, 2010), and mostly they come from Deaf families, where Sign Language is the only and true mother tongue, often for generations. Sign languages arise almost anywhere there are deaf people (Cardona & Volterra, 2007), and their emergence is generally spontaneous when there is a critical mass of deaf people. By definition, deaf people cannot hear, but have an intact capacity for language, which finds its way of expression in a visual-gestural language, instead of an auditory-phonetic one. They prefer to be called Deaf, with a capital D, to emphasize that they are a cultural and linguistic minority (Lane, Pillard, & Hedberg, 2011), rather than a group of people with a specific hearing impairment. They are proud to use Sign Language and ask to recognize it as a minority language, struggling for the right to use it and to access any linguistic content through it. On the opposite side there are cochlear implant users, who reject the idea of being part of a linguistic minority and ask for additional, quality speech therapy and advanced technological devices to cope with the disability. They can usually speak oral languages (at different levels of proficiency) and understand through a mix of auditory and lipreading skills.
The huge diversity of factors affecting deaf and hard of hearing people leads to several mixed profiles, where sign language coexists with oral speech and cochlear implants, and the deaf person can use the two languages at different levels of proficiency and modality, including variations in the use of fingerspelling, lipreading, non-manual components and mouthing (Braem & Brentari, 2001). All those factors affect in different ways not only the linguistic outcome, but, crucially, also the level of understanding of written texts (Brueggemann, 2004). As a result, oral and written languages represent a barrier in almost every field of the life of deaf persons (Goldin-Meadow & Mayberry, 2001), including museums' cultural offer and any learning environment (Spencer & Marschark, 2010). International research agrees that the outcome of this situation of atypical language acquisition is that some grammatical structures of historical-oral languages "create constant difficulties in the acquisition process by deaf people" (Guasti, 2017), ranging from phonology and lexicon (which can be poorer and more rigid), to morphology and syntax. Accessibility to written and oral forms of verbal languages is therefore one of the biggest challenges museums have to face in their pathway to democratize culture. This includes using adapted forms of written language, taking care to use high-frequency words and plain syntax; making extensive use of images and icons to support a quicker and better understanding of text; and introducing alternative languages and codes, such as Sign Language or, for different kinds of disabilities, Braille and Augmentative and Alternative Communication.

### Citizen Curation

The paradigm of Citizen Curation, developed within the SPICE project, provides a mode of participation in which citizens apply "curatorial methods to archival materials available in memory institutions in order to develop their own interpretations, share their own perspective and appreciate the perspectives of others" (Bruni et al., 2020). In the SPICE project, this paradigm is specifically aimed at engaging minority groups that tend to be underrepresented in cultural activities. Citizen Curation can be described as the combination of two processes, namely, Interpretation and Reflection. Although interpretation conceptually precedes reflection, the two processes are not compartmentalized, but, rather, intertwined: reflection builds upon interpretation, but affects subsequent interpretation, forming the continuous process described by Bruni et al. (2020) as the Interpretation-Reflection Loop (IRL). The goal of the IRL is twofold: on the one side, to stimulate reflection by exposing citizens to other citizens' interpretations, letting them appraise diversity in responding to artworks; on the other side, to expand the interpretation process as a consequence of the exposure to diversity. In the GAMGame, curation takes the form of storytelling, intended as a cognitive process oriented to sharing interpretations in a compact, easily processed, universal form (Bruner, 1991; Bruschi, 2018; Lombardo & Damiano, 2012). Inspired by the format of social media stories, well known to the target group of the case study (i.e., teenagers and young adults), citizens are stimulated to interact with the collection of the GAM by creating personal stories from the artworks in the collection.
Storytelling, here, is not intended simply as the act of selecting and ordering a set of artworks, but implies a deeper, emotional connection with art, in line with the emotional nature of the aesthetic experience: in order to improve the engagement of the participants with the artworks, in fact, they are prompted to express personal reflections and emotions in response to the artworks they include in their stories. Personal reflections are constrained to a set of themes that have been acknowledged by the curators as specifically relevant to the experience of art and the expression of subjectivity, and widely discussed in the literature (McAdams, 2018): memories, thematic connections, and emotions. In Citizen Curation, emotions are a relevant part of the sensemaking process. Acknowledged as a primary component of the artistic experience for centuries by aesthetics, emotions are an intrinsic component of the way people experience artistic expression (Leder, Gerger, Brieber, & Schwarz, 2014; Schindler et al., 2017; Van Dongen, Van Strien, & Dijkstra, 2016). Emotions also provide a universal language through which people convey their experience of art, well beyond words. Despite the differences between languages, and the influence of cultural factors, emotions have a universal origin: rooted in evolution, they provide the basis for intercultural communication (Ekman & Friesen, 1971). In this sense, emotions can provide a suitable means for connecting people belonging to different groups in terms of culture, age, education, and sensory characteristics. Being exposed to the emotions that others feel in response to the artworks, similar or dissimilar, puts the citizens in a situation of perspective taking, i.e., seeing the world from other perspectives (Djikic, Oatley, & Moldoveanu, 2013; Pedersen, Wecker, Kuflik, Mulholland, & Diaz-Agudo, 2021). In Citizen Curation, this approach is intended to promote empathy, cohesion and inclusion across social groups, in contrast with current technologies (e.g., social media or standard recommender systems) that lead people toward content that fits their own viewpoint, promoting fragmentation and fostering confirmation biases, instead of cohesion, inclusive reflection, and critical thinking.

## 3 Towards the GAMGame

The design of the GAMGame web app is the result of the cooperation between the University of Turin (AI and HCI experts, museologists) and the GAM museum (museum curators and educators), with the assistance of the Turin Institute for the Deaf (special education experts). Following the paradigms of user-centered design (Norman & Draper, 1986) and of co-design (Sanders & Stappers, 2008), the design process has involved the target group, namely deaf people, in all steps, from the collection of the requirements and the development of the prototypes, to the evaluation and redesign phases. In this section, we describe the design process, which was almost entirely carried out during the COVID-19 pandemic. Although end users and experts were involved throughout the process, some tests could only be conducted online due to the lockdowns that occurred in 2020 and 2021.
The timeline was the following: the collection of the requirements and the preparation of the first prototype of the GAMGame occurred between May and October 2020; the prototype was tested online in November 2020 and a new prototype was created from December 2020 to April 2021; the user study on the new prototype was carried out in presence in July 2021; the integration of DEGARI for recommending artworks and stories (Section 4), carried out from September 2021 to June 2022, was tested in presence from February 2022 to October 2022 (Section 5). The outcome of the design is the current version of the GAMGame, which supports the _interpretation_ phase of the Interpretation-Reflection Loop envisaged by the paradigm of Citizen Curation described in Section 2.3 by allowing the users to create personal stories from the artworks in the GAM collection. To close the loop, we subsequently integrated into the GAMGame the story recommendation function (Section 4), which, together with the unconstrained navigation of other users' stories, forms the backbone of the _reflection_ component of the GAMGame.

### Requirements gathering

The basic tenet for the design of the GAMGame consisted in the choice of storytelling (and visual storytelling in particular) as the method of choice for supporting Citizen Curation in the case study. Storytelling, in fact, not only relies on a universal format - the narrative format - whose cultural relevance has been acknowledged by psychologists, designers, and media experts (Bruner, 1991; Gershon & Page, 2001; Lombardo & Damiano, 2012), but is also pervasive in social media (in the form of social media "stories"), and therefore both familiar and appealing to youngsters. Based on this assumption, the _interpretation_ activity in the GAMGame was structured as the selection of a sequence of artworks from the museum collection to form a personal narrative. More precise requirements, then, were developed during the co-design of the prototype, carried out in conjunction with the museum curators and the special education experts. During the co-design phase, held online through focus groups (via Google Meet) and achieved through shared project documents (design specifications, interface sketches, literature surveys), two types of requirements were put forth: on the one side, museum curators indicated two specific activities, namely _commenting_ and _tagging_, as suitable for stimulating the interpretation of the single artworks in the story by younger citizens; on the other side, special education experts provided a list of interface and interaction requirements tailored to the needs of the Deaf. The requirements expressed by the curators led to the creation of a console for annotating the artworks (annotation console) included in the story, with tools for tagging, commenting and responding emotionally to each artwork. Commenting, in particular, was further articulated into three main dimensions (feelings, memories and inspiration), in line with the process of narrative construction of identity described by McAdams (2018), in order to stimulate the users to provide personal perspectives in the interpretation of artworks. The requirements posed by the special education experts to meet the needs of d/Deaf users concerned both the interface design and the interaction design, and more in general the overall d/Deaf user experience. In general, it was clearly stated that the app should rely on text as little as possible and be simple and immediate to use.
The latter requirement, motivated by the fact that deafness can be accompanied by other physical, perceptive and cognitive conditions, led to the choice to design story creation as a rigidly pipelined activity, where backtracking is inhibited, with a minimal number of steps required to obtain a story. Specific requirements, then, concerned text, interface and interaction.

* Text (in line with the literature surveyed in Section 2.2):
  * The use of text should be limited to the bare minimum, with reference to the atypical acquisition of language described above;
  * Visual codes should be preferred to text whenever possible;
  * Text, when irreplaceable, should be short and simple (from both lexical and syntactical perspectives).
* Interface:
  * The layout should contain the minimum number of elements to help focalization;
  * The interface should be characterized by a high contrast between salient elements and background;
  * Familiar elements and conventions from social and communication media (icons, widgets) should be reused.
* Interaction:
  * The interaction flow should be kept simple and predictable;
  * A low number of steps should be necessary to complete a task;
  * Interaction should rely on direct manipulation whenever possible, avoiding complex sequences of actions for adding artworks and commenting on them.

Although most of these requirements were covered, to some extent, by the Web Content Accessibility Guidelines (Kirkpatrick, O Connor, Campbell, & Cooper, 2018), their role in the design process was not secondary, as they oriented the design towards alternatives to text and a general predominance of the visual language. An example of this approach is given by the possibility, for the users, to add emojis to the artworks as a way to express one's own emotional response to the artwork in a visual way, without using words. This function is achieved by simply clicking on the emoji one wishes to add to the painting and dragging it from the central position to the desired position. In social media, emojis are similar to a widely used jargon, especially for new generations (Wolny, 2016; Barbieri et al., 2018; Ronzano, Barbieri, Pamungkas, Patti, & Chiusaroli, 2018; Shoeb, Raji, & de Melo, 2019), accessible to categories of users who may have difficulties in producing written text on technological devices (such as older people, people with disabilities or children, who generally do not produce long and content-rich texts). As reported in (Mack et al., 2020), recent surveys highlight the inclination of Deaf and Hard of Hearing people towards visual communication forms in social media, including emojis. The latter, in particular, have been described as closer to the type of facial expressiveness which characterizes Sign Languages. Emojis, also known as smileys or emoticons, are considered the best icons to express affective reactions, since their metaphor is based directly on the expression of human emotions and they form a good proxy for the intended sentiment (Cena, Gena, Mensa, & Vernero, 2022; Cena et al., 2017). Concerning the emojis included in the artwork annotation panel (love, curiosity, delight, joy, fear, sadness and disgust), their selection was driven by the museum curators based on their experience with the social media of the institution, and with the preferences of the audience of teenagers.
Finally, a general request from both curators and experts was to create a safe and inclusive space that should not be perceived as tailored to the exclusive needs (and so, to the use) of a specific group, but, rather, designed for easy access by all parties (Stephanidis, 2009). This is in line with the notion of inclusion underlying the SPICE project, whose aim is to improve the multiplicity of points of view by breaking down the boundaries between communities.

### Formative evaluation of the prototype

On November 25-26, 2020, while the lockdown was in force for the second time, a preliminary prototype of the story creation function (see Figure 1) was tested online with middle and high-school students during a public initiative of the University of Turin (UNITO) for engaging schools with research (as part of the program of the European Researchers' Night). Given the impossibility of reaching the target audience (deaf teenagers and young adults) due to the pandemic, we resorted to a general audience of the same age to test the prototype, in a universal design perspective. The goal of this test was to confirm the attractiveness of the activity for teenagers, young adults and teachers.

**Participants**. A total of 7 classes took part (1 middle school class, aged 12-14, and 6 secondary school classes, aged 15-19). Since the test was conducted online during the COVID-19 lockdown, it was not possible to identify the participants apart from their school class, due to the characteristics of the meeting platform and the anonymity requirement set by the event organizers. The number of participants (\(n=154\)) was extracted from the anonymous logs. Participants were distributed by age as follows: 22 were in the range 11-13, 132 were in the range 14-19; 15 were teachers.

**Procedure and materials**. Participants took part in the online activity with their teachers in 3 different sessions, in anonymous form, producing 113 stories from a catalogue of 43 artworks selected by the museum curators. Sessions had a fixed duration of 55 minutes, since they were part of the school activity program of the classes for the day (both teachers and students were at home and attended school online). Sessions were structured as follows: after connecting to the Meet platform, users received a brief presentation of the SPICE project by the museum staff (composed of two museum educators) and the university researchers (composed of a museum expert and two scholars in HCI and AI); then, the museum educators described the activity, explaining its purpose of creating stories about the artworks. This introduction was delivered with the help of slides to explain the key ideas of the project; to illustrate the salient phases of the activity (such as artwork selection and annotation), screenshots of the prototype were included, paying attention not to give directions on how to use the interface to accomplish the task. After the introduction, users were given the URL of the prototype, which didn't require any installation and could be used online, and were asked to use it individually to create a story of their own. The estimated time for the creation of a story was 15 minutes, but more or less time was allowed according to the needs expressed by the students through the chat of the platform. Teachers supervised the whole session, encouraging the students to participate actively in the activity. In order to complete a story, users had to select a minimum of 2 and a maximum of 3 artworks.
To advance in story creation, at least one among the available annotation types (tags, comments and emojis) had to be added to each artwork; however, users were free to add as many annotations as they wished.

Figure 1: A screenshot of the preliminary prototype of the GAMGame web app, representing the first step enabled by the app, namely: "Select up to three artworks to create your story".

34 participants were not able to use the web app on their devices, so they conducted the activity by using Google Forms, which had been created in advance in anticipation of technical difficulties.11

Footnote 11: 36 stories were created using Google Forms, but they are not included in the analysis due to the differences with the prototype.

After the activity, we conducted a focus group with the students and the teachers to investigate the users' reception of the activity and to gain insight into its potential for educational purposes. The focus group was driven by the HCI expert and the AI expert. During the focus group, participants could contribute to the ongoing discussion by writing their contributions in the video conference chat, or activate their microphone to speak. Since sessions could not be recorded for privacy reasons, one of the scholars took note of the questions posed and of the users' contributions. The discussion with the users included the following topics: enjoyability of the proposed activity, to investigate its potential to engage the users; difficulties encountered in story creation, to eliminate possible obstacles at the interaction and interface level; improvement points, to gather suggestions from the users; sharing of stories, to determine the users' willingness to share their stories, a crucial step in the Citizen Curation paradigm; contexts of use, to explore the onsite and offline use of the app.

**Results**. Due to the online mode and the anonymity requirement, the quantitative analysis of the results is limited to the collected stories. As mentioned above, 113 stories were created by the users. Stories contained on average \(2.42\) artworks (Standard Deviation \(=1.02\)); \(56\) stories (\(49.55\%\)) featured \(3\) artworks, \(25\) stories (\(22.12\%\)) featured \(2\) artworks; \(26\) stories (\(23\%\)) contained \(1\) artwork, which we attributed to technological issues (client-server communication errors recorded in the prototype logs); finally, \(6\) stories (\(5.3\%\)) featured more than \(3\) items. Some artworks were selected more frequently, although they were presented in random order: the painting Estate. L'amaca (The hammock)12 was selected \(26\) times; Via a Parigi (Street in Paris)13 was selected \(21\) times; Le tre finestre (The three windows)14 was selected \(15\) times. Concerning the annotation of artworks, all annotation types were employed by the users (a single annotation of any type was required for each artwork), but comments were preferred over emojis, and the latter over tags: stories contained in total \(136\) comments (\(49.82\%\)), \(73\) emojis (\(26.74\%\)) and \(64\) tags (\(23.44\%\)). Of the comments, \(61\) reported feelings (\(44.85\%\)), \(51\) memories (\(37.5\%\)), and \(24\) inspirations (\(17.65\%\)).
Footnote 12: [https://www.gantorino.it/it/archivio-catalog/estate-lamaca/](https://www.gantorino.it/it/archivio-catalog/estate-lamaca/)

Footnote 13: [https://www.gantorino.it/it/archivio-catalog/via-a-parigi/](https://www.gantorino.it/it/archivio-catalog/via-a-parigi/)

Footnote 14: [https://www.gantorino.it/it/archivio-catalog/le-tre-finestre-la-pianura-della-torre/](https://www.gantorino.it/it/archivio-catalog/le-tre-finestre-la-pianura-della-torre/)

Qualitative data emerged from the feedback provided by the users in the focus group. Here, we report the comments we received on the questions listed above, as they were annotated by the scholars during the focus group. Concerning the enjoyability of the experience, there was no negative feedback (here we don't consider the users who were not able to use the online app due to technological problems). Students reported a very positive appreciation of the experience and some expressed the wish to repeat it. In particular, they liked the creative aspect of the activity (e.g., "I made a supercute story", \(9\) users) and the possibility of expressing themselves (\(16\) users). This is in line with the notion of "narrative identity" (McAdams, 2018), namely the creation of an emotional narrative of the self, which underpins the narrative interpretation method in Citizen Curation. Concerning the difficulties encountered in story creation, \(3\) users declared they were not able to use the drag and drop easily on their devices or that the web app didn't work on macOS ("It doesn't work on my Mac"). Concerning the improvement points, 5 users asked for a wider selection of artworks or different artworks ("Perhaps changing the artworks to make more stories"), the possibility of adding more than three artworks to the story ("Being able to add more than 3 artworks per story", 2 users), and the possibility of applying filters to the artworks and using different colors and fonts when adding tags (5 users). Incidentally, the request for typical complements of social media stories (music and photo effects) suggests that the format of social media stories has been acknowledged by these users. When asked about the sharing of stories, students declared their interest in seeing other students' stories, and were ready to share their stories as a condition to see the ones created by the others ("I would like to share it", 21 users), though many of them expressed the wish to do it anonymously ("In that case, yes, but I don't want the others to see it with my name", 5 users). Concerning the contexts of use, the feedback was limited to positive answers to the explicit question about the willingness to use the app during the museum visit. 2 participants tried both the online prototype and the Google Forms and during the discussion expressed a preference for the experience with the web app versus the forms ("I find the form less nice"). Finally, teachers gave a very positive opinion about the activity and reported that students were able to carry out the activity with minimal assistance (other communication channels, such as WhatsApp class chats, were active during the sessions), with the exception of middle school students, some of whom had difficulties in understanding the goal of the activity and were at odds with selecting and engaging with the artworks.
In particular, a source of confusion was given by the fact that artwork selection and annotation were separated: first, users selected the artworks to be included in the story, then they were asked to annotate each artwork separately.

### Re-design of the prototype

Following a design-evaluate-re-design approach, starting from the preliminary evaluation described above, a second prototype was created; while the first one was intended for use on a tablet or desktop computer, suitable for use in parallel with online meetings during lockdowns, the second prototype, developed as a React15 web app, is responsive and can be used either on a desktop computer or a smartphone. Apart from adapting the interface to the use on mobile phones through minor changes (e.g., replacing dragging with clicking for selecting the artworks), the second prototype contained two main changes.

Footnote 15: [https://reactjs.org/](https://reactjs.org/)

The first change concerned the selection and commenting order. Since teachers had reported that students had difficulties in understanding that they first had to select all the artworks they wished to include in the story and then annotate the selected artworks one by one, the creation of stories was broken down into the repeated selection-and-annotation of each artwork, followed by the assignment of a title to the story before its submission. By selecting the "Create Story" function from the main menu (Figure 2, left), the user can browse the catalog and select the artworks to include by clicking on them (Figure 2, center). When each artwork is selected, the user is taken to the annotation function (Figure 2, right). Since the formative evaluation showed that the users employed all the annotation types (template-based text comments, emojis and tags) included in the first prototype, they were all maintained in the second prototype, but the layout of the annotation interface was redesigned for mobile phones. The largest part of the screen is occupied by the image of the artwork; the top part contains the annotation console, with commands for adding emojis and tags to the artwork. Once added, emojis and tags can be dragged to the position desired by the user, and discarded if needed. Multiple emojis and tags can be added. By doing so, the artwork becomes an intrinsic part of the creative activity, a whiteboard on which citizens can express their feelings and ideas about the artwork. The bottom part of the screen is divided into three tabs (with the selected tab in a darker hue), which correspond to the questions posed by the museum curators to trigger and drive the interpretative process at a more conceptual level. These questions, suggested by the museum curators and educators, correspond to the personal memories evoked by the artwork, the thematic cues triggered by it, and the feelings it raises. However, in order to comply with the directions provided by the experts for access by deaf users, these questions were i) put in affirmative form, ii) expressed in the form of templates to be completed, and iii) accompanied by evocative icons. The templates, respectively "it reminds me of...", "it makes me think of...", "it makes me feel...", are intended to act as prompts for user input and can be simply filled by inserting a single word. Since there are no conventional icons for these three types of input, we launched a survey in cooperation with the Turin Institute for the Deaf to select the best icons.
To do so, for each comment type we proposed two alternatives: one consisted of the most popular icon found by searching the Web with the corresponding keyword (_think_, _feel_, _remember_); the other was proposed by the museum staff. The current icons are the ones that emerged from the survey. More importantly, the function for browsing other users' stories was added. In Citizen Curation, seeing other users' stories corresponds to the _Reflection_ phase of the Interpretation-Reflection Loop described above, aimed at exposing museum visitors to the perspectives of the others. Since no negative feedback was provided by the users in the formative evaluation, we decided to include it in the GAMGame, with the condition that anonymity was preserved, as requested by some users.

Figure 2: GAMGame: Main menu (left), artwork selection (center), artwork annotation (right).

Besides story creation, which corresponds to the _interpretation_ phase in the IRL loop described above, the user can explore the stories created by the other users, see her own stories and delete them if she wishes. The exploration of stories in the GAMGame is mediated by the museum catalogue: to see the stories stored in the system, the user browses the catalogue and selects an artwork of interest. Once the artwork has been chosen, the interface shows the links to the stories which contain the artwork. Stories are displayed in preview mode (Figure 4, left); each story can be opened and the artworks in it can be seen, accompanied by the personal annotations added by the user who created it, namely comments, emojis, and tags (Figure 4, center). Stories can be liked; the stories created by the user are grouped in the myStories section. Although the GAMGame web app has been designed with the explicit purpose of realizing the paradigm of Citizen Curation, it may fall short of enhancing diversity in interpretations and, above all, of advancing the reflection process beyond the boundaries established by the users' tendency to search for confirmation of their own choices. In order to improve diversity in both interpretation and reflection, we added emotion-based recommendations to story creation and exploration. To improve diversity, and to leverage the role of emotions in the interpretative process, we relied on the DEGARI 2.0 system (Lieto et al., 2022) to obtain diversity-oriented, affective recommendations from the emotions associated with the artworks by the users and the curators through annotations (users) and artwork descriptions (curators). The generation of emotionally diverse recommendations by DEGARI relies on Plutchik's model of emotions (Plutchik, 2001), which combines a categorical approach to emotions (with distinct emotion types such as joy, awe or fear) with a dimensional approach that sets emotions into similarity and opposition relations, useful to explore diversity.

Figure 3: The recommendation of artworks in the GAMGame. After selecting the last artwork in the story (left), the user is presented with artworks with similar and opposite emotions (here, opposite, right). Recommended artworks can be added to the story and annotated.

In the GAMGame, after creating a story, the user receives a recommendation based on the emotional features associated with the artworks in the story, as illustrated in Figure 3.
The user can ignore the recommended artworks of both types (with similar and opposite emotions), or can decide to include them in the story (currently, the number of selectable artworks has been limited to 1 for each recommendation type). This approach, described by Lieto et al. (2022) for artwork recommendation, has been applied in the present work to story recommendation, with the goal of orienting the reflection phase towards diversity as well. In the GAMGame, when the user browses the stories in which a given artwork has been included (Figure 4, left), they can select a story and see the single artworks in the story, with their annotations (Figure 4, center). For each story, the system shows the links to other stories with similar or opposite emotions (Figure 4, right). As will be described in Section 5, in our experiments, stories with similar and opposite emotions received a better reception than the baseline of stories with the same emotions (which were consequently omitted from the app).

Figure 4: The recommendation of stories with similar/opposite emotions. While browsing other users' stories in preview mode (left), the user can see the annotations of the single artworks in the story (center); for each story, the user can see the stories with similar or opposite emotions (right).

### Evaluating the usability of the web app

Within the Citizen Curation paradigm, which leverages the citizens' interpretations of artworks as a way to develop perspective taking, the effectiveness of the socio-technical infrastructure through which the interpretative process is achieved is crucial. The annotation of the artworks with personal comments and emotions, in particular, is the core of the interpretation process, and should be accessible to all users in order to ensure the generation of emotionally rich, diverse interpretations. For this reason, in July 2021 we conducted a user study (\(n=12\)) to assess the effectiveness, the user's satisfaction and the perceived ease of use of the story creation function by d/Deaf users, and of the annotation tools in particular. Given the issues that emerged from the collection of requirements about the use of text by the d/Deaf, we were interested in evaluating the annotation function, which contained textual elements, and we expected the d/Deaf to prefer simpler annotations (tags and emojis) over more complex annotations (text templates). At the interface level, we wanted to ascertain that the users would understand the icons that accompany (and in some cases replace) the textual instructions in story creation, some of which had been borrowed from social media (e.g., the like icon), while some others had been co-designed with the museum and the Turin Institute for the Deaf as described above. Finally, we wanted to investigate the users' disposition to share their stories, since this is a main pillar of the Citizen Curation paradigm. The experiments followed the ethical guidelines released by the SPICE project consortium as part of the Work Package Ethics, ratified by an independent ethics advisor. All the user data were anonymously collected and stored according to the project data management plan.16

Footnote 16: [https://spice-h2020.eu/document/deliverable/D1.2.pdf](https://spice-h2020.eu/document/deliverable/D1.2.pdf)

**Participants**. A convenience sample of 12 d/Deaf users (6 men and 6 women), selected by the Turin Institute for the Deaf from their staff and students, took part in the experiments.
Even though random sampling is the best way of obtaining a representative sample, such strategies require a great deal of time and resources. Therefore, much research in human-computer interaction, in particular for minority groups, is based on samples obtained through non-random selection (Straits, 2005; Young & Temple, 2014). Following the project ethical guidelines, the selection of participants was delegated to the Turin Institute for the Deaf, which had the capability to both disseminate the call among its contacts and guarantee a fair selection of the candidates. The sample intentionally reflected the composition of the community that revolves around the Turin Institute for the Deaf, which includes d/Deaf and non-deaf staff members such as professionals (e.g., Italian Sign Language interpreters, special education experts, media producers), therapists (e.g., speech therapists), teachers (e.g., Sign Language teachers), and caregivers, as well as d/Deaf beneficiaries and trainees. As a consequence, the participants belonged to two main groups: Group A (6 users) consisted of d/Deaf users with other conditions in comorbidity, selected among the trainees. As described in Section 2.2, in fact, deafness and hearing loss are accompanied by other conditions (such as learning disorders, dyspraxia, etc.) to a higher degree than in the non-deaf population. For this reason, this group is relevant for evaluating the GAMGame in a Design For All perspective. Group B (6 users) consisted of d/Deaf educators and teachers; this group is relevant for the GAMGame since trainers and educators are expected to plan and drive the use of the web app in educational contexts, within and outside the museum. All participants in group A were aged 19 to 35 and had a secondary education level. In Group B, 4 participants were in the 19-35 age range and 2 in the 36-60 age range; 5 participants out of 6 had a tertiary education level, 1 participant had a secondary education level. Group A included 4 women and 2 men; group B included 2 women and 4 men. Both groups followed the same procedure, with the same apparatus and material, but they will be kept distinct in the analysis as a way to improve our understanding of the different parts of the community they represent.

**Procedure**. The user study consisted in the task of creating a story by using the GAMGame app. In order to better control the execution of the activity, story creation was broken down into sub-tasks: artwork selection, artwork annotation and story submission. After receiving a brief introduction to the GAMGame objective and features (with no preview of the tool), users were requested to select and annotate 3 artworks, so the first two steps (artwork selection and annotation) were repeated three times in a row.

**Apparatus**. To avoid disparities in the execution condition determined by the differences between smartphone models, the task was executed by all users on the same desktop computer. Users were assisted by a LIS (Lingua Italiana dei Segni, Italian Sign Language) interpreter, who translated the text to Sign Language if needed, and translated users' comments and questions from LIS to Italian. The screen was recorded during task execution; however, since the experiment was conducted anonymously, the faces of the users were not included in the shot. During the execution of the task, users were observed by an experimenter taking notes on task completion and details of the interaction with the system.

**Material**.
In order to collect information about the socio-demographic characteristics of the users and their use of media, the users, after receiving information about the project and the experiment's objectives, filled out a brief questionnaire on age range, gender, education level, computer literacy and use of social media (Facebook, Instagram, TikTok). After completing the task, users were presented with an adaptation of the System Usability Scale (SUS) (Brooke et al., 1996; Lewis, 2018): items had been linguistically simplified by putting all questions in affirmative form and replacing difficult words with simpler words of everyday use. The use of Likert scales to collect the answers was replaced by emojis, ranging from very sad and sad faces to happy and very happy faces to express agreement, with a neutral face as the intermediate item. Two extra items were added to the SUS questionnaire, with the goal of collecting feedback on the two aspects mentioned above, namely icons and story sharing: one question was about the familiarity of the user with the icons in the interface ("I found the icons familiar"), the other was about the inclination to share one's stories with the other users ("I'd like to share my stories"). These two items, expressed in affirmative form to make them simpler, were added at the end of the SUS questionnaire for convenience, but they were not included in the calculation of the results.

Figure 5: Use of emojis to collect the user's feedback in the adapted SUS questionnaire (in Italian: "Express your opinion using the following scale: not at all, no, not sure, yes, absolutely"; "I'd like to use this system again"). For the sake of space, only the first two questions are shown. During the experiments, instructions and questions were translated by the LIS interpreter.

**Results and discussion**. Concerning the completion of the tasks, we observed a difference between group A (d/Deaf beneficiaries and trainees) and group B (d/Deaf teachers and educators) (Table 1). Users in group A were able to complete all tasks without help; on the contrary, some users in group B needed help to complete the tasks, but they were able to complete the task autonomously after receiving initial help. In detail, concerning the selection of the artworks, 4 users out of 6 in group B needed help to select the first artwork, but this number dropped to 1 for the second artwork; all users were able to complete the selection of the third artwork autonomously. Concerning the annotation of the artworks, the protocol prescribed that the users should add at least one annotation type for each artwork, but they were free to choose which one to use. More than one annotation of the same type could be added to the same artwork (e.g., multiple emojis, comments of different types, or tags). As reported in Table 1, comments were added 18 times, tags were added 9 times, and emojis were added 23 times. Proportions slightly change if we consider group A and group B separately: group A used comments 7 times (av. number of comments per task: 2.33, SD=0.57), tags 1 time, and emojis 12 times (av. number of emojis per task: 4, SD=2.64), making it overall less prolific in annotations (20 annotations) than group B (30 annotations). Group B used comments 11 times (av. number of comments per task: 3.66, SD=0.57), emojis 11 times (av. number of emojis per task: 3.66, SD=1.15), and tags 8 times (av. number of tags per task: 2.66, SD=0.57), with 30 annotations in total.
Concerning the differences between the two groups (see Table 1), a paired t-test was run on their annotations, showing that the difference in the number of tags (p=0.007) is significant, while the differences in the number of comments (p=0.47) and emojis (p=0.85) are not. Concerning the completion of the tasks, 5 users out of 6 in group A requested help to use the comment templates and 5 out of 6 requested help for emojis; no user in group B asked for help. To sum up, despite the clear preference for emojis, as expected, the use of text annotation by the d/Deaf contradicted the expectation that tags would be preferred over comments. On the contrary, text comments were preferred to tags by users in Group A, which represented the most critical group, while no clear preference emerged for Group B. In our opinion, this preference is due to the fact that the comment templates, curated by museum educators, were more in line with a personal, introspective approach to art, while tags, being unconstrained, were less effective in driving the introspective process; in other words, they offered less guidance. This was confirmed by the observations during the experiment: users - and users in Group A in particular - put much effort into making sure that they clearly understood the meaning of the three templates, and were particularly attracted by the template evoking personal memories. Besides confirming the directions provided by the curators, this finding is relevant because it opens the way to better access by deaf users to the production of text, which in turn is the input to the tools for extracting sentiment and emotions from users' interpretations in the reflection phase. According to the data collected through the questionnaire on media use, all users had at least one social media account: 10 were Facebook users, 8 were Instagram users, and only 1 declared using TikTok. Only the users in Group B owned a personal computer; all owned a smartphone. Concerning the feedback collected with the SUS questionnaire, the results show a difference between the two groups. The comparison between Group A and Group B reveals that Group A (average score 68.33, slightly above average, standard deviation 14.2) evaluated the system more positively than Group B (64.33, below average, standard deviation 12.22), although users in Group A encountered more obstacles in using the system. However, a t-test run on the scores of the two groups showed that the difference is not significant (p=0.23). We hypothesize that users in Group A, having received more help, eventually underestimated the difficulties of using the system and found it easier to use. On the contrary, users in Group B had, in general, a more critical stance towards their experience and were less satisfied, being at the same time more familiar with technology and possibly endowed with more conceptual tools thanks to a higher education level. The rating of the statement on "Familiarity of icons" (\(3.17\) on a 5-point scale, standard deviation \(1.19\)) raises some concern about the adequacy of the icons and widgets in the interface and suggests that more work is needed on this aspect, while Item 12 ("Sharing stories", \(4.67\) on a 5-point scale, the highest value overall) confirms the positive feedback on sharing one's interpretations received from online users in the previous steps.
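For illustration, the following minimal sketch shows how the adapted SUS scores and the between-group comparison reported above could be computed. It is not the analysis code used in the study: the per-user responses are invented placeholders, the emoji scale is assumed to map to integer values from 1 to 5, no item reversal is applied because the adapted questionnaire phrases every item affirmatively (unlike the standard SUS), and the choice of Welch's independent-samples t-test is ours, since the exact test variant is not specified above.

```python
# Minimal sketch (not the study's analysis code): scoring the adapted SUS
# questionnaire and comparing the two groups. The responses below are invented
# placeholders; the emoji scale is assumed to map to integer values 1..5.
import numpy as np
from scipy import stats

def adapted_sus_score(responses):
    """Score one 10-item questionnaire answered on a 1..5 scale.

    Assumption: since every item of the adapted questionnaire is phrased
    affirmatively, each item contributes (response - 1) and the total is
    rescaled to the usual 0..100 range, as in the standard SUS.
    """
    r = np.asarray(responses)
    assert r.shape == (10,) and r.min() >= 1 and r.max() <= 5
    return (r - 1).sum() * 2.5

# Hypothetical per-user responses (6 users per group, 10 items each).
rng_a, rng_b = np.random.default_rng(0), np.random.default_rng(1)
group_a_items = rng_a.integers(2, 6, size=(6, 10))
group_b_items = rng_b.integers(2, 6, size=(6, 10))

scores_a = np.array([adapted_sus_score(r) for r in group_a_items])
scores_b = np.array([adapted_sus_score(r) for r in group_b_items])

print(f"Group A: mean={scores_a.mean():.2f}, sd={scores_a.std(ddof=1):.2f}")
print(f"Group B: mean={scores_b.mean():.2f}, sd={scores_b.std(ddof=1):.2f}")

# Welch's independent-samples t-test (the exact variant is our own choice).
t, p = stats.ttest_ind(scores_a, scores_b, equal_var=False)
print(f"t={t:.2f}, p={p:.3f}")
```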
### Stakeholder Focus Group

The user study was followed by a focus group that involved the project stakeholders, namely the museum (two museum educators, a curator and the museum's digital officer) and the Turin Institute for the Deaf (a special education expert, a Sign Language interpreter, and two deaf media producers and developers). According to Abras, Maloney-Krichmar, Preece, et al. (2004), in fact, the successful design of a product must take into account the wide range of stakeholders of the artifact. In particular, we were interested in the feedback from the museum staff. The paradigm of Citizen Curation, in fact, does not overshadow the role of curators and educators in museums, who are in charge of setting up the digital environment where citizens' interpretations are created and, more importantly, of designing the educational context where these interpretations will be shared and will become the object of reflection. In this sense, the experience of museum curators and educators was crucial to assess the potential of our approach and to understand the factors that would hinder the use of the GAMGame. The focus group was conducted by a University team including an HCI expert and two accessibility experts with a background in cultural heritage communication. The aim of the focus group was to discuss the context of use of the GAMGame and the integration of additional functions for improving the sharing of perspectives between the users (the reflection phase in Citizen Curation terms).

\begin{table} \begin{tabular}{l c c c} \hline \hline & **Group A** & **Group B** & **all** \\ \hline comments & 7 & 11 & 18 \\ tags & 1 & 8 & 9 \\ emojis & 12 & 11 & 23 \\ \hline \hline \end{tabular} \end{table}
Table 1: Annotation of the artworks (comments, tags and emojis) generated by the users in the two groups of the user study.

The museum staff was positive about the integration of the app within their professional practices. According to the museum staff, the app should not overlap with the standard museum guide, but at the same time should not be relegated to use at the museum (during or after the visit). According to the museum, the app should be advertised at the museum, with the goal of encouraging people to use it after the visit to share their personal interpretations of the collection. By doing so, visitors may return to the visit experience at a later time, and consolidate their engagement with the artworks, and with the museum itself. In their opinion, this type of use by the public may also improve the relationship of the visitors with the artworks that are usually considered harder to understand, like abstract paintings, because other visitors' interpretations may act as mediators with this type of art better than standard curators' texts. The Turin Institute for the Deaf staff observed that, for the app to be used outside the museum, its use should be very simple and straightforward: according to them, ideally it should be possible to use it to entertain oneself "while queuing at the supermarket checkout and thinking back to the museum visit" (as one of the curators put it), by relying on a very basic set of instructions. Concerning the integration of new functions, curators proposed to add recommendations as a way to help the users to orientate themselves in the collections and in the growing set of stories added by the other users ("how to support both museums and visitors in exploring and reflecting on the range of accumulated contributions", Bruni et al. (2020), p. 1).
From the curators' perspective, emotions were singled out as particularly appealing for the general public, hypothesizing the creation of an "emotion-driven visit" and the use of user-contributed stories to create highlights and visit paths. In this sense, the visitors' interpretations, thanks to their affective component, may provide the museum with a new tool for increasing its knowledge of how visitors relate to the collections and respond emotionally to them. Concerning the integration of media other than images, and visual elements in general (such as emojis), the Turin Institute for the Deaf staff warned that the use of video, and of Sign Language videos in particular, might contradict the identity of the GAMGame as a universal, neutral and safe space, qualifying it as a digitally separated place targeted exclusively at the d/Deaf, and consequently hampering its capability to put different communities in touch. To conclude, the elements acquired through this discussion confirmed the decision to introduce the story recommendation function described in the next Section, and gave more foundation to the decision to simplify the story creation process as much as possible, making it repetitive and pipelined as described, and to include all the annotations of the artworks in the visualization of stories as a way to strengthen the reflection process.

## 4 Story Recommendation with DEGARI 2.0

As described in the previous section, the GAMGame has proved to be a suitable environment for story creation in an inclusive perspective. Given this background, and the acceptance of affective recommendations of cultural items reported in (Bolioli, Bosca, Damiano, Lieto, & Striani, 2022; Lieto et al., 2022), we decided to extend the use of diversity-oriented, affective-based recommendations to the recommendation of stories, as a way to support perspective-taking and empathy. The core component of our affective-based sensemaking system, called DEGARI 2.0, relies on a probabilistic extension of a typicality-based Description Logic called \(\mathbf{T}^{\mbox{\tiny CL}}\) (Typicality-based Compositional Logic), introduced in (Lieto & Pozzato, 2020). This framework allows one to describe and reason upon an ontology with commonsense (i.e. _prototypical_) descriptions of concepts, as well as to dynamically generate novel prototypical concepts in a knowledge base as the result of a human-like recombination of the existing ones.

### Overview of the \(\mathbf{T}^{\mbox{\tiny CL}}\) logic used in DEGARI 2.0

The \(\mathbf{T}^{\mbox{\tiny CL}}\) logic combines three main ingredients. The first one relies on the Description Logic (DL) of typicality \(\mathcal{ALC}+\mathbf{T_{R}}\) introduced in (Giordano, Gliozzi, Olivetti, & Pozzato, 2015), which allows one to describe the _prototype_ of a concept. In this logic, "typical" properties can be directly specified by means of a "typicality" operator \(\mathbf{T}\) enriching the underlying DL, and a TBox can contain inclusions of the form \(\mathbf{T}(C)\sqsubseteq D\) to represent that "typical \(C\)s are also \(Ds\)". As a difference with standard DLs, in the logic \(\mathcal{ALC}+\mathbf{T_{R}}\) one can consistently express exceptions and reason about defeasible inheritance as well. For instance, a knowledge base can consistently express that "normally, athletes are fit", whereas "sumo wrestlers usually are not fit" by \(\mathbf{T}(\mathit{Athlete})\sqsubseteq\mathit{Fit}\) and \(\mathbf{T}(\mathit{SumoWrestler})\sqsubseteq\neg\mathit{Fit}\), given that \(\mathit{SumoWrestler}\sqsubseteq\mathit{Athlete}\).
The semantics of \(\mathbf{T}\) is characterized by the properties of _rational logic_, recognized as the core properties of nonmonotonic reasoning. As a second ingredient, the logic \(\mathbf{T}^{\mbox{\tiny CL}}\) exploits a distributed semantics similar to that of the probabilistic DLs known as DISPONTE (Riguzzi, Bellodi, Lamma, & Zese, 2015), which allows labelling inclusions \(\mathbf{T}(C)\sqsubseteq D\) with a real number between 0.5 and 1, representing their degree of belief/probability, under the assumption that each axiom is independent of the others. As an example, we can formalize that we believe that a typical athlete is fit with degree 0.9, whereas we believe that, normally, athletes are young, but with degree 0.75, with the inclusions \(0.9~{}::~{}\mathbf{T}(\mathit{Athlete})\sqsubseteq\mathit{Fit}\) and \(0.75~{}::~{}\mathbf{T}(\mathit{Athlete})\sqsubseteq\mathit{Young}\), respectively. Degrees of belief in typicality inclusions allow defining a probability distribution over _scenarios_: roughly speaking, a scenario is obtained by choosing, for each typicality inclusion, whether it is considered as true or false. Finally, \(\mathbf{T}^{\mbox{\tiny CL}}\) employs a heuristics inspired by cognitive semantics (Hampton, 1987) for the identification of a dominance effect between the concepts to be combined: for every combination, we distinguish a HEAD, representing the stronger element of the combination, and a MODIFIER. The basic idea is: given a KB and two concepts \(C_{H}\) (HEAD) and \(C_{M}\) (MODIFIER) occurring in it, we consider only _some_ scenarios in order to define a revised knowledge base, enriched by the typical properties of the combined concept \(C\sqsubseteq C_{H}\sqcap C_{M}\). In \(\mathbf{T}^{\mbox{\tiny CL}}\), given a hybrid KB \(\mathcal{K}=\langle\mathcal{R},\mathcal{T},\mathcal{A}\rangle\) (composed of typical and standard, or rigid, assertions, i.e. assertions with and without exceptions, as derived from (Lieto, Radicioni, & Rho, 2017)) and given two concepts \(C_{H}\) and \(C_{M}\) occurring in \(\mathcal{K}\), the logic allows defining a prototype of the compound concept \(C\) as the combination of the HEAD \(C_{H}\) and the MODIFIER \(C_{M}\), where the typical properties of the form \(\mathbf{T}(C)\sqsubseteq D\) (or, equivalently, \(\mathbf{T}(C_{H}\sqcap C_{M})\sqsubseteq D\)) to ascribe to the concept \(C\) are obtained by considering blocks of scenarios with the same probability, in decreasing order starting from the highest one. All the inconsistent scenarios are first discarded; then: (1) we discard those scenarios considered as _trivial_, i.e. those consistently inheriting all the properties of the starting concepts to be combined; (2) among the remaining ones, we discard those inheriting properties from the MODIFIER that are in conflict with properties that could be consistently inherited from the HEAD; (3) if the set of scenarios of the current block is empty, i.e. all the scenarios have been discarded either because trivial or because preferring the MODIFIER, we repeat the procedure by considering the block of scenarios having the immediately lower probability. The remaining scenarios are those selected by \(\mathbf{T}^{\mbox{\tiny CL}}\). The ultimate output is a KB in \(\mathbf{T}^{\mbox{\tiny CL}}\) whose set of typicality properties is enriched by those of the combined concept \(C\). 
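The notion of scenarios and of their probabilities can be illustrated with a toy Python fragment. This is only a sketch of the DISPONTE-style semantics described above (axioms assumed independent; every scenario keeps or drops each typicality inclusion), not the actual \(\mathbf{T}^{\mbox{\tiny CL}}\) implementation, and it omits the consistency and HEAD/MODIFIER checks.

```python
from itertools import product

# Each typicality inclusion T(C) ⊑ D carries a degree of belief p in (0.5, 1].
inclusions = {
    "T(Athlete) ⊑ Fit": 0.9,
    "T(Athlete) ⊑ Young": 0.75,
}

# A scenario chooses, for every inclusion, whether it is kept (True) or dropped (False).
# Under the independence assumption its probability is the product of p (if kept)
# and 1 - p (if dropped) over all inclusions.
scenarios = []
for choice in product([True, False], repeat=len(inclusions)):
    prob = 1.0
    kept = []
    for (axiom, p), keep in zip(inclusions.items(), choice):
        prob *= p if keep else (1.0 - p)
        if keep:
            kept.append(axiom)
    scenarios.append((kept, prob))

# Scenarios can then be grouped into blocks of equal probability and inspected in
# decreasing order, as done when combining a HEAD and a MODIFIER concept.
for kept, prob in sorted(scenarios, key=lambda s: -s[1]):
    print(f"p={prob:.4f}  kept={kept}")
```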
Given a scenario \(w\) satisfying the above properties, the prototype of \(C\) is defined as the set of inclusions \(p\ ::\ \mathbf{T}(C)\sqsubseteq D\), for all \(\mathbf{T}(C)\sqsubseteq D\) that are entailed from \(w\) in the logic \(\mathbf{T}^{\mbox{\tiny CL}}\). This framework has been applied in a number of applications ranging from computational creativity (Lieto and Pozzato, 2019) to cognitive modelling (Chiodino, Lieto, Perrone, and Pozzato, 2020; Lieto, Perrone, Pozzato, and Chiodino, 2019) and intelligent multimedia, musical and artistic recommendations (Chiodino, Di Luccio, et al., 2020; Lieto et al., 2022; Lieto, Pozzato, Zoia, Patti, and Damiano, 2021). ### Emotion-based Recommendations In the context of this work, \(\mathbf{T}^{\mbox{\tiny CL}}\) is exploited by our system to generate complex emotional concepts (i.e. compound emotions, based on the combination of basic ones), and to classify items accordingly, by exploiting an ontological formalization of the circumplex theory of emotions devised by the cognitive psychologist Robert Plutchik (Plutchik, 1980, 2001)17. According to this theory, emotions, and their interconnections, can be represented on a spatial structure, a wheel (as reported on the left of Figure 6), in which the affective distance between different emotional states is a function of their radial distance. Plutchik's ontology, formalizing such a theory, encodes emotional categories in a taxonomy, representing: basic or primary emotions; complex (or compound) emotions; opposition between emotions; similarity between emotions. In particular, following Plutchik's account, complex emotions are considered as resulting from the composition of two basic emotions (where the pair of basic emotions involved in the composition is called a dyad). The compositions occurring between similar emotions (adjacent on the wheel) are called primary dyads. Pairs of less similar emotions are called secondary dyads (if the radial distance between them is 2) or tertiary dyads (if the distance is 3), while opposites cannot be combined18. An illustrative example showing the rationale used by DEGARI 2.0 to generate the compound emotions (in this case, the emotion Love as composed by the basic emotions Joy and Trust, according to Plutchik's theory) is reported in Figure 6. Footnote 17: The reasons leading to the choice of this model as the grounding element of the DEGARI 2.0 system are twofold. On the one hand, it is well-grounded in psychology and general enough to guarantee a wide coverage of emotions, thus giving the possibility of going beyond emotional classification and recommendations in terms of the standard basic emotions suggested by models like Ekman’s (widely used in computer vision and sentiment analysis tasks). This affective extension is aligned with the literature on the psychology of art suggesting that the encoding of complex emotions, such as _Pride_ and _Shame_, could give further interesting results in AI emotion-based classification and recommendation systems (Silvia, 2009). On the other hand, the Plutchik wheel of emotions is perfectly compliant with the generative model underlying the \(\mathbf{T}^{\mbox{\tiny CL}}\) logic. 
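As a rough illustration of the dyadic structure exploited by DEGARI 2.0, the short sketch below (our own simplification, not the SPICE Plutchik ontology) derives the dyad type of a pair of basic emotions from their radial distance on the wheel.

```python
# Simplified sketch of Plutchik's wheel: the eight basic emotions in circular order.
# Radial distance 1 -> primary dyad, 2 -> secondary, 3 -> tertiary, 4 -> opposites.
WHEEL = ["joy", "trust", "fear", "surprise", "sadness", "disgust", "anger", "anticipation"]

def radial_distance(e1: str, e2: str) -> int:
    i, j = WHEEL.index(e1), WHEEL.index(e2)
    d = abs(i - j)
    return min(d, len(WHEEL) - d)  # distance along the circle

def dyad_type(e1: str, e2: str) -> str:
    d = radial_distance(e1, e2)
    return {1: "primary dyad", 2: "secondary dyad", 3: "tertiary dyad",
            4: "opposites (cannot be combined)"}.get(d, "same emotion")

print(dyad_type("joy", "trust"))    # primary dyad   -> e.g. Love in Plutchik's theory
print(dyad_type("joy", "sadness"))  # opposites (cannot be combined)
```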
Footnote 18: The ontology is available here: [https://raw.githubusercontent.com/spice-h2020/SON/main/PlutchikEmotion/ontology.owl](https://raw.githubusercontent.com/spice-h2020/SON/main/PlutchikEmotion/ontology.owl) and queryable via a SPARQL endpoint at: [http://130.192.212.225/fuseki/dataset.html?tab=query&ds=/ArsEmotica-core](http://130.192.212.225/fuseki/dataset.html?tab=query&ds=/ArsEmotica-core) The lexical features associated with each basic emotion (and the corresponding probabilities) come from the NRC lexicon (Mohammad, 2018) and, in the context of DEGARI 2.0, represent the prototypical (i.e. commonsense) features characterizing emotional concepts; they are taken by the system to leverage the \(\mathbf{T}^{\mbox{\tiny CL}}\) reasoning framework and to generate the prototypical representations of the compound emotions. Once the prototypes of the compound emotions are generated, DEGARI 2.0 is able to reclassify museum items taking the new, derived emotions into account. As a consequence, such a reclassification allows the system to group and recommend museum items based on the newly assigned labels and, as mentioned, a novel prerogative of DEGARI 2.0 consists in the possibility of also delivering diversity-seeking recommendations. Figure 7 reports an example of the affective-based story aggregations provided by the system (where a _story_ consists of an aggregation of selected museum items). Here, the emotion "Hope" (highlighted in yellow) detected in the story entitled "Animals and humans" triggers the similar emotion "Pride" (highlighted in yellow) in the story "Working days". The emotion "Love" (in the first story, highlighted in bold) triggers the emotion "Love" (highlighted in bold) in the recommended story (with the same emotion) entitled "The life". Finally, the emotion "Love" (in the first story, highlighted in bold) triggers the emotion "Remorse" in the recommended story (with the opposite emotion) entitled "Terror to the GAM". The connection between the different types of emotions, and therefore between the associated stories, is provided by the Plutchik ontology exploited by DEGARI 2.0. Overall, the system tries to categorize and link the stories with respect to any of the original emotional categories found in the forming items. As anticipated, \(\mathbf{T}^{\mbox{\tiny CL}}\) is adopted in DEGARI 2.0 to automatically build the prototypical representations of the compound emotions according to Plutchik's theory, and the information about the emotional concepts and their corresponding features to combine via \(\mathbf{T}^{\mbox{\tiny CL}}\) is extracted from the NRC Emotion Intensity Lexicon (Mohammad, 2018)19. This lexicon associates words with emotional concepts in descending order of emotional intensity and, for our purposes, we considered the most intensively associated terms for each basic emotion as typical features of such emotion. Figure 6: Generation of novel Compound Emotions with DEGARI 2.0 by exploiting the Plutchik ontology (e.g. Love as composed by Joy and Trust in the picture). The features and the probabilities characterizing each basic emotion are obtained from the NRC affective lexicon. The Plutchik wheel of emotions in this figure reports only the compound emotions representing the primary dyads, but our system works on the entire spectrum of dyads. Figure 7: Example of Same, Similar and Opposite emotion for the story recommendations of DEGARI 2.0 from the GAM dataset. 
This figure shows how the system is able not only to generate new compound emotions but also to group and suggest cultural stories according to their Plutchik-based affective classification. The entire dyadic structure of Plutchik's model is exploited to recommend items and stories (i.e. collections of items) evoking different emotional stances, with the aim of providing more inclusive and affective-based interpretations of cultural content. In this way, the prototypes of the basic emotions were formed, and the \(\mathbf{T}^{\mbox{\tiny CL}}\) reasoning framework is used to generate the compound emotions. Such prototypes of basic emotions are formalized by means of a \(\mathbf{T}^{\mbox{\tiny CL}}\) knowledge base, whose TBox contains both _rigid_ inclusions of the form \[\textit{BasicEmotion}\sqsubseteq\textit{Concept},\] in order to express essential desiderata but also constraints, as an example \(\textit{Joy}\sqsubseteq\textit{PositiveEmotion}\), as well as _prototypical_ properties of the form \[p\ ::\ \mathbf{T}(\textit{BasicEmotion})\sqsubseteq\textit{TypicalConcept},\] representing typical concepts of a given emotion, where \(p\) is a real number in the range \((0.5,1]\) expressing the frequency of such a concept in items belonging to that emotion: for instance, \(0.72\ ::\ \mathbf{T}(\textit{Surprise})\sqsubseteq\textit{Delight}\) is used to express that the typical feature of being surprised contains/refers to the emotional concept _Delight_ with a frequency/probability/degree of belief of \(72\%\). Once the association of lexical features with the emotional concepts in the Plutchik ontology is obtained and the compound emotions are generated via the logic \(\mathbf{T}^{\mbox{\tiny CL}}\), the system is able to reclassify the cultural items into the newly formed emotional categories. Intuitively, an item belongs to a newly generated emotion if its metadata (name, description, title) contain all the rigid properties as well as at least \(30\%\) of the typical properties of such a derived emotion. The \(30\%\) threshold was empirically determined: i.e., it is the percentage that provides the best trade-off between over-categorization and missed categorizations (Chiodino, Di Luccio, et al., 2020). ### DEGARI 2.0 Software Modules and Architecture Overall, the system is composed of four software modules, as depicted in Figure 8. The modules adopting \(\mathbf{T}^{\mbox{\tiny CL}}\) and involved in the processes of (basic) emotion formation and (compound) emotion generation correspond to Modules 2 (Emotion combination) and 3 (Generation of combined emotion prototypes) of the architecture in the figure. Module 1 (Generation of prototypes), on the other hand, represents the entry point of the system and manages the metadata associated with each museum item. Finally, Module 4 (Recommender system) is the one devoted to grouping, reclassifying and recommending the cultural items according to the novel emotional labels created by DEGARI 2.0. In particular, the reclassification step requires matching the output of Module 1, namely matching the extracted metadata of each museum item (or the user-generated texts associated with it) with those characterizing the compound emotions generated in Modules 2 and 3. In the current version of the system, Module 1 accepts JSON files containing a textual description of the cultural items (e.g. 
coming from user-generated comments or from the museum catalogues) and performs an information extraction step generating a lemmatized version of the JSON descriptions of the cultural item and a frequentist-based extraction of the typical terms associated with each cultural item in its textual description (the assumption is that the most frequently used terms to describe an item are also the ones that are more typically associated with it). The frequencies are computed as the proportion of each term with respect to the set of all terms characterizing the item. These two tasks (lemmatization and frequency attribution) are performed by using standard libraries like the Natural Language Toolkit20 and TreeTagger21. Once this pre-processing step is done, the final representation of the cultural items is compared with the representations of the typical compound emotions obtained in Module 3. This comparison, and the corresponding classification, is done in Module 4, which implements, we recall, the following categorization heuristics: if a cultural item contains all the rigid properties and at least \(30\%\) of the typical properties of the compound emotion under consideration, then the item is classified as belonging to it. After the categorization has taken place, DEGARI is eventually able to classify and group together the items evoking the same emotions (e.g., Curiosity in Figure 9, whose JSON snippet is the element entering Module 1 of the system and triggering its entire processing until the recommendation step, in this case based on the classical "same-emotion" suggestion) or, as shown in the examples in Figure 7, aggregations of items (i.e. stories) having opposite or similar emotions. Footnote 20: [https://www.nltk.org/](https://www.nltk.org/) Footnote 21: [https://www.cis.uni-muenchen.de/-schmid/tools/TreeTagger/](https://www.cis.uni-muenchen.de/-schmid/tools/TreeTagger/) The current version of the system is available as a web service that can be invoked via standard HTTP requests and whose reasoning output is made automatically available to a queryable SPARQL endpoint. As we will show in the next section, this advancement allowed us to call the DEGARI 2.0 reasoning services and to integrate their output within a web app (called GAMGame) built to collect user data on cultural items during a museum visit. Figure 8: The software architecture of DEGARI. Module 1 represents the entry point of the system. It accepts JSON files containing a textual description of the cultural items (coming from user comments or from the museum catalogues) and performs an automatic information extraction step generating a lemmatized version of the JSON descriptions and a frequentist-based extraction of the typical terms associated with the cultural item. Modules 2 and 3 are devoted respectively i) to the acquisition of the basic Emotions to combine (Module 2) and ii) to the generation of the compound Emotions (Module 3). Module 4 is the one classifying, grouping and recommending the cultural items according to the novel generated emotions. 
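For illustration, the categorization heuristic implemented in Module 4 can be sketched in a few lines of Python. Names and data are hypothetical; only the rule itself (all rigid properties plus at least 30% of the typical properties) follows the text.

```python
def belongs_to_emotion(item_terms: set, rigid: set, typical: set, threshold: float = 0.30) -> bool:
    """DEGARI-style categorization heuristic (illustrative sketch): an item is assigned
    to a (compound) emotion if its lemmatized metadata contain all the rigid properties
    and at least `threshold` (30%) of the typical properties of that emotion."""
    if not rigid.issubset(item_terms):
        return False
    if not typical:
        return False
    overlap = len(typical & item_terms) / len(typical)
    return overlap >= threshold

# Toy example (made-up terms): typical features of a derived emotion vs. item metadata terms.
emotion_rigid = {"positiveemotion"}
emotion_typical = {"wonder", "interest", "question", "novelty", "attention"}
item_terms = {"positiveemotion", "owl", "wonder", "interest", "portrait"}

print(belongs_to_emotion(item_terms, emotion_rigid, emotion_typical))  # True (2/5 = 40% >= 30%)
```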
Figure 9: A pictorial example of the categorization pipeline used by DEGARI 2.0 for emotion attribution and content aggregation/suggestion based on the artwork “Autoritratto a forma di Gufo” (Self-portrait in the shape of an owl) from the GAM. The item is associated with a textual description coming, in this case, from the museum collection (user-generated contents are also handled via the same format). The figure shows the mechanisms, described in the text, for the extraction, classification and grouping of the museum items according to the \(\mathbf{T^{cl}}\)-generated complex emotional categories (e.g. Curiosity in the example). These depicted steps (focusing on single-item classification and grouping) represent the prerequisite for the recommendation of stories (composed of a collection of items). ## 5 Evaluation of recommendation strategies For the experiments, we relied on a selection of 56 artworks from the GAM catalogue (which contains 586 items overall) made by the curators of the Museum for inclusion in the app. The rationale behind their selection was to present the audience with a variety of subjects, styles, techniques and historical periods so as to overcome curatorial biases in the setting. The artwork metadata in the GAM catalogue contain the descriptions of each artwork including title, author, date, characters, actions, and objects. These elements were encoded as JSON-LD files, so that the resulting description of each item was compliant with the input of the DEGARI 2.0 system. ### Evaluation approach To the best of our knowledge, there is no available evaluation standard to test a system for its diversity-seeking affective recommendations (not only with reference to the deaf community). As a matter of fact, standard recommendation systems are evaluated on their ability to confirm one's own points of view. On the other hand, the purpose of a system like DEGARI 2.0 is exactly the opposite: i.e. to break the filter bubble effect by adopting an inclusive approach aiming at extending (not confirming) the typology of experienced cultural items through an affective lens, trying to include, in the user's perspectives and potential experience, also cultural items that do not directly fit their usual, expressed preferences. Given this state of affairs, we tested the capability of DEGARI 2.0 to suggest alternative, diversity-seeking, emotion-based aggregations of museum items selected by the users. Such aggregates are called _stories_, since they provide a narrative context to the chosen selection23. Footnote 23: The analysis of the recommendations based on stories represents the major difference with respect to a previous work (Lieto et al., 2022) that was, on the other hand, focused only on single-item diversity-seeking recommendations. Our evaluation has been carried out in two phases. First, during the researchers' night event (promoted by the University of Turin), involving both the community of deaf people engaged via the Turin Institute for the Deaf and non-deaf people. Then, after two weeks, a follow-up was conducted with a subset of the deaf people only (the participants of this second phase had already participated in the researchers' night experiment and were recontacted by the Institute). Overall, the experimentation was done on 83 users who created a total of 87 stories using the GAMGame web app. In particular, a group of 34 non-deaf users (8,2% female, 47,1% male, 14,7% not specified), who generated 34 stories, and a second group of 49 deaf users (55% female, 40% male, 5% not specified), who generated 52 stories, were involved in the two phases. All participants gave their written informed consent before participating in the experimental procedure, which was approved by the ethical committee of the University of Turin, in accordance with the Declaration of Helsinki (World Medical Association, 1991). 
Participants were all naive to the experimental procedure and to the overall aims of the study. In the context of our work, the evaluation of recommendation strategies was done separately from the evaluation of interaction, since it was inspired by the so-called layered evaluation approaches to recommender/adaptive systems (Brusilovsky, Karagiannidis, & Sampson, 2001; Karagiannidis & Sampson, 2000; Paramythis & Weibelzahl, 2005). According to these approaches, during the evaluation of adaptive and recommender systems, instead of evaluating the system as a whole, at least two layers should be distinguished: the interaction layer, wherein the effectiveness of the changes made at the interface is evaluated, and the content layer, wherein the accuracy of the system's inferences is evaluated. These two layers should be distinguished instead of evaluating the system as a whole because, if the recommender solutions do not improve the interaction, for instance, it is not evident whether one or both of the above layers have been unsuccessful. The effectiveness of such an approach has been demonstrated by several experimental results; see Gena (2005) for details. The following sections explore the experiments involving the content layer. ### First Experiment In the first experiment, both groups of deaf and non-deaf users were asked to evaluate the recommended stories characterized by same/similar/opposite emotions extracted by DEGARI 2.0 on the basis of the story they created. **Hypothesis**. We hypothesized that the recommendations receiving the best ratings would be the ones suggesting stories created by other users with exactly the same emotions as the stories created by the user (e.g. if the emotion extracted for a story created by a given user is Love, then we hypothesize that the recommendation of other stories eliciting Love would receive better ratings than those eliciting similar or opposite emotions). **Experimental Design**. Single-factor within-subjects design. The independent variable was the recommendation, manipulated according to three levels: the "same-emotion"-based recommendation, the "similar-emotions"-based recommendation, and the "opposite-emotions"-based recommendation. Figure 10: Google Form for user registration to the GAMGame web app during the UNITO researchers' night. On the left, the Google Form with the anonymous data (here, gender and age group); on the right, the link to the GAMGame and the home page (here, for desktop screen) of the GAMGame. **Participants**. This first evaluation involved 34 non-deaf users (8,2% female, 47,1% male, 14,7% not specified) and 40 deaf users (45% female, 50% male, 5% not specified). The two groups generated 35 and 34 stories respectively. Participants were selected by involving the community of d/Deaf people engaged via the Turin Institute for the Deaf, using an availability sampling strategy. The reported evaluation focuses on the acceptability of the received inclusive, affective-based recommendations. **Procedure**. The experiment aimed at measuring the satisfaction of the potential users of the GAMGame web app when exposed to the suggestions of the novel categories provided by DEGARI 2.0. It consisted of a user study24 where participants, after having provided their own stories in the GAMGame app, were exposed to a number of affective-based recommendations based on their original selection. 
The participants registered via Google Forms; the registration ended with the generation of an anonymous id for the test session. For each (anonymous) user, in the registration phase, we asked to provide the following information: gender, age group, and relationship with art and with museums. Once the registration phase was completed (Figure 10), users were redirected to the GAMGame web app, where they could start the creation of their stories. After completing the story, each participant received three types of recommended stories: stories featuring artworks with the same emotions, stories featuring artworks with similar emotions, and stories featuring artworks with opposite emotions. Footnote 24: This is one of the most commonly used methodologies for the evaluation of recommender systems, based on controlled small-group analysis, see (Shani & Gunawardana, 2011). **Apparatus**. The participants used their own devices to fill out the anonymous registration procedure and carry out the story creation task, in compliance with the practice known as Bring Your Own Device, well established also in the field of museum applications (Ballagas, Rohs, Sheridan, & Borchers, 2004). **Material**. At the end of the interaction, participants were asked to compile an online questionnaire about the received suggestions. Here, they had to rate, on a 10-point scale (from 1 to 10, 1 being the lowest and 10 the highest rating), the received recommendations based on the "same-emotion", "similar-emotions" and "opposite-emotions" categories. The instructions to the community of d/Deaf participants were provided and supervised by a professional translator of the Institute for the Deaf (Figure 11 shows a frame of this experiment), who translated questions, feedback, and comments from Italian to LIS (Lingua Italiana dei Segni, Italian Sign Language) and vice versa. #### 5.2.1 Results and Discussion for the Inclusive Recommendations, First Experiment Overall, the two groups rated 53 recommendations. These rating results are shown in Table 2(a) for the non-deaf participants, who gave their assessment on a set of 24 recommended stories, and in Table 2(b) for the d/Deaf participants, who gave their assessment on a set of 29 recommended stories. After the interaction with the GAMGame web app, and after having been exposed to the affective-based, diversity-seeking recommendations of the DEGARI engine, the d/Deaf and non-deaf participants were asked to evaluate the recommended stories characterized by same/similar/opposite emotions extracted by DEGARI on the basis of the story they created. The overall results about the ratings are shown in Table 2(a) for non-deaf participants and Table 2(b) for deaf participants. Table 2 shows the mean, median and standard deviation values for each emotion recommendation group (same, similar and opposite emotions). It is possible to note that, contrary to our original hypothesis and expectations, both groups (deaf and non-deaf participants) manifest a major preference for the suggestions of stories that generate similar emotions: deaf users (Table 2(b)) have an average rating on similar emotions of 6.66, while the group of non-deaf users (Table 2(a)) reaches an average of 7.46. Concerning the recommendation of stories with opposite emotions, on the contrary, we observe that, while non-deaf users give higher ratings to stories with opposite emotions than to those with the same emotions, deaf users give higher ratings to stories with the same emotions than to those with opposite emotions. 
Interestingly enough, the reported data about the story-based affective recommendations for the deaf users confirm the findings reported in a previous experimentation involving only single-item recommendations, see (Lieto et al., 2022), page 11, Figure b. For the statistical comparison (shown in Table 3) of the rating groups of Sample 1 (ratings for deaf users) and Sample 2 (ratings for non-deaf users), we used the Mann-Whitney statistic. Specifically, for Sample 1 (ratings on the same emotion for deaf users) and Sample 2 (ratings on the same emotion for non-deaf users) we collected 22 ratings and 31 ratings respectively, thus obtaining a \(U-value=277.5\), a \(Z-score=1.46032\) and a \(p-value=0.1443\). The results are not statistically significant at \(p<0.05\). For Sample 1 (evaluation of deaf users on similar emotions) and Sample 2 (evaluation of non-deaf users), we obtained a \(U-value=342.5\), a \(Z-score=0.60789\) and a \(p-value=0.54186\). The results are not statistically significant at \(p<0.05\). Finally, for the opposite emotion of Sample 1 (deaf users' evaluation of the opposite emotion) and Sample 2 (non-deaf users' evaluation), we obtained a \(U-value=234\), a \(Z-score=2.06853\) and a \(p-value=0.03846\). The results are statistically significant at \(p<0.05\), confirming that the two groups actually differ in relation to the recommendation of stories with opposite emotions. Figure 11: A professional translator from the Istituto dei Sordi explains to the participants how to use the app for creating their stories. The recommendations that received the best ratings were the ones suggesting stories linked to the original one through the property "similar emotion". The recommendations of stories evoking opposite emotions with respect to the original story created in the game (for deaf users) were the ones that received the worst ratings. In particular, this latter datum suggests that there are mechanisms of cognitive resistance that prevent a full acceptance of suggestions going in a different direction from one's own preferences. This datum also suggests that a first guideline that can be extracted for the improvement of diversity-seeking affective recommenders concerns the opportunity to adopt presentation devices for the **mitigation of cognitive resistance effects**. Although the search for mitigation measures that wrap diversity into some meaning frame is an open research area, the effectiveness of narrative formats (Damiano, Lombardo, Lieto, & Borra, 2016; Wolff, Mulholland, & Collins, 2012) and of ethically-driven digital nudging techniques (Augello, Citta, Gentile, & Lieto, 2021; Gena, Grillo, Lieto, Mattutino, & Vernero, 2019) is worth exploring. A more immediate strategy that could be adopted in our system is the progressive recommendation of items evoking emotions that are gradually more distant from the starting one (where the distance can still rely on the radial structure of the Plutchik wheel encoded in the ontology). ### Second Experiment The second experiment involved only a subset of the deaf people who had participated in the previous experiment and were contacted by the Turin Institute for the Deaf two weeks after the Researchers' Night. Here we wanted to evaluate whether a repeated exposure to diversity-seeking affective recommendations (i.e. the ones based only on similar and opposite emotions) would lead to any variation in the assigned ratings. **Hypothesis**. 
We hypothesized that the devised framework (formed by the GAMGame interface plus the devised sensemaking engine) presents an overall layout that, after a repeated exposure to the tool, favours/increases the willingness to explore stories addressing multiple and diverse viewpoints within the museum exhibition. More specifically, we hypothesize that a repeated interaction with the DEGARI 2.0 story recommendation system would lead to increased preferences (compared to the first interaction) for the recommendations of stories based on similar and opposite emotions. \begin{table} \begin{tabular}{l r r r} \hline \hline \multicolumn{4}{l}{**Rating on recommended stories by the non-deaf users**} \\ & **Rating on same emotion** & **Rating on similar emotion** & **Rating on opposite emotion** \\ Mean & 6,0208 & 7,4583 & 7,3125 \\ Dev. Standard & 2,3335 & 1,8233 & 1,8871 \\ Median & 7 & 8 & 8 \\ \multicolumn{4}{c}{(a)} \\ \hline \multicolumn{4}{l}{**Rating on recommended stories by the d/Deaf users**} \\ Mean & 5,7414 & 6,6638 & 5,4483 \\ Dev. Standard & 2,4445 & 2,4406 & 2,6937 \\ Median & 6 & 7 & 5 \\ \multicolumn{4}{c}{(b)} \\ \hline \hline \end{tabular} \end{table} Table 2: (a) Ratings of the recommended stories by the non-deaf participants and (b) by the d/Deaf participants. **Experimental Design**. The same methods as in the first experiment were used for the second one. Single-factor within-subjects design. The independent variable was the recommendation, manipulated according to two levels: the "similar-emotions"-based recommendation and the "opposite-emotions"-based recommendation. **Participants**. This second evaluation involved only deaf users: 9 users (5 females, 4 males), engaged via the Turin Institute for the Deaf, accepted to participate in the second experiment and generated 18 stories. **Procedure**. The users were asked to create stories avoiding the selection of the same items already chosen in the first experiment (with the aim of not being re-exposed to the same recommendations already received). Once logged in, in fact, users could see their own previous stories in the My stories section of the GAMGame. The analysis conducted in this experiment can, therefore, be considered as a sort of repeated session where we recorded and compared the ratings assigned to the recommendations with respect to the previously obtained ones. **Apparatus**. As for the first experiment, the participants used their own devices to fill out the anonymous registration procedure and carry out the story creation task. **Material**. As for the first experiment, at the end of the interaction, participants were asked to compile an online questionnaire about the received suggestions. 
Here, they had to rate, on a 10-point scale (from 1 to 10, 1 being the lowest and 10 the highest rating), the received recommendations based on the "similar-emotions" and "opposite-emotions" categories. The instructions to the community of d/Deaf participants were provided and supervised by a professional translator of the Institute for the Deaf, who translated questions, feedback, and comments from Italian to LIS (Lingua Italiana dei Segni, Italian Sign Language) and vice versa. \begin{table} \begin{tabular}{l c c c c c l} \hline \hline & \(n\) (deaf) & \(n\) (non-deaf) & \(U\)-value & \(Z\)-score & \(p\)-value & significance \\ \hline Rating on same emotion & 22 & 31 & 277.5 & 1.46032 & 0.1443 & not significant at \(p<0.05\) \\ Rating on similar emotion & 22 & 31 & 342.5 & 0.60789 & 0.54186 & not significant at \(p<0.05\) \\ Rating on opposite emotion & 22 & 31 & 234 & 2.06853 & 0.03846 & significant at \(p<0.05\) \\ \hline \hline \end{tabular} \end{table} Table 3: Statistical comparison of the rating groups for Sample 1 (ratings for deaf users) and Sample 2 (ratings for non-deaf users) on same/similar/opposite emotions using the Mann-Whitney statistic. #### 5.3.1 Results and Discussions for Inclusive Recommendations, Second Experiment In total, 18 brand-new stories were created and received diversity-seeking recommendations based on stories provided by other users. The main figure emerging from this experiment is reported in Table 4, which compares the average rating assigned by the deaf users for both recommendation types. For both types of recommendations, an increased preference for the received recommendations emerges, thus suggesting that repeated exposure to novel points of view can improve the willingness to widen one's own view. At time \(t_{1}\), in fact, we observe that the average rating assigned to the recommended stories with similar emotions grows (from 6.664 at \(t_{0}\) to 7.278 at \(t_{1}\)), overcoming the ratings assigned to stories with the same emotions, as in the group of non-deaf users in the First Experiment (with the same trend observable for the ratings assigned to the recommendations based on opposite emotions). As a consequence of this state of affairs, the starting hypothesis guiding this second experiment was confirmed. Indeed, since (in both experiments) the logic underlying the distribution of the affective-based story recommendations and the actual interface used (i.e. 
the GAMGame) did not vary, and since the evaluation of recommendations was constrained - by design - to start from stories that did not represent the original "first choice" of the users (we remind that they were asked to create brand-new stories considering different items with respect to the ones selected in the first experiment, see Procedure)25, we arguably attributed the change in the obtained ratings to the effect of a repeated exposure to the overall framework, which explores alternative viewpoints conveyed by stories evoking diverse emotions. Footnote 25: This represents an even more challenging evaluation setup compared to the first evaluation, since the users were, arguably, less inclined to provide higher ratings for collections that do not elicit their original preferred emotional setting. This finding is relevant since it suggests that, after a first application of cognitive resistance mechanisms (based on the preservation of one's points of view), the mechanisms of diversity-seeking recommendations seem to be more accepted once the receivers have already been exposed to this type of suggestion. We are not aware of similar results in the context of diversity-seeking recommendations, but these findings are in line with those coming from research on the depolarization of echo chambers in social media, which shows how repeated exposure to alternative viewpoints, obtained with techniques like random dynamical nudging, tends to lead people to converge towards less extreme and more inclusive viewpoints (Currin, Vera, & Khaledi-Nasab, 2022). Finally, it is worth observing that, apart from the individual use by museum visitors, the proposed recommendation system can also be used as a sense-making tool by museum educators and curators in educational activities, where similarities and oppositions can be a relevant tool for guided discussion and comparison in group settings. ### Limitations The overall experimentation faces some limitations. First, as mentioned, the sample of users that we managed to involve does not allow us to make statistical inferences, but only a qualitative analysis of the obtained results. This limitation is largely due to the difficulty of interacting with such a community without the direct intervention of professional experts and Sign Language translators; despite it, the number of deaf people involved in the experiment is larger than in typical user studies involving people affected by such a disability. For example, Mack et al. recruited 7 participants for interviews on the use of social web apps, and were able to reach a larger sample (60 participants) only online. In (Mahajan et al., 2022), 5 participants were interviewed. Second, the experiments reported in Section 5 have been conducted, differently from the ones reported in (Lieto et al., 2022) on single-item recommendations, outside the museum. As reported, the overall ratings of the first experiment do not seem to suffer from this difference, suggesting that our system can be used both during a museum visit and in a pre/post-visit condition (e.g. for schools). The evaluation described in Section 3, instead, took place on the premises of the museum, but after the visit, since it required a fixed apparatus. However, it is necessary to notice that the GAMGame has not been designed to be employed exclusively during the visit. 
On the contrary, its design, with its strong visual content and simple, straightforward activities conceived for inclusion, is suitable for use anywhere, thus contributing to breaking down the barriers between the museum and the outside world and bringing it within reach of a larger number of communities. ## 6 Conclusions and Future Work In this paper, we described the use of a novel sensemaking tool for enhancing reflection on cultural items through emotional diversity. The system was integrated and evaluated within the context of a Citizen Curation environment. Targeted at the inclusion of the d/Deaf community, this environment allows the users to express their personal interpretations of artworks by creating and sharing simple stories from a collection of museum artworks. By leveraging the annotations added by the users to the artworks in the stories, the system allows exploring the repository of stories. \begin{table} \begin{tabular}{l r r} \hline \multicolumn{3}{c}{_Time_ \(t_{0}\)} \\ \hline & **Rating similar emotion** & **Rating opposite emotion** \\ Mean & 6,664 & 5,448 \\ Dev. Standard & 2,441 & 2,694 \\ Median & 7 & 5 \\ \hline \multicolumn{3}{c}{_Time_ \(t_{1}\)} \\ \hline Mean & 7,278 & 7,056 \\ Dev. Standard & 1,965 & 1,731 \\ Median & 7 & 7 \\ \hline \end{tabular} \end{table} Table 4: Ratings from the Deaf participants in Experiment 2 at time \(t_{0}\) and \(t_{1}\).
2305.18404
Conformal Prediction with Large Language Models for Multi-Choice Question Answering
As large language models continue to be widely developed, robust uncertainty quantification techniques will become crucial for their safe deployment in high-stakes scenarios. In this work, we explore how conformal prediction can be used to provide uncertainty quantification in language models for the specific task of multiple-choice question-answering. We find that the uncertainty estimates from conformal prediction are tightly correlated with prediction accuracy. This observation can be useful for downstream applications such as selective classification and filtering out low-quality predictions. We also investigate the exchangeability assumption required by conformal prediction to out-of-subject questions, which may be a more realistic scenario for many practical applications. Our work contributes towards more trustworthy and reliable usage of large language models in safety-critical situations, where robust guarantees of error rate are required.
Bhawesh Kumar, Charlie Lu, Gauri Gupta, Anil Palepu, David Bellamy, Ramesh Raskar, Andrew Beam
2023-05-28T15:26:10Z
http://arxiv.org/abs/2305.18404v3
# Conformal Prediction with Large Language Models for Multi-Choice Question Answering ###### Abstract As large language models are widely developed, robust uncertainty quantification techniques will become crucial for safe deployment in high-stakes scenarios. This work explores how conformal prediction can quantify uncertainty in language models for multiple-choice question-answering. We find that the uncertainty estimates from conformal prediction are tightly correlated with prediction accuracy. This observation can be helpful in downstream applications such as selective classification and filtering out low-quality predictions. We also investigate how the exchangeability assumption required by conformal prediction fares on out-of-subject questions, which may be a more realistic scenario for many practical applications. Our work contributes towards more trustworthy and reliable usage of large language models in safety-critical situations, where robust guarantees of error rate are required. ## 1 Introduction Large language models (LLMs) have recently achieved impressive performance on a number of NLP tasks, such as machine translation, text summarization, and code generation. However, lingering concerns of trust and bias still limit their widespread application for critical decision-making domains such as healthcare. One well-known issue with current LLMs is their tendency to "hallucinate" false information with seemingly high confidence. These hallucinations can occur when the model generates outputs not grounded in any factual basis or when the prompt is highly unusual or ambiguous. This behavior of LLMs may also result from how these models are trained -- using statistical sampling for next-token prediction -- which can progressively increase the likelihood of factual errors as the length of generated tokens increases (LeCun, 2023). Factually incorrect outputs may confuse and deceive users into drawing wrong conclusions, ultimately decreasing the overall system's trustworthiness. Decisions based on unpredictable or biased model behavior could have significant negative and socially harmful consequences in high-stakes domains such as healthcare and law. Therefore, we seek to explore principled uncertainty quantification (UQ) techniques for LLMs that can provide guaranteed error rates of model predictions. Ideally, these UQ techniques should be model agnostic and easy to implement without requiring model retraining, due to the intensive computing costs and limited API access associated with many LLMs. To this end, we investigate _conformal prediction_, a distribution-free UQ framework, to provide uncertainty quantification for LLMs on the task of multiple-choice question-answering (MCQA). Based on our experiments, we find the uncertainty, as provided by conformal prediction, to be strongly correlated with accuracy, enabling applications such as filtering out low-quality predictions to prevent a degraded user experience. We also verify the importance of the exchangeability assumption in conformal prediction (see section 2) for guaranteeing a user-specified level of errors. To summarize, our contributions are the following: * we adapt conformal prediction for MCQA tasks to provide distribution-free uncertainty quantification in LLMs, * show how the uncertainty provided by conformal prediction can be useful for downstream tasks such as selective classification, * and assess the performance of conformal prediction when the exchangeability assumption is violated for in-context learning in LLMs. 
## 2 Conformal Prediction Uncertainty quantification (UQ) techniques are critical to deploying machine learning in domains such as healthcare (Bhatt et al., 2021; Kompa et al., 2021; Ravi et al., 2022). Conformal prediction (Gammerman et al., 2013; Vovk et al., 2022) is a flexible and statistically robust approach to uncertainty quantification. Informally, the central intuition behind conformal prediction is to output a set of predictions containing the correct output with a user-specified probability. By providing a more nuanced understanding of the model's confidence and a statistically robust coverage guarantee, conformal prediction paves the way for improved and more reliable applications of machine learning models across various domains (Kumar et al., 2022). **Prediction sets.** Formally, let \(\mathcal{C}:\mathcal{X}\to 2^{\mathcal{Y}}\) be a set-valued function that generates prediction sets over the powerset of \(\mathcal{Y}\) given an input \(X\). This prediction set naturally encodes the model's uncertainty about any particular input through the **size** of the prediction set. Expressing uncertainty as the set size is an intuitive output that can be helpful in decision-making contexts (Babbar et al., 2022). For example, in medical diagnosis, the concept of a prediction set is similar to a differential diagnosis, where only likely and plausible conditions are considered given the observed symptoms of a patient (Lu et al., 2022). Indeed, conformal prediction has been utilized for uncertainty quantification in healthcare applications such as medical imaging analysis (Lu et al., 2022; Ravi et al., 2022; Lu and Kalpathy-Cramer, 2022). **Coverage guarantee.** Conformal methods generate prediction sets that ensure a certain user-specified probability of containing the actual label, regardless of the underlying model or distribution. This guarantee is achieved without direct access to, or modification of, the model's training process and only requires a held-out calibration and inference dataset. This makes conformal prediction well-suited to LLM applications where retraining is costly and direct model access is unavailable through third-party or commercial APIs. The coverage guarantee states that the prediction sets obtained by conformal prediction should contain the true answer on average at a user-specified _level_, \(\alpha\). This property is called _coverage_, and the corresponding coverage guarantee is defined as: \[1-\alpha\leq\mathbf{P}\left(Y_{\text{test}}\in\,\mathcal{C}(X_{\text{test}}) \right), \tag{1}\] where \(\alpha\in(0,1)\) is the desired error rate, and \(\mathcal{C}\) is the calibrated prediction set introduced above. \((X_{\text{test}},Y_{\text{test}})\sim\mathcal{D}_{\text{calibration}}\) is an unseen test point that is drawn from the same distribution as the data used to calibrate the prediction sets. **Conformal Calibration Procedure.** As previously mentioned, conformal prediction only needs the scores of a model to calibrate and construct the prediction sets. We now describe how to calibrate the prediction sets for a specific score function. Let \(f:\mathcal{X}\rightarrow\Delta^{|\mathcal{Y}|}\) be a classifier with a softmax score, where \(\Delta\) is a \(|\mathcal{Y}|\)-dimensional probability simplex. A common choice for the score function, _least ambiguous set-valued classifiers_ (LAC) (Sadinle et al., 2019), is defined as \[S(X,Y)=1-\left[f(X)\right]_{Y}, \tag{2}\] where \(\left[f(X)\right]_{Y}\) is the softmax score at the index of the true class. 
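As a concrete illustration of the LAC score, the short NumPy sketch below computes \(S(X,Y)\) on toy softmax outputs; it is our own illustration and does not reproduce the authors' released code.

```python
import numpy as np

def lac_scores(softmax_probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """LAC conformal score s = 1 - softmax probability of the true class.
    softmax_probs: (n, num_classes) array; labels: (n,) integer array."""
    return 1.0 - softmax_probs[np.arange(len(labels)), labels]

# Toy calibration data: 3 questions with 4 answer options each (A=0, ..., D=3).
probs = np.array([[0.70, 0.10, 0.15, 0.05],
                  [0.25, 0.40, 0.20, 0.15],
                  [0.05, 0.05, 0.10, 0.80]])
labels = np.array([0, 2, 3])          # indices of the correct options
print(lac_scores(probs, labels))      # [0.30, 0.80, 0.20] – higher score = less conforming
```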
To calibrate the prediction sets to our desired level of coverage, we need to estimate a threshold \(\hat{q}_{\alpha}\) that is the \(1-\alpha\) quantile of the calibration scores \[\hat{q}_{\alpha}=\text{Quantile}\left(\{s_{1},\ldots,s_{n}\},\frac{\lceil(n+1)(1-\alpha)\rceil}{n}\right), \tag{3}\] where \(\{s_{1},\ldots,s_{n}\}\) are the LAC scores of the calibration set. At inference time, prediction sets can be constructed in the following manner: \[\mathcal{C}(X)=\left\{y\in\mathcal{Y}:S(X,y)\leq\hat{q}_{\alpha}\right\}. \tag{4}\] **Exchangeability assumption.** Conformal prediction assumes that the data used to calibrate the prediction sets is exchangeable with the test data at inference time. If this assumption holds, the coverage guarantee, as stated in Equation 1, will hold, and the resulting prediction sets will have the desired error rate. Exchangeability can be viewed as weaker than the independent and identically distributed (IID) assumption (Bernardo, 1996). This assumption is often made in machine learning with regard to the training, validation, and test sets. The threshold used to determine the size of the prediction set is estimated on a held-out calibration data set that is assumed to be _exchangeable_ with the test distribution. ## 3 Prompt Engineering In this paper, we focus on the task of multiple-choice question answering (MCQA) and frame MCQA as a supervised classification task, where the objective is to predict the correct answer choice out of four possible options. We wish to quantify the model uncertainty over the predicted output using conformal prediction. We condition each option choice (A, B, C, and D) on the prompt and question and use the LLaMA-13B model (Touvron et al., 2023) to generate the logit corresponding to each multiple-choice answer. We normalize the four logits using the softmax to obtain valid probabilities for each option. **One-shot prompting.** LLMs are very sensitive to the exact input prompt, which has motivated a whole field of in-context learning and prompt engineering or prompt tuning (Zhou et al., 2023; Wei et al., 2023). In-context learning refers to the ability of LLMs to understand and make predictions based on the context in which the input data is presented, without updating the model weights. Prompt engineering methods vary significantly among tasks and require heavy experimentation and reliance on hand-crafted heuristics. For the current setup, model performance on classification tasks is often sensitive to the prompts used. Thus, we experiment with several prompting strategies before finalizing our prompts. We use one-shot prompting by including one context example. For each subject, we use a slightly different prompt. For example, we prompt the model to assume it is the "world's best expert in college chemistry" when generating predictions for college chemistry questions. We also use ten different prompts for each subject to generate ten softmax probability outputs to reduce variance. We obtain the final probability outputs for a question by averaging the softmax outputs corresponding to these ten prompts. The ten prompts for a given subject only vary in terms of the one-shot question. A sample prompt for high school biology is provided below:

This is a question from high school biology. A piece of potato is dropped into a beaker of pure water. Which of the following describes the activity after the potato is immersed into the water? (A) Water moves from the potato into the surrounding water. (B) Water moves from the surrounding water into the potato. (C) Potato cells plasmolyze. (D) Solutes in the water move into the potato. The correct answer is option B.

You are the world's best expert in high school biology. Reason step-by-step and answer the following question. From the solubility rules, which of the following is true? (A) All chlorides, bromides, and iodides are soluble (B) All sulfates are soluble (C) All hydroxides are soluble (D) All ammonium-containing compounds are soluble. The correct answer is option:

Figure 1: **LLaMA MCQA accuracy is similar for GPT-4 generated questions and real MMLU questions across subjects.** For most MMLU subjects, prediction accuracy using one-shot GPT-4 generated questions is similar to when actual MMLU questions are used in one-shot prompts. Results are averaged over ten randomly selected one-shot GPT-4 and MMLU prompts.

Figure 2: **The accuracy distribution across subjects for ten prompts.** We plot the distribution of accuracy for ten different one-shot prompts. 
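The calibration and set-construction steps of Equations 3 and 4 can be sketched with the standard split-conformal recipe; the snippet below is an illustrative NumPy implementation (with randomly generated stand-in data), not the code released with the paper.

```python
import numpy as np

def calibrate_threshold(cal_scores: np.ndarray, alpha: float) -> float:
    """q_hat: the ceil((n+1)(1-alpha))/n empirical quantile of the calibration LAC scores (Eq. 3)."""
    n = len(cal_scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return float(np.quantile(cal_scores, level, method="higher"))

def prediction_set(softmax_row: np.ndarray, q_hat: float) -> list:
    """All options whose score 1 - p(y) is at most the threshold (Eq. 4)."""
    return [y for y, p in enumerate(softmax_row) if 1.0 - p <= q_hat]

# Usage sketch: calibrate on held-out (probs, labels), then build sets for new questions.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(4), size=200)   # stand-in for averaged LLM softmax outputs
cal_labels = rng.integers(0, 4, size=200)
cal_scores = 1.0 - cal_probs[np.arange(200), cal_labels]

q_hat = calibrate_threshold(cal_scores, alpha=0.1)
print(prediction_set(np.array([0.55, 0.25, 0.15, 0.05]), q_hat))
```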
**GPT-4 generated examples.** We explore two approaches for the one-shot example in the prompts: (1) the one-shot example is one of the questions in the MMLU dataset for that subject; we then exclude this specific question when generating predictions with the resulting prompt. (2) We use GPT-4 to generate multiple-choice questions for each subject. We then cross-check the questions and answers produced by GPT-4 for correctness and select ten correct question-answer pairs. We use the following prompt to generate MCQs for clinical knowledge from GPT-4: "_Give me 15 multiple choice questions on clinical knowledge with answers_". Specific questions and answers generated by GPT-4 are available from our code (refer to Section 4.4). We have also included a subset of sample GPT-4 generated questions and answers, as well as MMLU-based questions and answers, in the Appendix (A.1). We generate MCQs for other subjects using similar prompts. GPT-4-based one-shot questions produce more accurate answers than MMLU-based questions, as shown in Figure 1. After controlling for the size of the prompts (limited to 700 tokens), we find that MMLU-based and GPT-4-based one-shot questions produce similar accuracy on the sixteen subjects we evaluate. We conduct all the following experiments on prompts that use GPT-4-based one-shot questions since they are shorter on average and achieve similar performance. ## 4 Experiments ### Model and dataset We use the LLaMA-13B model (Touvron et al., 2023) to generate predictions for MCQA. LLaMA-13B is an open-source 13 billion parameter model trained on 1 trillion tokens and has been shown to achieve good zero-shot performance on various question-answering benchmarks. For our dataset, we use the MMLU benchmark (Hendrycks et al., 2021), which contains MCQA questions from 57 domains covering subjects such as STEM, humanities, and medicine. For our experiments, we considered the following subset of MMLU: computer security, high school computer science, college computer science, machine learning, formal logic, high school biology, anatomy, clinical knowledge, college medicine, professional medicine, college chemistry, marketing, public relations, management, business ethics, and professional accounting. We group these domains into three broad categories: "business", "medicine", and "computer science". These 16 subjects represent diverse domains and have sufficient samples (each with at least 100 questions). 
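The per-option scoring described above can be sketched with a Hugging Face-style causal LM interface. The snippet below is an assumption-laden illustration: the checkpoint name, the tokenizer behaviour for the option letters, and the prompt strings are placeholders and may differ from the authors' actual setup.

```python
# Illustrative sketch (assumes a Hugging Face-style causal LM; not the authors' exact code):
# score the next-token logits for the option letters and softmax them into option probabilities.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-13b"   # hypothetical checkpoint name
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

OPTION_IDS = [tok.encode(letter, add_special_tokens=False)[0] for letter in ["A", "B", "C", "D"]]

@torch.no_grad()
def option_probs(prompt: str) -> torch.Tensor:
    """Return a length-4 probability vector over the answer options for one prompt."""
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    logits = model(**inputs).logits[0, -1]          # next-token logits after the prompt
    return torch.softmax(logits[OPTION_IDS], dim=-1)

def averaged_option_probs(prompts: list) -> torch.Tensor:
    """Average the option probabilities over the ten one-shot prompt variants."""
    return torch.stack([option_probs(p) for p in prompts]).mean(dim=0)
```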
We perform classification by obtaining logit scores corresponding to option choices 'A', 'B', 'C', and 'D' conditioned on the one-shot prompt and the question. For example, for the sample prompt and question pair described in section 3, we find the logit score corresponding to the next token for each of the four options. We then take the softmax over the logit scores corresponding to the option choices to obtain probability scores. The softmax scores corresponding to ten different prompts (that vary in terms of one-shot questions) are averaged to obtain final probability scores for each question-option pair.

Figure 3: **Desired coverage is achieved for all subjects.** The red dashed line shows the desired coverage rate (specified at \(\alpha=0.1\)), which conformal prediction guarantees to be met at least \(1-\alpha\) percent of the time. The colors denote the three categories of questions.

Figure 4: **Uncertainty quantification using prediction set size.** In conformal prediction, a set of predictions is generated for each question. The size of this set indicates how uncertain the model is for a particular question. Larger set sizes denote greater uncertainty, and smaller set sizes denote less uncertainty. The colors denote the three categories of questions.

### Setup We randomly split the data into equal-sized calibration and evaluation sets for each subject and averaged results over 100 random trials for our conformal prediction experiments. For each trial, we randomly sample \(50\%\) of the data for calibration and \(50\%\) to evaluate coverage and set size. Thus, we have at least 50 samples for calibration. While the theoretical guarantee of conformal prediction holds on average even for such a small number of calibration samples, the individual 100 random trials may not always have exact coverage. A higher calibration size can reduce the variance in coverage associated with the different random trials (Angelopoulos & Bates, 2021). Subjects that are harder for the model show greater uncertainty (larger prediction sets) on average, while "easier" subjects such as marketing have the lower average uncertainty. We show more results for different \(\alpha\) values in Table 1. **Selective classification with conformal prediction.** The conformal prediction framework can also be used for selective classification (Angelopoulos et al., 2022; Angelopoulos and Bates, 2021). In Figure 5, we analyze the correlation between uncertainty (as measured by conformal prediction) and top-1 accuracy. Specifically, we look at top-1 accuracy across subjects stratified by the size of the prediction set outputted by conformal prediction. We find a robust negative correlation between set size and top-1 accuracy for all subjects. This is intuitive, as models with low confidence scores should correspond to less accurate predictions. The accuracy for prediction sets with only one prediction is significantly higher than naive top-1 accuracy, as shown in Figure 7 (refer to the \(k=1\) accuracy). Thus, our results demonstrate that the set size obtained from the conformal prediction procedure can filter low-quality predictions in downstream applications for LLMs. For example, highly uncertain predictions in a disease screening application should be flagged for manual review and not shown to the user. **Size-stratified coverage and comparison with naive top-\(k\) prediction sets.** Size-stratified coverage measures the error-rate guarantee across prediction sets of different sizes (Angelopoulos et al., 2022). 
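The selective-classification use of prediction sets discussed above can be summarised in a short sketch: report accuracy stratified by set size and keep only the questions whose conformal set is small enough to answer automatically. This is illustrative code, not the paper's released implementation.

```python
import numpy as np
from collections import defaultdict

def accuracy_by_set_size(pred_sets: list, top1_preds: np.ndarray, labels: np.ndarray) -> dict:
    """Top-1 accuracy stratified by conformal prediction-set size."""
    correct, total = defaultdict(int), defaultdict(int)
    for s, pred, y in zip(pred_sets, top1_preds, labels):
        k = len(s)
        total[k] += 1
        correct[k] += int(pred == y)
    return {k: correct[k] / total[k] for k in sorted(total)}

def selective_filter(pred_sets: list, max_size: int = 1) -> np.ndarray:
    """Indices of questions confident enough to answer automatically (small sets);
    the remaining questions can be flagged for manual review."""
    return np.array([i for i, s in enumerate(pred_sets) if len(s) <= max_size])

# Usage sketch on toy data: three questions with prediction sets of size 1, 2 and 1.
sets = [[0], [1, 2], [3]]
print(accuracy_by_set_size(sets, np.array([0, 1, 3]), np.array([0, 2, 3])))  # {1: 1.0, 2: 0.0}
print(selective_filter(sets))                                                # [0 2]
```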
This experiment shows that coverage is not trivially satisfied by naively forming prediction sets from the top-\(k\) highest softmax probabilities. In Figure 7, we show the coverage when all prediction sets have a fixed set size and find that coverage decreases sharply with size. This is in contrast to prediction sets formed by conformal prediction in Figure 6, where we find that even prediction sets of size one have close to the desired level of coverage (\(90\%\) when \(\alpha=0.1\)) across most subjects. Indeed, we found that coverage is consistent over all set sizes for conformal prediction. Conformal prediction can be thought of as outputting "adaptive" prediction sets that try to attain the proper level of coverage (depending on the chosen error rate \(\alpha\)) instead of "fixed" prediction sets of size \(k\).
Figure 7: **Coverage of naive top-\(k\) prediction sets.** Coverage sharply falls off at smaller set sizes for naive prediction sets constructed by simply taking the top-\(k\) softmax scores for all predictions.
**Exchangeability assumption across subjects.** In Figure 8, we test the exchangeability assumption between subjects by calibrating on one subject and evaluating coverage on a different subject, grouped into three categories of subjects. Recall that the exchangeability assumption is needed for the coverage guarantee of Equation 1 to hold. On the main diagonal, where the prediction sets are calibrated and evaluated on the same subject, we observed little deviation from the desired coverage rate of \(90\%\). For example, prediction sets calibrated and evaluated on the same subject had close to the desired error rate of \(10\%\) when \(\alpha=0.1\). On the off-diagonal, we can see significant disparities between some subjects. For example, when prediction sets are calibrated on MCQA data from "high school computer science" and evaluated on "business ethics", coverage is only around \(83\%\), less than the desired \(90\%\) coverage. However, for subjects from similar domains and with similar accuracy, such as "clinical knowledge", "anatomy", and "high school biology", we find smaller deviations from the targeted coverage rate when calibrated on out-of-subject data. This may result from good generalization capabilities and the relatively well-calibrated softmax probabilities output by the LLMs (Kadavath et al., 2022). ### Code Availability We release the code at this **GitHub repository**. The code repository also contains the question-answer pairs generated by GPT-4 for our prompts. ## 5 Discussion As Large Language Models (LLMs) become increasingly powerful and are deployed in mission-critical systems, obtaining formal uncertainty guarantees for these models is crucial. In this work, we investigated uncertainty quantification in LLMs in the context of multiple-choice questions using conformal prediction, a statistical framework for generating prediction sets with coverage guarantees. We found that naive softmax outputs of LLMs are relatively well calibrated on average but can suffer from under-confidence and over-confidence, and the extent of miscalibration varies across different subjects. To have a formal guarantee on the error rate of the model prediction, we implemented the conformal prediction procedure on the naive softmax output of the LLM. The conformal prediction framework produces valid prediction sets with error-rate guarantees when calibration and evaluation sets come from the same distribution.
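To make the calibration procedure summarized above concrete, the following is a minimal sketch of split conformal prediction over the averaged option probabilities described in Section 4.1. The nonconformity score (one minus the softmax probability of the true option) and the finite-sample quantile correction follow the standard recipe; the function names and exact array handling are our own illustrative assumptions rather than a transcription of the released code.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for 4-way MCQA.

    cal_probs:  (n_cal, 4) softmax scores on the calibration split
    cal_labels: (n_cal,)   indices of the correct options
    test_probs: (n_test, 4) softmax scores on the evaluation split
    Returns a boolean (n_test, 4) mask marking the options kept in each set.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true option.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level, method="higher")
    # Keep every option whose nonconformity score falls below the threshold.
    return (1.0 - test_probs) <= qhat

def coverage_and_set_size(pred_sets, test_labels):
    """Empirical coverage and average prediction-set size on the evaluation split."""
    covered = pred_sets[np.arange(len(test_labels)), test_labels]
    return covered.mean(), pred_sets.sum(axis=1).mean()
```

Stratifying top-1 accuracy by the set sizes returned here reproduces the kind of selective-classification analysis discussed above, and repeating the random calibration/evaluation split gives the averaged coverage reported in the experiments.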
We also explored the application of conformal prediction procedures to selective classification tasks. We found that conformal prediction can be used to discard unusual and low-quality predictions where the model is not confident, as indicated by the size of its prediction sets. To summarize, our main takeaways are:
* Developers of LLM systems should provide estimates of uncertainty to improve trustworthiness in their outputs to users.
* Uncertainty quantification can be useful for downstream applications such as filtering biased, unusual, or low-quality outputs.
* Conformal prediction is one approach to uncertainty quantification where a user-specified error rate can be statistically guaranteed when the calibration data is exchangeable with the test data.
* For our specific dataset (MMLU) and LLM (LLaMA-13B), we find that softmax outputs obtained as described in section 4.1 are reasonably calibrated on average. Nonetheless, models suffer from under-confidence and over-confidence, especially at the tail ends of the probability distribution (see Figure 9 in the Appendix).
Figure 8: **Difference in coverage when calibrated on different subjects.** Deviation from \(90\%\) coverage for \(\alpha=0.1\). The off-diagonals represent entries corresponding to the cases where exchangeability conditions are violated between calibration and evaluation data sets. The subjects are grouped into the three broad categories of computer science, medicine, and business.
Our work has some limitations. Our findings were limited to the MCQA task on the MMLU dataset using the LLaMA-13B model. Future work could extend our findings to multiple models and data sets. Further, it would be interesting to extend the conformal prediction framework to more general settings like free-form text generation to control for inaccurate, biased, and harmful outputs from LLMs. It would also be interesting to further explore exchangeability conditions in LLMs when calibration and evaluation data sets are from different distributions (i.e., not just from MMLU), which is a more realistic scenario. Despite these limitations, our work represents, to our knowledge, the first exploration of conformal prediction for LLMs in classification tasks. Our results contribute to the growing body of research on uncertainty estimation and generalization capabilities of LLMs and serve as a step forward in developing more robust and reliable uncertainty measures for increasingly capable large language models. Such measures are essential for ensuring LLMs' safe and responsible deployment in mission-critical applications. ## Acknowledgement We thank Prof. Yoon Kim, Abbas Zeitoun, and Anastasios Angelopoulos for helpful discussions and feedback on this work.
2307.16250
Classical vs. quantum corrections to jet broadening in a weakly coupled QGP
We compute double-logarithmically enhanced corrections to $\widehat{q}$ at relative order $O(g^2)$ in the setting of a weakly coupled quark-gluon plasma, observing how the thermal scale affects the region of phase space, which gives rise to these corrections. We furthermore clarify how the region of phase space from which these corrections are borne is situated with respect to that from which the classical corrections arise at relative order $O(g)$. This represents a significant step towards our eventual goal of understanding which class of corrections dominates, thereby pushing forward our quantitative grasp on the phenomenon of jet quenching in heavy-ion collisions.
Eamonn Weitz
2023-07-30T14:59:56Z
http://arxiv.org/abs/2307.16250v1
# Classical vs. quantum corrections to jet broadening in a weakly coupled QGP ###### Abstract: We compute double-logarithmically enhanced corrections to \(\hat{q}\) at relative order \(\mathcal{O}(g^{2})\) in the setting of a weakly coupled quark-gluon plasma, observing how the thermal scale affects the region of phase space, which gives rise to these corrections. We furthermore clarify how the region of phase from which these corrections are borne is situated with respect to that from which the classical corrections arise at relative order \(\mathcal{O}(g)\). This represents a significant step towards our eventual goal of understanding which class of corrections dominate, thereby pushing forward our quantitative grasp on the phenomenon of jet quenching in heavy-ion collisions. Introduction In the context of heavy-ion collisions, jets provide an ideal _hard probe_ of the quark-gluon plasma (QGP). Through interacting with the QGP, they receive momentum kicks in the directions transverse to their propagation - _transverse momentum broadening_. This broadening can be captured by the _transverse momentum broadening coefficient_, \(\hat{q}=\langle k_{\perp}^{2}\rangle/L\), which specifies the transverse momentum picked up per unit length, \(L\) by a hard parton propagating through the QGP. See the recent reviews on jets in heavy-ion collisions [1] or extractions of \(\hat{q}\) from data [2] for more information. For a weakly coupled QGP, \(\hat{q}\) can be expressed in terms of the transverse scattering kernel \[\hat{q}(\mu)=\int^{\mu}\frac{d^{2}k_{\perp}}{(2\pi)^{2}}k_{\perp}^{2}{\cal C}( k_{\perp}),\qquad{\cal C}(k_{\perp})\equiv(2\pi)^{2}\frac{d\Gamma}{dk_{\perp}^{ 2}}, \tag{1}\] where \(\frac{d\Gamma}{dk_{\perp}^{2}}\) is the rate for a hard parton with energy, \(E\gg T\), the temperature of the plasma, propagating along the \(z\) direction to pick up \(k_{\perp}\). The cutoff, \(\mu\) is installed so as not to include larger momentum scatterings, which include two hard partons in the final state. See App. A of [3] for our conventions. At leading order (LO) in \(g\), \(\hat{q}\) receives contributions from the hard (\(T\)) [4] and soft (\(gT\)) [5] scales, which give rise to the parametric form (up to logarithms) \(\hat{q}\sim g^{4}T^{3}\). The soft contribution is cut off in the IR by dynamical screening, implemented through Hard Thermal Loop Effective Theory (HTL) resummation [6]. NLO corrections also come from the soft scale [7]. Ultrasoft (\(g^{2}T\)) modes contribute at \({\cal O}(g^{2})\), for which the perturbative expansion breaks down. We refer to these NLO and NNLO contributions as _classical corrections_: they are distributed on the \(T/\omega\) IR tail of the Bose-Einstein distribution, \(n_{\rm B}(\omega)\) and are therefore sourced by the Matsubara zero-mode. Caron-Huot [7] demonstrated that one may compute the zero-mode contribution to \({\cal C}(k_{\perp})\) in Electrostatic QCD (EQCD) [8], meaning that one can bypass the somewhat cumbersome HTL computation. More importantly, as a theory of static modes, EQCD is amenable to study using three-dimensional lattice simulations, which can thus provide a _non-perturbative_ evaluation of \({\cal C}(k_{\perp})\), summing contributions from the soft and ultrasoft scales to all orders [9, 10]. Recently, the impact of these classical corrections on the in-medium splitting rate was assessed [11, 12] and found to be very relevant. 
A similar program is well underway for the non-perturbative determination of classical corrections to the _asymptotic mass_ [13, 14, 15]. These classical corrections are at odds with doubly-logarithmically enhanced radiative, _quantum corrections_, appearing at \({\cal O}(g^{2})\), first identified in [16, 17]. There, the leading enhancement is \(\sim\ln^{2}L_{\rm med}/\tau_{\rm min}\), with \(L_{\rm med}\) the length of the medium and \(\tau_{\rm min}\sim 1/T\) the minimum formation time of the associated radiation. This potentially large double-logarithm can be resummed [18], with the evolution equations solved numerically in [19, 20]. Interestingly, they also arise in the context of double gluon emission [18, 21], implying that these logarithms are subject to a certain universality. These corrections come from the _single scattering regime_ where bremsstrahlung is sourced by a single scattering with the medium. This is in contrast to the _multiple scattering regime_, where the bremsstrahlung's formation time, \(\tau\), is long enough so that it is coherently triggered by multiple collisions, accounted for through LPM resummation. In [3], we compute these doubly-logarithmically enhanced corrections in the context of a weakly coupled QGP, carefully analysing how the thermal scale deforms the region of phase space from which the double-logarithms emerge. ## 2 Double Logarithmic Corrections and the Thermal Scale The correction from [16] emerges in the standard dipole picture \[\delta\hat{q}_{[16,\ 17]}(\mu)=4\alpha_{s}C_{R}\hat{q}_{0}\int^{\mu}\frac{d^{2}k_{\perp}}{k_{\perp}^{2}}\int\frac{dk^{+}}{k^{+}}, \tag{2}\] where \(k^{+}\equiv k_{\perp}^{2}\tau\) is the energy of the bremsstrahlung and \(\hat{q}_{0}\) is the LO transverse momentum broadening coefficient, stripped of the Coulomb logarithm as is done in the _harmonic oscillator approximation_ (HOA). Here we can explicitly see that one of the logarithms comes from a soft, \(dk^{+}/k^{+}\) divergence, with the other coming from a collinear, \(d^{2}k_{\perp}/k_{\perp}^{2}\) divergence. For what follows, it turns out to be more convenient to work with \(\tau\) and \(k^{+}\): \[\delta\hat{q}_{[16,\ 17]}(\mu)=\frac{\alpha_{s}C_{R}}{\pi}\hat{q}_{0}\int^{\mu^{2}/\hat{q}_{0}}_{\tau_{\min}}\frac{d\tau}{\tau}\int^{\mu^{2}\tau}_{\hat{q}_{0}\tau^{2}}\frac{dk^{+}}{k^{+}}=\frac{\alpha_{s}C_{R}}{2\pi}\hat{q}_{0}\ln^{2}\frac{\mu^{2}}{\hat{q}_{0}\tau_{\min}}. \tag{3}\] The limits above come from integrating over the triangle presented in Fig. 1. Boundary (\(a\)) arises from the need to cut off non-diffusive momentum exchanges above the scale \(\mu\). The line (\(b\)) is then defined by \(k_{\perp}^{2}\equiv\hat{q}_{0}\tau\), marking the boundary with the _deep LPM regime_ in which multiple scatterings occur. Above boundary (\(b\)) there is no longer a double-logarithmic enhancement as the \(k^{+}\) integrand changes as \(1/k^{+}\to 1/\sqrt{k^{+}}\). Finally, boundary (\(c\)) is an artefact of the _instantaneous approximation_: scatterings between the jet and medium are assumed to take place instantaneously compared to the formation time associated with the radiation.
Figure 1: Depiction of bounds from the integration in Eq. (3). The (\(b\)) boundary is defined by \(\tau=\sqrt{k^{+}/\hat{q}_{0}}\) and the (\(a\)) boundary by \(\tau=k^{+}/\mu^{2}\). Figure taken from [3].
The result from [16] is recovered upon identifying \(\mu\) with the _saturation scale_, \(Q_{s}\equiv\hat{q}L_{\rm med}\). In a weakly coupled QGP, as soon as the energy overlaps with the temperature scale, one needs to account for more medium effects than those captured by these instantaneous, spacelike interactions.
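As a quick check of Eq. (3), inserted here for convenience, the nested integral over the triangle of Fig. 1 indeed yields the quoted double logarithm: carrying out the inner \(k^{+}\) integral and substituting \(u=\ln[\mu^{2}/(\hat{q}_{0}\tau)]\), so that the upper limit \(\tau=\mu^{2}/\hat{q}_{0}\) maps to \(u=0\), one finds \[\int_{\tau_{\rm min}}^{\mu^{2}/\hat{q}_{0}}\frac{d\tau}{\tau}\int_{\hat{q}_{0}\tau^{2}}^{\mu^{2}\tau}\frac{dk^{+}}{k^{+}}=\int_{\tau_{\rm min}}^{\mu^{2}/\hat{q}_{0}}\frac{d\tau}{\tau}\,\ln\frac{\mu^{2}}{\hat{q}_{0}\tau}=\int_{0}^{\ln\frac{\mu^{2}}{\hat{q}_{0}\tau_{\rm min}}}u\,du=\frac{1}{2}\ln^{2}\frac{\mu^{2}}{\hat{q}_{0}\tau_{\rm min}},\] which, multiplied by \(\alpha_{s}C_{R}\hat{q}_{0}/\pi\), reproduces the right-hand side of Eq. (3).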
Specifically, by taking \(T>\mu\gg\sqrt{g}T^{\,1}\) and replacing \(1\to(1+2n_{\rm B}(k^{+}))\) in the \(k^{+}\) integrand of Eq. (3), we find \[\delta\hat{q}(\mu)^{\rm few}=\frac{\alpha_{s}C_{R}}{2\pi}\hat{q}_{0}\bigg\{\ln^{2}\frac{\mu^{2}}{\hat{q}_{0}\tau_{\rm int}}-\frac{1}{2}\ln^{2}\frac{\omega_{\rm T}}{\hat{q}_{0}\tau_{\rm int}^{2}}\bigg\}\quad\text{with }\omega_{\rm T}=\frac{2\pi T}{e^{\gamma_{E}}}\quad\text{for }\ \frac{\omega_{\rm T}}{\mu^{2}}\ll\tau_{\rm int}\ll\sqrt{\frac{\omega_{\rm T}}{\hat{q}_{0}}}. \tag{4}\] In doing so, we account for the Bose-Einstein stimulated emission of the radiated gluon as well as the absorption of a gluon from the medium. We will comment shortly on the purpose of \(\tau_{\rm int}\). But can these additional effects be neglected in a way that is consistent with single scattering, for instance, by demanding that \(\hat{q}_{0}\tau_{\rm min}^{2}\gg T\) in Eq. (3)\({}^{2}\)? It turns out that the answer is no [3]: such a choice of \(\tau_{\rm min}\) would necessarily allow for formation times associated with the deep LPM regime, where \(\tau_{\rm LPM}\gtrsim 1/g^{2}T\). Note that the correction in Eq. (4) corresponds to integrating over the 1 and 2 regions in Fig. 2. Footnote 2: The \(\mu>T\) case is studied in [3]. The requirement \(1/g^{2}T\gg\tau_{\rm int}\gg 1/gT\) means that processes where a _few scatterings_ occur are included in Eq. (4). Indeed, \(\tau_{\rm int}\) defines a border with what we have identified as a _strict single scattering_ regime, where the formation time is _a priori_ consistent with single scattering, i.e. \(\tau\ll 1/g^{2}T\). This region is characterised by so-called _semi-collinear_ processes [22], where timelike as well as spacelike exchanges are allowed to occur. The leading contribution from this region is given by integrating over the 3 and 4 regions in Fig. 2 and yields \[\delta\hat{q}_{\rm semi}(\mu)=\frac{\alpha_{s}C_{R}}{2\pi}\hat{q}_{0}\ln^{2}\frac{\mu^{2}\tau_{\rm int}}{\omega_{\rm T}}, \tag{5}\] where we have taken the HOA. Adding Eqs. (4), (5), we then find \[\delta\hat{q}(\mu_{\perp})_{\rm dlog}=\frac{\alpha_{s}C_{R}}{4\pi}\hat{q}_{0}\ln^{2}\frac{\mu^{4}}{\hat{q}_{0}\omega_{\rm T}}. \tag{6}\]
Figure 2: Deformation of the double-logarithmic phase space with the inclusion of thermal effects. Regions 1 and 2 form the "few scattering" regime, over which we integrate to get Eq. (4), whereas we integrate over regions 3 and 4, the "strict single scattering" regime, to get Eq. (5). Region 5 then gives rise to the \(\mathcal{O}(g)\) corrections to \(\hat{q}\), calculated in [7]. Figure taken from [3].
As well as the disappearance of \(\tau_{\rm int}\), we note the absence of an IR cutoff, \(\tau_{\rm min}\); looking to Fig. 2, the double-logarithm is instead cut off by the scale \(\omega_{\rm T}\). Thus, the thermal scale plays an extremely important role in this context. ## 3 Relation to Classical Corrections As well as double-logarithmic corrections at \({\cal O}(g^{2})\), we also find _power law corrections_ when integrating over regions 3 and 4: \[\delta q_{\rm PL}=\frac{\alpha_{s}C_{R}}{2\pi}\hat{q}_{0}\frac{4T\ln\left(\frac{\mu^{2}\tau_{\rm int}}{k_{\rm IR}^{+}e}\right)}{k_{\rm IR}^{+}}, \tag{7}\] where \(k_{\rm IR}^{+}\) is an IR cutoff on the energy.
Power law corrections of this kind are usually discarded as they are unphysical - they always cancel against other power law corrections coming from adjacent regions of phase space. Here, we use this fact to our advantage; in the calculation of the \({\cal O}(g)\) corrections, power law corrections should appear, with \(k_{\rm IR}^{+}\) instead acting as a UV cutoff there. In more detail, one can use causality properties of \({\cal C}(k_{\perp})\), also revealed in [7], to carry out the \(k^{+}\) integral by analytically continuing into the \(k^{+}\) complex plane. \(k_{\rm IR}^{+}\) then appears as the radius of the arc of the deformed contour, with the arc lying between the zeroth and first Matsubara modes. There is no dependence on \(k_{\rm IR}^{+}\) in [7] as the \(1/k_{\rm IR}^{+}\) terms go to zero and can thus be safely neglected. Nevertheless, we have indeed computed these arc contributions explicitly and shown that they cancel exactly against the result from Eq. (7), further confirming how the region from which the classical corrections emerge is connected to that associated with the logarithmically-enhanced quantum corrections. ## 4 Conclusion and Outlook We have studied how, in the setting of a weakly coupled QGP, the thermal scale affects the double-logarithmic phase space, originally identified in [16, 17]. In more detail, we showed how the scale, \(\omega_{\rm T}\) cuts off this region of phase space and furthermore, how the region, which gives rise to the classical corrections, computed in [7] fits in comparison. In obtaining Eq. (6), we have taken the HOA, neglecting a neighbouring region phase space, which permits both single and multiple scattering processes. To properly deal with such a region, we would need to solve an LPM resummation equation, derived in [3] (see also [23]), differential in the transverse momentum picked up by the parton. We foresee that the use of the _improved opacity expansion_[24] could allow us to arrive at an approximate solution of this equation but we leave such an endeavour to future work. ## Acknowledgements We thank Jacopo Ghiglieri for collaboration on the original work [3].
2310.02832
Out-of-Distribution Detection by Leveraging Between-Layer Transformation Smoothness
Effective out-of-distribution (OOD) detection is crucial for reliable machine learning models, yet most current methods are limited in practical use due to requirements like access to training data or intervention in training. We present a novel method for detecting OOD data in Transformers based on transformation smoothness between intermediate layers of a network (BLOOD), which is applicable to pre-trained models without access to training data. BLOOD utilizes the tendency of between-layer representation transformations of in-distribution (ID) data to be smoother than the corresponding transformations of OOD data, a property that we also demonstrate empirically. We evaluate BLOOD on several text classification tasks with Transformer networks and demonstrate that it outperforms methods with comparable resource requirements. Our analysis also suggests that when learning simpler tasks, OOD data transformations maintain their original sharpness, whereas sharpness increases with more complex tasks.
Fran Jelenić, Josip Jukić, Martin Tutek, Mate Puljiz, Jan Šnajder
2023-10-04T13:59:45Z
http://arxiv.org/abs/2310.02832v2
# Out-of-Distribution Detection by Leveraging Between-Layer Transformation Smoothness ###### Abstract Effective OOD detection is crucial for reliable machine learning models, yet most current methods are limited in practical use due to requirements like access to training data or intervention in training. We present a novel method for detecting OOD data in deep neural networks based on transformation smoothness between intermediate layers of a network (BLOOD), which is applicable to pre-trained models without access to training data. BLOOD utilizes the tendency of between-layer representation transformations of in-distribution (ID) data to be smoother than the corresponding transformations of OOD data, a property that we also demonstrate empirically for Transformer networks. We evaluate BLOOD on several text classification tasks with Transformer networks and demonstrate that it outperforms methods with comparable resource requirements. Our analysis also suggests that when learning simpler tasks, OOD data transformations maintain their original sharpness, whereas sharpness increases with more complex tasks. ## 1 Introduction Machine learning (ML) models' success rests on the assumption that the model will be evaluated on data that comes from the same distribution as the data on which it was trained, the _in-distribution_ (ID) data. However, models deployed in noisy and imperfect real-world scenarios often face data that comes from a different distribution, the _out-of-distribution_ (OOD) data, which can hinder the models' performance. The task of discerning between ID and OOD data is commonly referred to as _OOD detection_(Yang et al., 2021). Owing to their consistent state-of-the-art performance across diverse ML tasks (Abiodu et al., 2018), Deep Neural Networks (DNNs) have garnered significant attention in OOD detection research. While popular baselines make use of the model's posterior class probabilities (Hendrycks and Gimpel, 2017), the issue of overconfidence in DNNs (Guo et al., 2017) frequently erodes the credibility of these probabilities. An alternative is offered by the group of methods that leverage the fundamental concept in deep neural networks, namely, representation learning. Because a DNN encodes similar instances closely in its representation space, an OOD instance can be identified based on the distance between its representation and the representations of other instances in the training set (Lee et al., 2018). The downside of these methods, however, is that they require the presence of training data during prediction or involve intervention in the model's training procedure. This is a significant practical limitation, as using third-party models pre-trained on non-public data is increasingly the standard practice. A case in point is the Hugging Face Transformers library (Wolf et al., 2020), which provides readily available community models but often lacks comprehensive training data or detailed training procedures. An obvious way to close the resource gap is to rely on OOD detection methods with minimal pre-requisites. However, current OOD detection research has largely ignored the differing prerequisites among OOD detection methods, often leading to comparisons that treat methods with varying prerequisites equally, disregarding the question of practical applicability. 
From a practical perspective, it makes sense to group OOD detection methods into the following three categories:1 (1) _Blackbox_, for methods capable of operating on black-box models (i.e., having access only to input-output mappings) and thus suitable for models integrated into a product; (2) _White-box_, for methods that require access to the model's weights and have knowledge about its architecture, and are thus readily applicable to third-party pre-trained models; and (3) _Open-box_, for methods with unrestricted access to model and training resources, allowing for interventions in the training process and/or access to training data or separate OOD train or validation sets. Footnote 1: Gomes et al. (2022) employ similar terminology to refer to which parts of the model one can access (e.g., its outputs, inputs, or intermediate representations). In contrast, we use these terms to characterize the resources an OOD detection method requires. In this paper, we introduce a novel OOD detection method that leverages the inherent differences in how DNNs process ID and OOD data. The method is white-box and has the potential for broad practical applicability. More concretely, our **B**etween **L**ayer **O**ut-**O**f-**D**istribution (BLOOD) Detection method estimates the smoothness of between-layer transformations of intermediate representation, building on the insight that these transformations tend to be smoother for ID data than for OOD data. We evaluate BLOOD on Transformer-based (Vaswani et al., 2017) pre-trained large language models applied to text classification, the most prevalent task in natural language processing (NLP), and find that it outperforms other state-of-the-art OOD detection white-box methods and even some open-box methods. We further analyze BLOOD to probe into the underlying causes of the differences between how ID and OOD intermediate representations are transformed and evaluate BLOOD on two other types of distribution shifts - semantic and background shift. We provide code and data used in our experiments for result reproducibility.2 Footnote 2: [https://github.com/fjelenic/between-layer-ood](https://github.com/fjelenic/between-layer-ood) The contributions of this paper are as follows: **(1)** We propose BLOOD, a novel method for OOD detection applicable even in cases when only the model's weights are available, e.g., third-party pre-trained models which are becoming _de facto_ standard in many fields. BLOOD uses the information about the smoothness of the between-layer transformations of intermediate representations. We quantify this smoothness using the square of the Frobenius norm of the Jacobian matrix, for which we provide an unbiased estimator to alleviate computational limitations. **(2)** Our experiments on Transformer-based pre-trained large language models for the task of text classification show that BLOOD outperforms other state-of-the-art white-box OOD detection methods. Additionally, our results indicate that the performance advantages are more prominent when applied to complex datasets as opposed to simpler ones. We also show that BLOOD is more effective in detecting background shift than semantic shift. **(3)** Following our main insight that between-layer representation transformations of ID data tend to be smoother from that of OOD data, we analyze the source of this difference. 
We find that the learning algorithm is more focused on changing the ID region of intermediate representation space, smoothing the between-layer transformations of ID data in the process. At the same time, the OOD region of the intermediate representation space is largely left unchanged, except in some scenarios, e.g., for more complex tasks, when the OOD region of the space is also changed and sharpened as a consequence. ## 2 Related Work OOD detection methods are typically categorized based on their underlying mechanism, for example, into output-based, gradient-based, distance-based, density-based, and Bayesian methods (Yang et al., 2021). Another, and arguably more practically relevant, categorization would factor in the necessary prerequisites for these methods, distinguishing between black-box, white-box, and open-box methods as introduced earlier. In the following, we provide a brief overview of the most prominent OOD detection methods through this lens. **Black-box.** Methods with minimal prerequisites typically rely on posterior class probabilities, operating under the assumption that when a model exhibits uncertainty about an instance, the instance is more likely to be OOD. A commonly used baseline quantifies the uncertainty of an instance as the negative of the model's maximum softmax probability for that instance (Lee et al., 2018). A straightforward modification employs the entropy of softmax probabilities rather than the maximum value. Liu et al. (2020) proposed using energy scores instead of softmax scores to overcome the issue of DNN overconfidence. **White-box.** Gal & Ghahramani (2016) proposed using Monte-Carlo dropout to more reliably estimate the model's uncertainty, showing that dropout (Srivastava et al., 2014) with DNNs approximates Bayesian inference. Although Monte-Carlo dropout outperforms vanilla posterior probabilities in OOD detection (Ovadia et al., 2019), it is computationally expensive as it requires multiple forward passes. Another way of leveraging the access to model's architecture is to use gradients to implicitly measure the uncertainty of the model's predictions (Oberdiek et al., 2018; Huang et al., 2021). Gradient methods primarily employ the gradient norm to gauge the difference between the model's posterior distribution and the ideal distribution. **Open-box.** Because DNNs posterior probabilities tend to exhibit overconfidence, Guo et al. (2017) suggested using temperature scaling to calibrate the model's posterior probabilities, which entails the usage of a separate validation set. To get higher quality predictive uncertainty estimates, Lakshminarayanan et al. (2017) train an ensemble of differently initialized models and combine their predictions. Although ensembles are robust to different distributional shifts (Ovadia et al., 2019), they impose a significant computational and memory overhead because they require training and keeping in memory of multiple models. A popular approach to OOD detection for DNNs revolves around the utilization of information related to distances in the representation space (Lee et al., 2018; Van Amersfoort et al., 2020; Liu et al., 2020; Hsu et al., 2020; Kuan & Mueller, 2022; Sun et al., 2022). However, these approaches require access to the training data or changes in the standard training procedure. Yet another set of methods relies on exposing the model to OOD samples during training to improve the performance on OOD detection task (Hendrycks et al., 2019; Thulasidasan et al., 2021; Roy et al., 2022). 
Still, a major practical limitation of these methods is the necessity for OOD data, whose entire distribution is typically unknown in real-world scenarios. Several post-hoc methods also need OOD data, but for validation sets to optimize their method's hyperparameters (Liang et al., 2018; Sun et al., 2021; Sun & Li, 2022). ## 3 Preliminaries ### Problem statement Let instance \(\mathbf{x}\in\mathbb{R}^{d}\) be a \(d\)-dimensional feature vector and \(y\in\{0,\ldots,C-1\}\) be its corresponding class in a \(C\)-way classification task. We train a classifier on the dataset \(\mathcal{D}=\{(\mathbf{x}_{n},y_{n})\}_{n=1}^{n}\) consisting of \(N\) instances i.i.d. sampled from the distribution \(p(\mathbf{x},y)\). The objective of the learning algorithm is to model the conditional distribution \(p(y|\mathbf{x})\) based on \(\mathcal{D}\) by estimating the parameters \(\mathbf{\theta}\) of the distribution \(p_{\mathbf{\theta}}(y|\mathbf{x})\) that is as close as possible to the true conditional distribution. The goal of an OOD detection method is to determine the uncertainty score \(\mathcal{U}_{\mathbf{x}}\in\mathbb{R}\) of an instance \(\mathbf{x}\), such that there exist \(\epsilon\in\mathbb{R}\) for which both \(\mathbb{P}_{\mathbf{x}\sim p(\mathbf{x},y)}(\mathcal{U}_{\mathbf{x}}<\epsilon)\) and \(\mathbb{P}_{\mathbf{x}\sim q(\mathbf{x},y)}(\mathcal{U}_{\mathbf{x}}>\epsilon)\) are close to unity whenever \(q(\mathbf{x},y)\) is a distribution sufficiently different from \(p(\mathbf{x},y)\). In practice, there can never exist a scoring function that perfectly discriminates between ID examples (generated by \(p(\mathbf{x},y)\)) and OOD examples (generated by \(q(\mathbf{x},y)\)). Nevertheless, even reasonable attempts can prove valuable in real-world scenarios. ### Intuition DNNs work by mapping the input features onto a high-dimensional representation space through \(L\) layers, creating a representation of the data suitable for the task at hand. The mapping is realized as a composition of several non-linear functions, where each function creates an intermediate representation of the input. Representations of DNNs with useful inductive biases for the task at hand tend to gradually progress from input features towards more abstract representation levels through layers, i.e., lower layers model lower-level features while upper layers model higher-level features. For example, when convolutional neural networks (CNNs) process images, they can first represent the simplest features in an image, e.g., lines and edges, which are then used to create representations of textures and simple shapes that are then combined to represent objects (LeCun et al., 1998). Recently, Vision Transformers (ViT) (Dosovitskiy et al., 2021), which are garnering popularity in computer vision, were shown to process data in a similar fashion (Ghiasi et al., 2022). Likewise, Peters et al. (2018); Tenney et al. (2019); Jawahar et al. (2019) showed that large transformer-based language models create text representations that progress gradually from representations that encode morphological and syntactic information at the lower DNN layers to representations that encode semantic meaning in the upper layers. We hypothesize that during the model's training, the model learns smooth transformations between DNN layers corresponding to natural and meaningful progressions between abstractions for ID data. We further hypothesize that these progressions will not match OOD data, hence the transformations will not be smooth for OOD data. 
Thus, if we could measure the smoothness of transformations in representations between layers, we could in principle differentiate between ID and OOD data. We also speculate that the difference in smoothness of transformations between ID and OOD data should be emphasized in the upper layers of DNNs. Lower layers typically represent low-level features that are more universal, whereas upper layers tend to cluster instances around task-specific features that are not shared between ID and OOD data, potentially creating a mismatch in levels of abstraction. ### Our method Assume an \(L\)-layered deep neural network \(\mathbf{f}:\mathbb{R}^{d_{0}}\rightarrow[0,1]^{C}\) was trained to predict the probabilities of \(C\) classes for a \(d_{0}\)-dimensional input \(\mathbf{x}\). Let \(\mathbf{f}\) be a composition of \(L\) intermediate functions, \(\mathbf{f}_{L}\circ\cdots\circ\mathbf{f}_{1}\circ\cdots\circ\mathbf{f}_{1}\), where \(\mathbf{f}_{l}:\mathbb{R}^{d_{l-1}}\rightarrow\mathbb{R}^{d_{l}}\), \(l=1,\ldots,L-1\), correspond to intermediate network layers, while \(\mathbf{f}_{L}\) corresponds to the last layer, mapping to a vector of logits to which softmax function is applied to obtain the conditional class probabilities. We denote the intermediate representation of \(\mathbf{x}\) in layer \(l\) as \(\mathbf{h}_{l}\), defined as \(\mathbf{h}_{l}=(\mathbf{f}_{l}\circ\cdots\circ\mathbf{f}_{1})(\mathbf{x})\). We now need to quantify how smoothly an intermediate representation is transformed from layer \(l\) to layer \(l+1\). To this end, we first need to define what we consider a smooth transformation. We say a representation \(\mathbf{h}_{l}\) is transformed smoothly if there is not a large difference in how it is mapped from layer \(l\) onto layer \(l+1\) compared to how its infinitesimally close neighborhood is mapped. Let \(\phi_{l}(\mathbf{x})\) be the degree of smoothness of the transformation between representation \(\mathbf{h}_{l}\) and representation \(\mathbf{h}_{l+1}\) for input \(\mathbf{x}\). To calculate \(\phi_{l}(\mathbf{x})\), we compute the Jacobian matrix \(\frac{\partial\mathbf{f}_{l+1}}{\partial\mathbf{h}_{l}}=\mathbf{J}_{l}:\mathbb{R}^{d_{l}} \rightarrow\mathbb{R}^{d_{l+1}\times d_{l}}\), and take the square of its Frobenius norm: \[\phi_{l}(\mathbf{x})=\|\mathbf{J}_{l}(\mathbf{h}_{l})\|_{F}^{2}=\sum_{i=1}^{d_{l+1}}\sum_ {j=1}^{d_{l}}\left(\frac{\partial(f_{l+1})_{i}}{\partial(h_{l})_{j}}\right)^ {2} \tag{1}\] In the most popular ML libraries, gradients of a function are computed through automatic differentiation (AD), which comprises both forward mode and backward mode. Forward mode AD computes the values of the function and a Jacobian-vector product. Computing the full Jacobian matrix \(\mathbf{J}(\mathbf{x})\) with AD is computationally expensive as it requires \(d\) forward evaluations of \(\mathbf{J}(\mathbf{x})\mathbf{e}^{(i)},i=1,\ldots,d\), where \(\mathbf{e}^{(i)}\) are standard basis vectors, computing the Jacobian matrix one column at a time. In the case of modern DNNs with high-dimensional hidden layers, computing full Jacobians could render our method unfeasible. To reduce computational complexity, we derive an unbiased estimator of \(\phi_{l}(\mathbf{x})\) by leveraging Jacobian-vector product computation through forward mode AD. 
**Corollary 1**.: _Let \(\mathbf{J}(\mathbf{x})\in\mathbb{R}^{m\times n}\) be a Jacobian matrix, and let \(\mathbf{v}\in\mathbb{R}^{n}\) and \(\mathbf{w}\in\mathbb{R}^{m}\) be random vectors whose elements are independent random variables with zero mean and unit variance. Then, \(\mathbb{E}[(\mathbf{w}^{\intercal}\mathbf{J}(\mathbf{x})\mathbf{v})^{2}]=\|\mathbf{J}(\mathbf{x})\|_{F}^{2}\)._ We prove Corollary 1 in Appendix B by providing a proof of the more general Theorem 1. As for the intuition behind the corollary, the Jacobian-vector product \(\mathbf{J}(\mathbf{x})\mathbf{v}\) gives us an appropriately scaled gradient with respect to the change of the input in the direction of vector \(\mathbf{v}\). Further multiplying the Jacobian-vector product \(\mathbf{J}(\mathbf{x})\mathbf{v}\) by the random vector \(\mathbf{w}\) from the left projects the calculated directional gradient \(\mathbf{J}(\mathbf{x})\mathbf{v}\) onto the vector \(\mathbf{w}\), i.e., it quantifies the extent to which the output changes in the direction of \(\mathbf{w}\) when the input changes in the direction of \(\mathbf{v}\). Squaring the vector-Jacobian-vector product then gives an estimate of the sum of squared entries of the Jacobian, i.e., the square of its Frobenius norm. Squaring also handles negative values (in cases when the angle between the directional gradient \(\mathbf{J}(\mathbf{x})\mathbf{v}\) and the vector \(\mathbf{w}\) is obtuse), since we are interested in the overall smoothness as defined by the Frobenius norm rather than the direction of the specific gradient.3 To calculate the unbiased estimate \(\hat{\phi}_{l}(\mathbf{x})\) of \(\phi_{l}(\mathbf{x})\), we use a sample of \(M\) pairs of random vectors \(\mathbf{v}_{l}\sim\mathcal{N}(\mathbf{0}_{n},\mathbf{I}_{n})\) and \(\mathbf{w}_{l}\sim\mathcal{N}(\mathbf{0}_{m},\mathbf{I}_{m})\), and define \(\hat{\phi}_{l}(\mathbf{x})\) as: \[\hat{\phi}_{l}(\mathbf{x})=\frac{1}{M}\sum_{i=1}^{M}\left(\mathbf{w}_{l,i}^{\intercal}\mathbf{J}_{l}(\mathbf{h}_{l})\mathbf{v}_{l,i}\right)^{2} \tag{2}\] BLOOD uses \(\hat{\phi}_{l}(\mathbf{x})\) as the uncertainty score of an instance \(\mathbf{x}\). In our experiments, we consider two variations of BLOOD: (1) the average of the scores over all layers, \(\text{BLOOD}_{\text{mean}}=\frac{1}{L-1}\sum_{l=1}^{L-1}\hat{\phi}_{l}(\mathbf{x})\), and (2) the score of the last between-layer transformation, \(\text{BLOOD}_{L-1}=\hat{\phi}_{L-1}(\mathbf{x})\). We use the two variants to assess the impact of layer choice, as we hypothesize that BLOOD will perform better on upper layers, given that lower layers capture low-level, general features. ## 4 Experiments ### Experimental setup We evaluate BLOOD on several text classification datasets using two transformer-based (Vaswani et al., 2017) large pre-trained language models, RoBERTa (Liu et al., 2019) and ELECTRA (Clark et al., 2020), known for their state-of-the-art performance across a wide range of NLP tasks. We calculate the BLOOD score using samples of size \(M=50\) to estimate \(\hat{\phi}_{l}(\mathbf{x})\) of the [CLS] token's representations between layers. We use eight text classification datasets for ID data: SST-2 (**sst**; Socher et al., 2013), Subjectivity (**subj**; Pang and Lee, 2004), AG-News (**agn**; Zhang et al., 2015), TREC (**trec**; Li and Roth, 2002), BigPatent (**bp**; Sharma et al., 2019), AmazonReviews (**ar**; McAuley et al., 2015), MovieGenre (**mg**; Maas et al., 2011), and 20NewsGroups (**ng**; Lang, 1995).
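Concretely, a minimal sketch of how the per-layer score \(\hat{\phi}_{l}\) of Eq. (2) can be estimated with forward-mode Jacobian-vector products is given below. The helper `layer_fn` and the use of `torch.func.jvp` are illustrative assumptions on our part; the released implementation may organize the computation differently.

```python
import torch
from torch.func import jvp

def blood_score(layer_fn, h_l, num_samples=50):
    """Unbiased estimate of ||J_l(h_l)||_F^2 for the map h_l -> h_{l+1} (Eq. 2).

    layer_fn: callable mapping the layer-l representation to the layer-(l+1)
              representation (e.g. the [CLS] vector through one Transformer block).
    h_l:      1-D tensor holding the intermediate representation of one instance.
    """
    estimates = []
    for _ in range(num_samples):
        v = torch.randn_like(h_l)                    # input-side random direction
        _, jvp_out = jvp(layer_fn, (h_l,), (v,))     # J_l(h_l) @ v via forward-mode AD
        w = torch.randn_like(jvp_out)                # output-side random direction
        estimates.append((w @ jvp_out) ** 2)         # (w^T J_l v)^2
    return torch.stack(estimates).mean()

# BLOOD_mean averages blood_score over layers 1, ..., L-1;
# BLOOD_{L-1} uses only the final between-layer transformation.
```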
We use One Billion Word Benchmark (**obw**) (Chelba et al., 2014) for OOD data, similarly to Ovadia et al. (2019), because of the diversity of the corpus. We subsample OOD datasets to be of the same size as their ID test set counterparts. Appendix C provides more details about the models, datasets, and training procedures. We compare BLOOD to several state-of-the-art black-box and white-box OOD detection methods: (1) **Maximum softmax probability (MSP)** - the negative posterior class probability of the most probable class, \(-\max_{c}p(y=c|\mathbf{x})\), often considered a baseline OOD detection method(Hendrycks and Gimpel, 2017); (2) **Entropy (ENT)** - the entropy of the posterior class distribution, \(\mathbb{H}[Y|\mathbf{x},\mathbf{w}]\); (3) **Energy (EGY)** - a density-based method that overcomes the overconfidence issue by calculating energy scores from logits \(-\log\sum_{i=0}^{C-1}e^{f_{L}(\mathbf{x})_{i}}\) instead of softmax scores (Liu et al., 2020); (4) **Monte-Carlo dropout (MC)** - the entropy of predictive distribution obtained using Monte-Carlo dropout (Gal and Ghahramani, 2016). We use \(M=30\) stochastic forward passes to estimate uncertainty; (5) **Gradient norm (GRAD)** - the L2-norm of the penultimate layer's gradient of the loss function with most likely class considered as a true class (Oberdiek et al., 2018). Additionally, we compare BLOOD to three standard open-box OOD detection methods. Given that these methods entail considerably more prerequisites compared to BLOOD and other white/black-box methods, this comparison is intended solely as a reference point: (1) **Ensemble (ENSM)** - an ensemble of \(M=5\) models of the same type, e.g., an ensemble of five RoBERTa or ensemble of five ELECTRA models, (Lakshminarayanan et al., 2017); (2) **Temperature scaling (TEMP)** - introduces a temperature parameter \(T\) into the softmax function such that it minimizes the negative log-likelihood on the ID validation set (Guo et al., 2017); (3) **Mahalanobis distance (MD)** - Mahalanobis distance of a query instance in the representation space with respect to the closest class-conditional Gaussian distribution (Lee et al., 2018). ### OOD detection performance As the performance measure for OOD detection, we follow the standard practice and use the area under the receiver operating characteristic curve (AUROC) metric (in Appendix F, we report the results using two other commonly used metrics, AUPR-IN and FPR@95TPR; these gave qualitatively identical results as AUROC). The OOD detection task is essentially a binary classification task, with AUC corresponding to the probability that a randomly chosen OOD instance will have a higher uncertainty score than a randomly chosen ID instance (Fawcett, 2006). The AUROC for random value assignment is 50%, while a perfect method achieves 100%. We run each experiment five times with different random seeds and report the mean AUROC. OOD detection performance is shown in Table 1. The first observation is that BLOOD outperforms other white/black-box methods. Secondly, BLOOD\({}_{L-1}\) outperforms other white/black-box methods more often than BLOOD\({}_{\text{mean}}\), thus in the rest of the experiments we focus on BLOOD\({}_{L-1}\). Lastly, while BLOOD demonstrates superior performance on most datasets, the improvements are more consistently observed when applied with ELECTRA compared to RoBERTa. 
Interestingly, the datasets where BLOOD with RoBERTa outperforms other white/black-box methods (sst, bp, ar, mg, and ng) appear to be more complex, as indicated by the minimum description length (Perez et al., 2021) (cf. Appendix C). We offer explanations for these observations in sections 4.3 and 4.4. Compared to open-box methods, BLOOD is outperformed by MD in all setups except when using ELECTRA on the trec dataset. However, BLOOD remains competitive with ENSM and TEMP. Unlike the findings of Ovadia et al. (2019), the dominance of ENSM is reduced. This is likely because we employ a pre-trained language model ensemble, while they use entirely randomly initialized models. In our ensemble, the model parameters exhibit minimal variation since all models are pre-trained. Variability between models arises solely from the random initialization of the classification head and the stochastic nature of the training process. The high performance of MD on transformer-based language models aligns with prior research (Podolskiy et al., 2021). ### Source of the differences in transformations of ID and OOD data Understanding which layers of the model are impacted by the model's training could shed some light on the behavior of our method. To find out how much each layer has learned, we examine the changes in intermediate representations of instances after training. For simplicity, we use the Euclidean distances \(\|\mathbf{r}_{\text{init}}-\mathbf{r}_{\text{FT}}\|_{2}\) between representations of the initialized model (\(\mathbf{r}_{\text{init}}\)) and the representations after fine-tuning the model (\(\mathbf{r}_{\text{FT}}\)). We calculate this distance for all instances in the training set at each of the model's layers and then compute the average for each layer. Figure 1 illustrates the extent of representation changes in training data alongside BLOOD scores before and after fine-tuning at each intermediate layer. The representations of the upper layers change significantly more than the representations of the lower layers. This is expected since transformer-based language models learn morphological- and syntactic-based features in the lower layers, which are similar between tasks and can be mostly reused from the pre-training. In contrast, higher layers learn more task-specific features such as context and coreference (Peters et al., 2018; Tenney et al., 2019; Jawahar et al., 2019). Our hypothesis posits that the smooth transformations of ID data are a by-product of the learning algorithm learning the natural progression between abstractions.
Consequently, layers more impacted by training will exhibit smoother transformations, \begin{table} \begin{tabular}{c c c c c c c c c|c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Dataset} & \multicolumn{6}{c}{White-box/Black-box} & \multicolumn{6}{c}{Open-box} \\ & & BLOOD\({}_{L-1}\) & MSP & ENT & EGY & MC & GRAD & ENSM & TEMP & MD \\ \hline \multirow{8}{*}{RoBERT} & sst & 50.56 & **72.83** & 71.69 & 71.69 & 71.61 & 68.28 & 71.76 & 69.03 & 71.64 & **85.36** \\ & sstuj & 52.02 & 74.66 & 74.55 & 74.55 & **75.79** & 74.21 & 74.93 & **76.68** & 74.41 & **93.47** \\ & rnn & 77.46 & 61.95 & 73.57 & 73.80 & 76.36 & **77.55** & 75.88 & **80.35** & 75.38 & **82.63** \\ & trec & 69.63 & 95.30 & 96.20 & **96.40** & 96.28 & 95.68 & 96.14 & **98.77** & **96.74** & **96.74** \\ & spr & 87.20 & **98.53** & 70.15 & 72.82 & 85.84 & 74.29 & 73.11 & 79.39 & 86.01 & **97.35** \\ & ar & 91.41 & **93.20** & 89.06 & 89.96 & 92.39 & 90.99 & 89.45 & 29.42 & 92.95 & **98.35** \\ & mg & **88.15** & 85.23 & 75.02 & 76.60 & 86.45 & 79.98 & 74.28 & 76.98 & 84.30 & **95.12** \\ & ng & **83.53** & 72.02 & 77.49 & 78.76 & 82.65 & 79.32 & 76.93 & 80.77 & 82.87 & **96.68** \\ \hline \multirow{8}{*}{ ELECTRA} & sst & 74.36 & **88.11** & 73.84 & 73.84 & 73.97 & 79.70 & 78.11 & 73.82 & 73.17 & 75.88 & **78.85** \\ & sujuj & 74.10 & 77.41 & **78.17** & **78.17** & 70.46 & 77.71 & 78.11 & **79.23** & **78.20** & **81.59** \\ & agn & 65.67 & **89.88** & 76.80 & 77.01 & 79.75 & 79.55 & 76.57 & 79.50 & 78.31 & **86.10** \\ & rrec & 97.48 & **99.80** & 97.26 & 97.56 & 97.48 & 96.21 & 97.07 & 97.55 & 89.20 & 97.54 \\ & spr & 86.06 & **96.72** & 78.56 & 81.75 & 84.63 & 83.04 & 76.77 & 84.20 & 84.69 & **98.28** \\ & ar & 84.58 & **91.66** & 87.74 & 88.44 & 90.64 & 88.53 & 87.52 & **91.98** & 90.35 & **95.47** \\ & mg & 80.52 & **90.63** & 73.83 & 74.78 & 80.41 & 76.67 & 73.35 & 76.86 & 78.47 & **92.96** \\ & ng & 77.61 & **82.47** & 76.45 & 77.73 & 80.83 & 79.11 & 75.57 & 79.93 & 80.75 & **89.13** \\ \hline \hline \end{tabular} \end{table} Table 1: The performance of OOD detection methods measured by AUROC (%). The best-performing white/black-box method is in **bold**. Open-box methods that outperform the best-performing white/black-box method are in **bold**. Higher is better. We test the performance of BLOOD\({}_{\text{mean}}\) and BLOOD\({}_{L-1}\) against the MSP baseline using the one-sided Man-Whitney U test; significant improvements (\(p<.05\)) are indicated as targets (\({}^{*}\)). which explains why \(\text{{BLOOD}}_{L-1}\) outperforms \(\text{{BLOOD}}_{\text{mean}}\) on the OOD detection task. This effect becomes apparent when comparing the representation change (upper row of Figure 1) with the \(\text{{BLOOD}}\) score (lower two rows of Figure 1) across layers, with a more significant difference in transition smoothness between ID and OOD data observed in layers where representations have undergone more substantial changes overall. The effect is particularly emphasized in ELECTRA, where the last layer undergoes the most significant change, resulting in \(\text{{BLOOD}}_{L-1}\) performing exceptionally well due to the radical smoothing of the final transformation. We also anticipate that the representations of ID data will undergo more significant changes after fine-tuning than those of OOD data, given the model's focus on the ID region of the representation space during training. 
This effect would cause a difference in smoothness because the ID region of the space would be smoothed out while the OOD region of the space would keep its original sharpness. Same as above, we calculate the change in representations using Euclidean distance of representations before and after fine-tuning. We then quantify the difference between changes in representations of ID and OOD data using the common language effect size (CLES) (McGraw & Wong, 1992), corresponding to the probability that representations of ID data exhibited greater changes after training than representations of OOD data.4 We measure this difference for the model's last layer and the mean difference across all layers. Footnote 4: The CLES statistics quantifies the effect size of the difference between two samples. It is equivalent to AUC of the corresponding univariate binary classifier, representing the probability that a randomly selected score from the first sample will exceed a randomly selected score from the second sample. Table 2 shows the effect size quantified using CLES for the changes in representations between ID and OOD data. In most setups, CLES is far above 50%, which means that representations of ID data Figure 1: The impact of change of each layer on \(\text{{BLOOD}}\) score across layers. Top row: Change in intermediate representations of training instances by layer for (a) RoBERTa and (b) ELECTRA. The scores are averaged across instances for the AR dataset. The black error bars denote the standard deviation. Middle row: \(\text{{BLOOD}}\) score by layer of models for AR before fine-tuning. Bottom row: \(\text{{BLOOD}}\) score by layer of models for AR after fine-tuning. underwent more significant changes than those of OOD data. The results imply that the learning algorithm's focus during training is on the ID region of the representation space. In contrast, the rest of the representation space is largely unaltered. Furthermore, the difference in transformation smoothness between layers, observed between ID and OOD data, can be attributed to the inherently non-smooth transformations of the initialized models. These non-smooth transformations gradually become smoother within the ID region. However, more complex datasets (bp, ar, mg, and ng) in conjunction with the RoBERTa model contradict our initial expectation. In these cases, CLES approaches or even drops below 50%. This indicates that the ID region of the representation space undergoes similar or even fewer changes compared to the rest of the representation space. Our interpretation of this phenomenon is that the algorithm faces greater difficulty in fitting the data, necessitating more substantial adjustments to the model. These significant alterations not only result in smoothing out transitions for ID data but, as a consequence, also make transformations in the rest of the space less smooth. This would explain the improved performance of BLOOD in conjunction with RoBERTa on these datasets, as the difference in transformation smoothness is attributed not only to the smoothing of the ID region of the space but also to the reduction in smoothness of the remaining space. This sharpening effect in the region populated by OOD data is evident when comparing sub-figures (c) and (e) in Figure 1. 
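For reference, the CLES values reported in Table 2 can be computed directly from the two samples of per-instance representation changes. The sketch below is a straightforward pairwise implementation, equivalent to the AUC of a univariate classifier as noted in Footnote 4; the array names are chosen purely for illustration.

```python
import numpy as np

def cles(id_changes, ood_changes):
    """Common language effect size: P(change_ID > change_OOD), ties counted as 1/2.

    id_changes, ood_changes: 1-D arrays of per-instance Euclidean distances
    ||r_init - r_FT||_2 for ID and OOD instances at a given layer.
    """
    id_c = np.asarray(id_changes)[:, None]    # shape (n_id, 1)
    ood_c = np.asarray(ood_changes)[None, :]  # shape (1, n_ood)
    greater = (id_c > ood_c).mean()
    ties = (id_c == ood_c).mean()
    return greater + 0.5 * ties
```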
### The effect of dataset complexity In the previous subsection, we demonstrated that BLOOD performs better on more complex datasets compared to simpler ones.5 To investigate this phenomenon further, we re-evaluate the performance of OOD detection methods on simplified versions of the more complex datasets. Specifically, we use the binary classification datasets bp2, ar2, and mg2, which are derived from bp, ar, and mg datasets, respectively, by retaining only two classes (cf. Appendix C for additional details). Footnote 5: We support this finding by calculating the Pearson correlation coefficient between MDL and difference in AUROC of BLOOD\({}_{\text{mean}}\) (to capture the influence on all layers in the model) and the baseline method (MSP) for each dataset. We found a significant (\(p<.05\)) correlation of 0.79 for RoBERTa and 0.73 for ELECTRA. Table 3 shows AUROC for the OOD detection task on simplified datasets, as well as the CLES of representation changes. We observe a decrease in AUROC for BLOOD in comparison to the AUROC on the original datasets, while the AUROC of other white/black-box methods shows an increase. The drop in AUROC for BLOOD can be explained by examining the CLES of representation changes, which exhibits a notable increase compared to the original datasets in the case of RoBERTa, and even a slight increase for ELECTRA. The rise in CLES of the change in re \begin{table} \begin{tabular}{l l l l l l l l l l l l l l l} \hline \hline Model & Dataset & \multicolumn{4}{c}{White-box/Black box} & \multicolumn{4}{c}{Open-box} & \multicolumn{4}{c}{CLES} \\ & & BLOOD\({}_{L-1}\) & MSP & ENT & EGY & MC & GRAD & ENSM & TEMP & MD & Mean & Last \\ \hline \multirow{3}{*}{RoBERTa} & BF2 & 79.66 & 89.44 & 89.74 & 88.23 & 88.92 & **89.84** & 87.95 & **89.92** & **97.66** & 94.57 & 84.27 \\ & ar2 & 88.20 & 93.33 & 93.33 & **94.27** & 93.30 & 93.58 & **94.55** & 93.34 & 99.91 & 91.84 & 80.47 \\ & MG2 & 84.78 & 78.13 & 78.13 & **85.44** & 82.62 & 78.28 & 83.95 & 78.23 & **97.48** & 86.80 & 70.25 \\ \hline \multirow{3}{*}{ ELECTRA} & BF2 & 71.71 & **93.23** & **93.23** & 92.51 & 92.61 & 93.20 & 91.25 & **93.34** & **98.75** & 97.28 & 94.87 \\ & ar2 & 90.67 & **96.16** & **96.16** & 93.80 & 95.47 & 96.14 & 95.20 & **96.20** & 93.22 & 97.07 & 96.22 \\ \cline{1-1} & MG2 & **91.41** & 88.02 & 88.02 & 85.08 & 88.55 & 88.10 & 84.12 & 87.95 & **98.28** & 88.28 & 87.10 \\ \hline \hline \end{tabular} \end{table} Table 3: The performance of OOD detection methods for the simplified datasets measured by AUROC (%). The best-performing white/black-box method is in **bold**. Open-box methods that outperform all white/black-box methods are in **bold**. Higher is better. Right side of the table shows a comparison of changes in representations between ID and OOD data using CLES (%). 
\begin{table}
\begin{tabular}{l l c c}
\hline \hline
Model & Dataset & Mean & Last \\
\hline
\multirow{8}{*}{RoBERTa} & sst & 66.86 \(\pm\) 5.90 & 63.91 \(\pm\) 5.64 \\
 & subj & 78.77 \(\pm\) 9.61 & 68.08 \(\pm\) 10.53 \\
 & agn & 73.28 \(\pm\) 3.59 & 60.18 \(\pm\) 4.38 \\
 & trec & 90.63 \(\pm\) 2.19 & 74.02 \(\pm\) 21.03 \\
 & bp & 55.98 \(\pm\) 29.52 & 39.65 \(\pm\) 16.38 \\
 & ar & 52.52 \(\pm\) 15.53 & 33.83 \(\pm\) 8.45 \\
 & mg & 34.40 \(\pm\) 9.56 & 46.23 \(\pm\) 11.90 \\
 & ng & 40.93 \(\pm\) 8.51 & 49.56 \(\pm\) 9.14 \\
\hline
\multirow{8}{*}{ELECTRA} & sst & 82.09 \(\pm\) 13.1 & 78.67 \(\pm\) 0.97 \\
 & subj & 77.43 \(\pm\) 13.52 & 75.61 \(\pm\) 14.63 \\
 & agn & 81.28 \(\pm\) 3.62 & 80.82 \(\pm\) 4.23 \\
 & trec & 99.86 \(\pm\) 0.05 & 99.10 \(\pm\) 0.54 \\
 & bp & 93.35 \(\pm\) 2.00 & 82.80 \(\pm\) 3.19 \\
 & ar & 82.21 \(\pm\) 9.98 & 81.95 \(\pm\) 7.70 \\
 & mg & 83.88 \(\pm\) 6.01 & 83.83 \(\pm\) 7.70 \\
 & ng & 79.08 \(\pm\) 8.84 & 80.16 \(\pm\) 4.60 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Effect size of the changes in representations between ID and OOD data. We calculate CLES (%) averaged across layers (Mean) and for the last layer (Last), showing averages over five random seeds with standard deviation.

The rise in CLES of the change in representations suggests that the models managed to learn the task without the need to sharpen the transformations of the OOD data, thereby reducing the ability of BLOOD to detect OOD instances. We suspect that the increase in AUROC for the other white/black-box methods may be attributed to the same factor that led to the AUROC decrease in BLOOD, namely the task's simplicity. However, this cause manifests differently. The simplified datasets, having fewer ambiguous instances in their test sets due to the reduced number of classes, allow the other (probabilistic) methods to more accurately attribute the estimated uncertainty to the OOD data. See Appendix D for a more detailed explanation and visualization using dataset cartography (Swayamdipta et al., 2020).

### Types of distribution shift

Another important aspect to consider for OOD detection is the type of distribution shift. Up to this point, we have only considered OOD data coming from a distribution entirely different from that of the ID data, which is referred to as Far-OOD by Baran et al. (2023). We next examine the performance of OOD detection methods on Near-OOD data, which arises from either a semantic or a background shift. For the semantic shift, in line with Ovadia et al. (2019), we designate the even-numbered classes of the NG dataset as ID and the odd-numbered classes as Near-OOD data. For the background shift, following Baran et al. (2023), we use the SST dataset as ID and the Yelp Review sentiment classification dataset (Zhang et al., 2015) as Near-OOD data.

Table 4 shows the OOD detection performance on the semantic and background shift detection tasks. For the semantic shift, BLOOD exhibits suboptimal performance. However, in the case of the background shift, it notably outperforms all other methods, including the open-box approaches, some of which even perform worse than random. We suspect the subpar performance of other OOD detection methods in background shift detection may be attributed to models performing better on Yelp data compared to the SST data they were trained on (cf. Appendix C), because Yelp has longer texts with more semantic cues, making models more confident on OOD data.
We speculate that the discrepancy in performance between semantic and background shifts arises because BLOOD is focused on the encoding process of the query instances, while other methods only examine the model's outputs. Consequently, BLOOD demonstrates greater sensitivity to the changes in the data-generating distribution. At the same time, other methods are better at detecting changes in the outputs, such as the introduction of an unknown class. In Appendix E we show that BLOOD is sensitive to the degree of distribution shift.

\begin{table}
\begin{tabular}{c c c c c c c c c c c}
\hline \hline
\multirow{2}{*}{Model} & \multirow{2}{*}{Shift} & \multicolumn{6}{c}{White-box/Black-box} & \multicolumn{3}{c}{Open-box} \\
 & & BLOOD\({}_{L-1}\) & MSP & ENT & EGY & MC & GRAD & ENSM & TEMP & MD \\
\hline
\multirow{2}{*}{RoBERTa} & Semantic & 61.61 & 69.46 & **69.50** & 69.41 & 68.34 & 69.36 & 68.91 & **70.56** & **72.63** \\
 & Background & **62.70** & 54.26 & 54.26 & 50.17 & 48.18 & 54.33 & 49.13 & 54.19 & 59.40 \\
\hline
\multirow{2}{*}{ELECTRA} & Semantic & 62.49 & 63.17 & 63.12 & 60.92 & 62.14 & **63.23** & **65.67** & 62.45 & **64.22** \\
 & Background & **59.35** & 42.96 & 42.96 & 38.68 & 37.96 & 42.77 & 41.25 & 42.63 & 39.31 \\
\hline \hline
\end{tabular}
\end{table}
Table 4: The performance of OOD detection methods on the task of Near-OOD detection measured by AUROC (%). The best-performing white/black-box method is in **bold**. Open-box methods that outperform all white/black-box methods are in **bold**. Higher is better.

## 5 Conclusion

We have proposed a novel method for out-of-distribution (OOD) detection for deep neural networks (DNNs) called BLOOD. The method analyzes representation transformations across intermediate layers and requires access only to the model's weights. Our evaluation on multiple text classification datasets using Transformer-based large pre-trained language models shows that BLOOD outperforms similar methods. Our analysis reveals that ID representations undergo smoother transformations between layers compared to OOD representations, because the model concentrates on the ID region of the representation space during training. We demonstrated that the learning algorithm retains the original sharpness of the transformations of OOD intermediate representations for simpler datasets but increases the sharpness for more complex datasets. Future work includes applying BLOOD to other domains and developing a theoretical framework to explain the observed differences in between-layer smoothness between ID and OOD data.
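As a practical complement to the results reported in Tables 3 and 4, the following minimal sketch shows how OOD-detection AUROC can be computed from scalar uncertainty scores, treating OOD instances as the positive class. The scores, sample sizes, and the use of scikit-learn are illustrative assumptions, not the authors' released evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ood_auroc(id_scores, ood_scores) -> float:
    """AUROC for separating OOD from ID data with a scalar uncertainty score,
    assuming higher scores indicate 'more likely OOD'."""
    scores = np.concatenate([id_scores, ood_scores])
    labels = np.concatenate([np.zeros(len(id_scores)), np.ones(len(ood_scores))])
    return float(roc_auc_score(labels, scores))

# Hypothetical uncertainty scores (e.g., a BLOOD-style smoothness score per instance).
rng = np.random.default_rng(0)
id_scores = rng.normal(loc=0.0, scale=1.0, size=500)
ood_scores = rng.normal(loc=1.0, scale=1.0, size=500)
print(f"AUROC = {100 * ood_auroc(id_scores, ood_scores):.2f}%")
```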
2305.17737
Shield tilings
We provide a complete description of the edge-to-edge tilings with a regular triangle and a shield-shaped hexagon with no right angle. The case of a hexagon with a right angle is also briefly discussed.
Thomas Fernique, Olga Mikhailovna Sizova
2023-05-28T14:42:33Z
http://arxiv.org/abs/2305.17737v1
# Shield Tilings

###### Abstract

We provide a complete description of the edge-to-edge tilings with a regular triangle and a shield-shaped hexagon with no right angle. The case of a hexagon with a right angle is also briefly discussed.

## 1 Introduction

Given a finite set of polygons called _tiles_, a _tiling_ is a covering of the Euclidean plane by interior-disjoint isometric copies of these polygons, with the property that the intersection of two polygons, if not empty, is either a vertex or an entire edge (the so-called _edge-to-edge_ condition). A common issue, in particular in statistical mechanics, is to describe all the possible tilings for a given set of tiles. This problem has been extensively studied in the case of \(2\times 1\) rectangles called _dominoes_ (see e.g. [1]) as well as in the case of a square and a triangle (see, e.g. [1, 1]). Here we focus on _shield tilings_, which are tilings by a unit regular triangle and a _shield_, defined as a hexagon with unit edges whose angles take two values \(\alpha\in(\frac{\pi}{3},\frac{2\pi}{3})\) and \(\beta=\frac{4\pi}{3}-\alpha\) that alternate when we go through the angles circularly (Fig. 1).

The motivation comes from a problem of classification of _disk packings_, which are sets of interior-disjoint disks in the Euclidean plane. More precisely, a disk packing is said to be _triangulated_ if its _contact graph_, that is, the graph which connects the centers of adjacent disks, is a triangulation. In [10], it was proven that there are only \(9\) values \(r<1\) that allow a triangulated packing of disks of size \(1\) and \(r\), and a description of the possible packings was also provided, except in the case \(r\approx 0.54\), a root of

\[r^{8}-8r^{7}-44r^{6}-232r^{5}-482r^{4}-24r^{3}+388r^{2}-120r+9.\]

Figure 1: The triangle tile and shield tiles for different values of \(\alpha\).

In this case, the possible packings turned out to correspond to shield tilings for \(\alpha\approx 99.34^{\circ}\) (Fig. 2) and only two examples of packings were provided. Further similar cases with various values of \(\alpha\) appear in the classification of triangulated packings by three sizes of disks [10].

Although some shield tilings can be found in the literature (see e.g. [1]), the classification established here is, to our best knowledge, new. Namely, we introduce in Section 2 two specific classes of shield tilings, called shield line tilings and shield triangle tilings, and prove in Section 3 that, for \(\alpha\neq\frac{\pi}{2}\), this covers all possible shield tilings:

**Theorem 1**: _For \(\alpha\neq\frac{\pi}{2}\), every shield tiling is either a shield line tiling or a shield triangle tiling._

For \(\alpha=\frac{\pi}{2}\) there turn out to be many more shield tilings, and a human-readable classification seems difficult to achieve. This is discussed in Section 4.

## 2 Shield line tilings and shield triangle tilings

A _shield line_ is an infinite strip of shield tiles, aligned along one of their symmetry axes, each intersecting the next one in a vertex that is also shared by two triangles; it comes in two _orientations_ (Fig. 3). Shield lines can be stacked one on top of the other to yield a shield tiling called a _shield line tiling_. Varying the orientations yields uncountably many different tilings, all periodic in the direction of the shield lines (Fig. 4).
Figure 2: Two disks whose triangulated packings can be seen as shield tilings.

Figure 3: Two parallel shield lines with opposite orientations.

A _shield triangle of order \(k\)_ is a set of identically-oriented shield tiles centered on a triangular pattern of size \(k\) of a triangular grid (whose size is minimal so that the shields are interior-disjoint), with triangles completing the pattern as depicted in Fig. 5. Such a shield triangle tiles the plane as the triangular grid does: this yields what we call a _shield triangle tiling of order \(k\)_. We also define the shield triangle tiling of _infinite order_, which is obtained as a limit when \(k\) tends to infinity. Fig. 6 illustrates this.

Figure 4: Shield line tilings with uniform, alternating and random orientations.

Figure 5: Shield triangles of order \(0\) through \(4\).

Figure 6: Shield triangle tilings of order \(1\), \(3\) and infinite.

## 3 Proof of Theorem 1

**Lemma 1**: _For \(\alpha\neq\frac{\pi}{2}\), the only ways shields and triangles can fit around a vertex are, up to isometry, the three ones depicted in Fig. 7._

Proof: Every vertex configuration yields natural numbers \(p\), \(q\) and \(r\) such that

\[p\alpha+q\beta+r\frac{\pi}{3}=2\pi.\]

Since \(\beta=\frac{4\pi}{3}-\alpha\) we can rewrite this equation as

\[(p-q)\alpha=2\pi-(4q+r)\frac{\pi}{3}.\]

If \(p=q\), then \(4q+r=6\). For \(q=0\) this yields \(r=6\): this corresponds to a configuration of six triangles called _hex_. For \(q=1\) this yields \(r=2\): this corresponds either to a configuration which alternates two triangles and two shields, called _bowtie_, or to a configuration with two neighboring triangles and two neighboring shields, called _fault_. For \(q\geq 2\) this yields \(r<0\), which is impossible. If \(p\neq q\), then the equation can be satisfied only for specific values of \(\alpha\). One has \(r\leq 6\), and \(\alpha>\frac{\pi}{3}\) yields \(p<6\) while \(\beta>\frac{2\pi}{3}\) yields \(q<3\). There are thus finitely many triples \((p,q,r)\) to check. An exhaustive check shows that, for \(\alpha\neq\frac{\pi}{2}\), the only cases are those depicted in Fig. 8. None of these exceptional configurations can however appear in a shield tiling because each one yields a vertex where two shields meet in their \(\beta\) angle that cannot be completed.

Figure 7: The vertex configurations _hex_, _bowtie_ and _fault_.

**Lemma 2**: _A shield tiling with no hex is a shield line tiling._

Proof: The text refers to the figure above it. Assume there is a fault in the tiling (left). Its two adjacent triangles cannot belong to a hex. They thus belong to two faults. There are two ways to place these faults (center), but only one can appear in a tiling (the other has a thin angle that cannot be completed). A vertex incident to two adjacent shields can only be completed by a fault, so we are brought back to the same question for new pairs of adjacent triangles (right). Repeating the above argument shows that the initial fault must belong to a so-called _fault line_, that is, the intersection of two adjacent shield lines with opposite orientations. Vertices with an open angle of \(\frac{\pi}{3}\) can only be completed by triangles. Add these triangles. If any of them is involved in a fault, then the same argument as above applies and yields a new fault line. Otherwise, that is, if all the newly added triangles are involved in a bowtie, then this yields a new shield line oriented as the previous one.
Iterating this argument shows that a tiling with a fault but no hex is made of stacked shield lines, hence is a shield line tiling as claimed. If there is no fault at all (nor hex), then there are only bowties and the only possible tiling is the shield line tiling with uniform orientation (Fig. 4, left). \(\sqcap\)\(\sqcup\)

**Lemma 3**: _A shield tiling with a hex is a shield triangle tiling._

_Proof._ If there are only hexes, then it is the shield triangle tiling of order \(0\). Otherwise, consider a hex with a neighbor vertex which is not a hex. This neighbor is necessarily a fault because it is shared by two triangles of the hex (left). A short case study then shows that the other neighbors cannot be hexes, and thus are also faults (center). Hence, any hex is surrounded by shield tiles (right). Now, the same argument as in Lemma 2 shows that six fault lines originate from every such hex. Each fault line either goes on forever or eventually meets another hex. We have seen in Lemma 2 that a fault vertex determines the direction of the fault line that contains it. No vertex can thus be shared by two fault lines with different directions. In other words, two fault lines cannot cross each other: they have to meet in a hex. This forces the hex vertices to be arranged on the vertices of a triangular grid as depicted above. Last, no fault line can originate in or pass through a triangle of this grid (otherwise it would have to cross a fault line on the boundary of the triangle). There are thus neither fault nor hex vertices inside these triangles: they all have to be filled by bowties. This yields a shield triangle tiling of order \(k\) (or of infinite order if there is only one hex). Theorem 1 directly follows from Lemmas 2 and 3.

## 4 Right shields

For \(\alpha=\frac{\pi}{2}\), the shield is said to be _right_. The case study of Lemma 1 then yields three exceptional vertex configurations (Fig. 9). These three exceptional configurations cannot be ruled out in the way those in Fig. 8 were. Indeed, consider a packing of regular dodecagons on the triangular grid with holes filled by triangles: a case study shows that every dodecagon can be filled in exactly three different ways (Fig. 10, left). In particular, this allows tilings where all six vertex configurations appear (Fig. 10, center). These dodecagon-based shield tilings also show that the number \(P_{n}\) of different patterns that can be obtained by taking all the tiles within distance \(n\) from some vertex of some shield tiling grows (at least) exponentially fast in \(n^{2}\). In particular, the _entropy_ of right shield tilings, defined as the limit superior of \(\log(P_{n})/n^{2}\), is positive. In contrast, the entropy of generic shield tilings is zero because \(P_{n}\) grows only exponentially in \(n\). And yet, the above dodecagon-based shield tilings do not exhaust the subject: there are still other shield tilings that cannot be obtained in this way (see e.g. Fig. 10, right). Like the square and triangle tilings mentioned in the introduction, right shield tilings may be too "wild" to admit a human-readable description...
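As a complement to the case analysis in the proof of Lemma 1 (and as our own illustration, not part of the paper), the short sketch below enumerates the vertex configurations \((p,q,r)\) satisfying \(p\alpha+q\beta+r\frac{\pi}{3}=2\pi\) with \(\beta=\frac{4\pi}{3}-\alpha\), using the bounds \(p<6\), \(q<3\) and \(r\leq 6\) established there.

```python
import itertools
import math

def vertex_configurations(alpha_deg: float, tol: float = 1e-9):
    """Triples (p, q, r) with p*alpha + q*beta + r*(pi/3) = 2*pi, beta = 4*pi/3 - alpha."""
    alpha = math.radians(alpha_deg)
    beta = 4 * math.pi / 3 - alpha
    configs = []
    # Bounds from the proof of Lemma 1: alpha > pi/3 gives p < 6, beta > 2*pi/3 gives q < 3, r <= 6.
    for p, q, r in itertools.product(range(6), range(3), range(7)):
        if abs(p * alpha + q * beta + r * math.pi / 3 - 2 * math.pi) < tol:
            configs.append((p, q, r))
    return configs

# Generic angle (e.g. alpha ~ 99.34 deg): only (0,0,6) (the hex) and (1,1,2)
# (realized by both the bowtie and the fault, which differ in cyclic order) survive.
print(vertex_configurations(99.34))
# Right shield, alpha = 90 deg: three extra configurations appear, cf. Fig. 9.
print(vertex_configurations(90.0))
```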
2309.01280
KMT-2021-BLG-1547Lb: Giant microlensing planet detected through a signal deformed by source binarity
We investigate the previous microlensing data collected by the KMTNet survey in search of anomalous events for which no precise interpretations of the anomalies have been suggested. From this investigation, we find that the anomaly in the lensing light curve of the event KMT-2021-BLG-1547 is approximately described by a binary-lens (2L1S) model with a lens possessing a giant planet, but the model leaves unexplained residuals. We investigate the origin of the residuals by testing more sophisticated models that include either an extra lens component (3L1S model) or an extra source star (2L2S model) to the 2L1S configuration of the lens system. From these analyses, we find that the residuals from the 2L1S model originate from the existence of a faint companion to the source. The 2L2S solution substantially reduces the residuals and improves the model fit by $\Delta\chi^2=67.1$ with respect to the 2L1S solution. The 3L1S solution also improves the fit, but its fit is worse than that of the 2L2S solution by $\Delta\chi^2=24.7$. According to the 2L2S solution, the lens of the event is a planetary system with planet and host masses $(M_{\rm p}/M_{\rm J}, M_{\rm h}/M_\odot)=\left( 1.47^{+0.64}_{-0.77}, 0.72^{+0.32}_{-0.38}\right)$ lying at a distance $D_{\rm L} = 5.07^{+0.98}_{-1.50}$~kpc, and the source is a binary composed of a subgiant primary of a late G or an early K spectral type and a main-sequence companion of a K spectral type. The event demonstrates the need of sophisticated modeling for unexplained anomalies for the construction of a complete microlensing planet sample.
Cheongho Han, Weicheng Zang, Youn Kil Jung, Ian A. Bond, Sun-Ju Chung, Michael D. Albrow, Andrew Gould, Kyu-Ha Hwang, Yoon-Hyun Ryu, In-Gu Shin, Yossi Shvartzvald, Hongjing Yang, Jennifer C. Yee, Sang-Mok Cha, Doeon Kim, Dong-Jin Kim, Seung-Lee Kim, Chung-Uk Lee, Dong-Joo Lee, Yongseok Lee, Byeong-Gon Park, Richard W. Pogge, L. A. G. Monard, Qiyue Qian, Zhuokai Liu, Dan Maoz, Matthew T. Penny, Wei Zhu, Fumio Abe, Richard Barry, David P. Bennett, Aparna Bhattacharya, Hirosame Fujii, Akihiko Fukui, Ryusei Hamada, Yuki Hirao, Stela Ishitani Silva, Yoshitaka Itow, Rintaro Kirikawa, Iona Kondo, Naoki Koshimoto, Yutaka Matsubara, Shota Miyazaki, Yasushi Muraki, Greg Olmschenk, Clément Ranc, Nicholas J. Rattenbury, Yuki Satoh, Takahiro Sumi, Daisuke Suzuki, Mio Tomoyoshi, Paul J. Tristram, Aikaterini Vandorou, Hibiki Yama, Kansuke Yamashita
2023-09-03T22:15:36Z
http://arxiv.org/abs/2309.01280v1
KMT-2021-BLG-1547Lb: Giant microlensing planet detected through a signal deformed by source binarity ###### Abstract Context: Aims:We investigate the previous microlensing data collected by the KMTNet survey in search of anomalous events for which no precise interpretations of the anomalies have been suggested. From this investigation, we find that the anomaly in the lensing light curve of the event KMT-2021-BLG-1547 is approximately described by a binary-lens (2L1S) model with a lens possessing a giant planet, but the model leaves unexplained residuals. Methods:We investigate the origin of the residuals by testing more sophisticated models that include either an extra lens component (3L1S model) or an extra source star (2L2S model) to the 2L1S configuration of the lens system. From these analyses, we find that the residuals from the 2L1S model originate from the existence of a faint companion to the source. The 2L2S solution substantially reduces the residuals and improves the model fit by \(\Delta\chi^{2}=67.1\) with respect to the 2L1S solution. The 3L1S solution also improves the fit, but its fit is worse than that of the 2L2S solution by \(\Delta\chi^{2}=24.7\). Results:According to the 2L2S solution, the lens of the event is a planetary system with planet and host masses \((M_{\rm p}/M_{\rm J},M_{\rm h}/M_{\odot})=(1.47^{+0.64}_{-0.77},0.72^{+0.32}_{- 0.38})\) lying at a distance \(D_{\rm L}=5.07^{+0.98}_{-1.50}\) kpc, and the source is a binary composed of a subgiant primary of a late G or an early K spectral type and a main-sequence companion of a K spectral type. The event demonstrates the need of sophisticated modeling for unexplained anomalies for the construction of a complete microlensing planet sample. Conclusions: ## 1 Introduction The planetary signal in a lensing light curve is mostly described by a 2L1S model, in which the lens comprises two masses of the planet and its host and the source is a single star (Mao and Paczynski, 1991; Gould and Loeb, 1992). It occasionally happens that a planetary signal cannot be precisely described by the usual 2L1S model because of several major causes. The first cause of the deviation of a planetary signal from a 2L1S form is the existence of an additional planet. In general, a planet induces two sets of caustics, in which one lies near the position of the planet host (central caustic), and the other lies away from the host (planetary caustic) at the position \(\mathbf{s}-1/\mathbf{s}\), where \(\mathbf{s}\) denotes the position vector of the planet from the host (Griest and Safizadeh, 1998; Han, 2006). For a lens system containing multiple planets, the central caustics induced by the individual planets appear in a common region around the planet host, and thus the magnification pattern of the central region deviates from that of a single-planet system (Gaudi et al., 1998), causing deformation of the planetary signal. There have been five cases of microlensing events with planetary signals deformed by multiple planets, including OGLE-2006-BLG-109 (Gaudi et al., 2008; Bennett et al., 2010), OGLE-2012-BLG-0026 (Han et al., 2013; Beaulieu et al., 2016), OGLE-2018-BLG-1011 (Han et al., 2019), OGLE-2019-BLG-0468 (Han et al., 2022d), and KMT-2021-BLG-1077 (Han et al., 2022a). The second cause for the deformation of a planetary signal is the binarity of the planet host. 
Under the lens configuration in which a planet orbits around one component of a wide binary star or around the barycenter of a close binary star, the binary companion induces additional perturbations in the central magnification region, and thus the signal of the planet can be deformed.
2307.06043
One-sided localization in dg categories
The notion of one-sided localization in the homotopy invariant context is developed for dg algebras and dg categories. Applications include a simple construction of derived localization of dg algebras and dg categories, and a refinement of Drinfeld's quotient of pretriangulated dg categories.
Joseph Chuang, Andrey Lazarev
2023-07-12T09:44:06Z
http://arxiv.org/abs/2307.06043v1
# One-sided localization in dg categories ###### Abstract. The notion of one-sided localization in the homotopy invariant context is developed for dg algebras and dg categories. Applications include a simple construction of derived localization of dg algebras and dg categories, and a refinement of Drinfeld's quotient of pretriangulated dg categories. 2020 Mathematics Subject Classification: 18N40, 18G80, 18G35 This work was partially supported by EPSRC grants EP/T029455/1 and EP/N016505/1. Our main result concerns the situation when \(\mathcal{C}\) is a pretriangulated dg category (such as the category of complexes of modules over a ring). Given a morphism \(v:B\to C\) in \(\mathcal{C}\), we extend it to a triangle (1.1) Then the operation of killing \(v\) in \(\mathcal{C}\) is equivalent to left-localizing at \(s\) or right-localizing at \(x\). This is the content of Theorem 6.1. If \(B=C\) and the map \(v\) is the identity map, then killing \(v\) is precisely the dg localization of Drinfeld's, cf. [5] which is the dg-version of the Verdier quotient by the object \(B=C\). As an application, we obtain the following refinement of Drinfeld's construction. Given a triangle (1.1), the Drinfeld quotient of \(\mathcal{C}\) by the full pretriangulated subcategory generated by \(B\) is quasi-equivalent to either \((\mathcal{C}\,/x)/v\) or \((\mathcal{C}\,/v)/x\). In particular, the dg categories \((\mathcal{C}\,/x)/v\) or \((\mathcal{C}\,/v)/x\) are quasi-equivalent and do not depend, up to quasi-equivalence, on individual morphisms \(v\) and \(x\) so long as they are consecutive morphisms of a triangle in \(\mathcal{C}\). The paper is organized as follows. In Section 2 we construct a relative cylinder in the category of dg categories; this is a categorical version of the Baues-Lemaire cylinder for dg algebras [1]. This construction is simpler than that of an absolute cylinder and is likely to be useful in other contexts. Section 3 introduces the notion of the killing of a closed morphism in a dg category and gives its homotopy invariant characterization using the previously developed construction of a cylinder. Section 4 similarly introduces and gives homotopy characterization of a one-sided localization whereas in Section 5 one-sided localization is used to construct ordinary, i.e. two-sided localization. The case of a dg algebra (which can be viewed as a dg category with one object) is considered in parallel and similar results are obtained. Finally, in Section 6 the case of pretriangulated categories is treated and the refinement of Drinfeld's quotient, mentioned above, is constructed. ### Notation and conventions We work in the category of dg modules over a fixed commutative ring \(\mathbf{k}\); the notation \(\operatorname{Hom}\) stands for the set of \(\mathbf{k}\)-linear homomorphisms. All dg-modules are homologically graded and we will write \(\Sigma\) for the homological suspension. A dg category is a category enriched over dg \(\mathbf{k}\)-modules and they will be denoted by calligraphic letters such as \(\mathcal{C}\). The category of dg categories will be denoted by dgCat. The subcategory of dgCat consisting of dg categories with one object will be denoted by DGA; it is the category of dg algebras. It is well-known that dgCat and DGA form model categories [6, 10]; we refer to [8] for the background material and terminology on dg categories and dg functors. 
The model categories dgCat and DGA are not left proper unless \(\mathbf{k}\) is a field and this may lead to homotopy non-invariant constructions; to avoid this, we will always tacitly assume our dg categories or dg algebras to be flat over \(\mathbf{k}\). Given an object \(O\) in a category \(\mathcal{C}\), we will denote by \(O\downarrow\mathcal{C}\) the category of objects under \(O\), i.e. morphisms in \(\mathcal{C}\) of the form \(O\to X\) with maps between such morphisms being obvious commutative triangles.

## 2. Relative cylinder in dg categories

Let \(\mathcal{C},\mathcal{D}\) be dg categories and \(F:\mathcal{C}\to\mathcal{D}\) be a dg functor. We assume that \(\mathcal{C}\) and \(\mathcal{D}\) have the same collection of objects and that \(F\) acts identically on objects. In addition, we assume that \(\mathcal{D}\) is free relative to \(\mathcal{C}\) when differentials are disregarded. In other words, \(\mathcal{D}\) is generated, as a nondifferential category, by \(\mathcal{C}\) and a collection of morphisms \(s_{\alpha}\in\operatorname{Hom}(\mathcal{D}),\alpha\in I\), where \(I\) is an indexing set with no relations imposed between the \(s_{\alpha}\). The functor \(F\) then exhibits \(\mathcal{C}\) as a dg subcategory of \(\mathcal{D}\). We will construct explicitly a cylinder object for \(\mathcal{D}\) in \(\mathcal{C}\downarrow\operatorname{dgCat}\), the undercategory of \(\mathcal{C}\) in \(\operatorname{dgCat}\). We start with the notion of a derivation adapted to the categorical framework.

**Definition 2.1**.: Let \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) be two dg categories and \(F,G:\mathcal{C}_{1}\to\mathcal{C}_{2}\) be dg functors. Then an _\((F,G)\)-derivation_ of \(\mathcal{C}_{1}\) with values in \(\mathcal{C}_{2}\) is a homogeneous \(\mathbf{k}\)-linear function \(f\) associating to any object \(c\) in \(\mathcal{C}_{1}\) an object \(f(c)\) in \(\mathcal{C}_{2}\) and to any morphism \(x:c\to c^{\prime}\) in \(\mathcal{C}_{1}\) a morphism \(f(x):f(c)\to f(c^{\prime})\) such that the Leibniz rule holds: \(f(x_{1}x_{2})=f(x_{1})G(x_{2})+(-1)^{|f||x_{1}|}F(x_{1})f(x_{2})\).

Consider the dg category \(\mathcal{D}_{1}\coprod_{\mathcal{C}}\mathcal{D}_{2}\) where \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) are both isomorphic to \(\mathcal{D}\); given a morphism \(s\in\operatorname{Hom}(\mathcal{D})\), we will write \(s^{1}\) and \(s^{2}\) for the corresponding morphisms in \(\mathcal{D}_{1}\coprod_{\mathcal{C}}\mathcal{D}_{2}\). Let \(C_{\mathcal{C}}(\mathcal{D})\) be the graded category freely generated over \(\mathcal{D}_{1}\coprod_{\mathcal{C}}\mathcal{D}_{2}\) by morphisms \(\bar{s}_{\alpha}:d\to d^{\prime}\), one for each generator \(s_{\alpha}:d\to d^{\prime}\) in \(\mathcal{D}\), and such that \(|\bar{s}_{\alpha}|=|s_{\alpha}|+1\). Let \(f\) be the \((i_{1},i_{2})\)-derivation of \(\mathcal{D}\) with values in \(C_{\mathcal{C}}(\mathcal{D})\) (where \(i_{1}\) and \(i_{2}\) are the two inclusions of \(\mathcal{D}\) into \(C_{\mathcal{C}}(\mathcal{D})\)) determined by the rule \(f(s_{\alpha})=\bar{s}_{\alpha}\).

**Definition 2.2**.: Let the differential on \(C_{\mathcal{C}}(\mathcal{D})\) be determined by the requirement that both \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) are dg subcategories in \(C_{\mathcal{C}}(\mathcal{D})\) and \(d(\bar{s}_{\alpha})=s^{1}_{\alpha}-s^{2}_{\alpha}-f(ds_{\alpha})\).
The dg category \(C_{\mathcal{C}}(\mathcal{D})\) so defined, is called the \(\mathcal{C}\)-relative cylinder of \(\mathcal{D}\) **Lemma 2.3**.: _The differential in \(C_{\mathcal{C}}(\mathcal{D})\) squares to zero._ Proof.: Let us define the \((i_{1},i_{2})\)-derivation \([d,f]\) of \(\mathcal{D}\) with values in \(C_{\mathcal{C}}(\mathcal{D})\) by the formula \([d,f](x)=d(f(x))+f(dx)\) where \(x\in\operatorname{Hom}(\mathcal{D})\). Then the following formula holds for any \(x\in\operatorname{Hom}(\mathcal{D})\): \[[d,f](x)=x^{1}-x^{2} \tag{2.1}\] Indeed, both sides of the above are \((i_{1},i_{2})\)-derivations that agree when applied to any generator \(s_{\alpha}\) using the definition of \(f\); this confirms the validity of (2.1) in general. Next, we have for any generator \(s_{\alpha}\in\operatorname{Hom}(\mathcal{D})\): \[d^{2}(\bar{s}_{\alpha}) =d(s^{1}_{\alpha}-s^{2}_{\alpha}-f(ds_{\alpha}))\] \[=ds^{1}_{\alpha}-ds^{2}_{\alpha}-d(f(ds_{\alpha}))\] \[=ds^{1}_{\alpha}-ds^{2}_{\alpha}-[d,f](ds_{\alpha})\] \[=0\] where (2.1) is used for the last equality. Note that the map \(C_{\mathcal{C}}(\mathcal{D})\to\mathcal{D}\) sending both \(\mathcal{D}_{1},\mathcal{D}_{2}\) identically to \(\mathcal{D}\) and the generators \(\bar{s}\) to zero is a map of dg categories. By construction, \(\mathcal{D}\coprod_{\mathcal{C}}\mathcal{D}\) is a dg subcategory of \(C_{\mathcal{C}}(\mathcal{D})\). In other words, \(C_{\mathcal{C}}(\mathcal{D})\) factors the codiagonal map \(\mathcal{D}\coprod_{\mathcal{C}}\mathcal{D}\to\mathcal{D}\) in \(\mathcal{C}\downarrow\operatorname{dgCat}\). We now assume, in addition, that \(\mathcal{D}\) is _cofibrant_ over \(\mathcal{C}\). More specifically, we assume that there is an additional weight indexing by natural numbers on the set of generators \(s_{\alpha}\) so that \(d(s_{\alpha})\) is of a sum of elements of strictly smaller weight, where the weight of \(\mathcal{C}\) is taken to be zero. **Proposition 2.4**.: _The dg category \(C_{\mathcal{C}}(\mathcal{D})\) is a good cylinder object in \(\mathcal{C}\downarrow\operatorname{dgCat}\)._ Proof.: One only needs to prove that the map \(C_{\mathcal{C}}(\mathcal{D})\to\mathcal{D}\) described above is a quasi-equivalence. Let us consider first the case when the differential in \(\mathcal{D}\) (and so also in \(\mathcal{C}\)) is zero. Let \(\mathcal{C}\langle\{s_{\alpha}\}\rangle\) be the graded quiver whose vertices are the objects of \(\mathcal{C}\) and graded spaces of arrows are spanned by \(\operatorname{Hom}(\mathcal{C})\) together with the generators \(s_{\alpha},\alpha\in I\). Let \(C_{\mathcal{C}}(\mathcal{C}\langle\{s_{\alpha}\}\rangle)\) be the dg quiver that has the same vertices as \(\mathcal{C}\langle\{s_{\alpha}\}\rangle\) and three types of basis arrows: \(s^{1},s^{2}\) and \(\bar{s}\) for every generator \(s\); the differential has the form \(d(\bar{s})=s^{1}-s^{2}\). It is clear that the homology of the latter quiver is isomorphic to \(\mathcal{C}\langle\{s_{\alpha}\}\rangle\). The dg category \(C_{\mathcal{C}}(\mathcal{D})\) is isomorphic to the \(\mathcal{C}\)-relative free category on \(C_{\mathcal{C}}(\mathcal{C}\langle\{s_{\alpha}\}\rangle)\). Since the formation of the free category commutes with the homology, the desired statement follows. Now let us consider the general case. 
Since the set of objects is constant throughout, it suffices to show that for any two objects \(d,d^{\prime}\), the map \(\operatorname{Hom}_{C_{\mathcal{C}}(\mathcal{D})}(d,d^{\prime})\to\operatorname{Hom}_{\mathcal{D}}(d,d^{\prime})\) is a quasi-isomorphism. Consider the increasing filtration on \(\operatorname{Hom}_{C_{\mathcal{C}}(\mathcal{D})}(d,d^{\prime})\)

\[F_{0}:=\operatorname{Hom}_{\mathcal{D}_{1}\coprod_{\mathcal{C}}\mathcal{D}_{2}}(d,d^{\prime})\subset F_{1}\subset\ldots\subset F_{p}\subset\ldots.\]

Here the filtration component \(F_{p}\) is a dg subspace of \(\operatorname{Hom}_{C_{\mathcal{C}}(\mathcal{D})}(d,d^{\prime})\) consisting of morphisms that are linear combinations of products of generators of weights summing to at most \(p\), where the weights of \(s^{1}_{\alpha},s^{2}_{\alpha}\) and \(\bar{s}_{\alpha}\) are the same as that of \(s_{\alpha}\). It is clear that this filtration is compatible with the differential and exhaustive. The differential \(d_{1}\) of the associated spectral sequence is induced by the differentials applied to the generators \(\bar{s}_{\alpha}\), so the \(E_{1}\)-page is isomorphic to \(\operatorname{Hom}_{C_{\mathcal{C}}(\operatorname{gr}\mathcal{D})}(d,d^{\prime})\), where \(\operatorname{gr}\mathcal{D}(d,d^{\prime})\) is the associated graded with respect to the filtration considered above. Comparing it to the spectral sequence of \(\operatorname{Hom}_{C_{\mathcal{C}}(\mathcal{D})}(d,d^{\prime})\), we conclude that they are isomorphic and so \(C_{\mathcal{C}}(\mathcal{D})\to\mathcal{D}\) is indeed a quasi-equivalence.

**Remark 2.5**.: This result is a categorical version of the Baues-Lemaire cylinder for free differential graded algebras, see [1]; in the case of categories with one object and \(\mathcal{C}=\mathbf{k}\), it specializes to the Baues-Lemaire result. However, in the situation of dg algebras, working relative to \(\mathbf{k}\) is equivalent to working absolutely, whereas in the categorical framework it is essential to work with categories with a fixed set of objects (which is achieved by considering relative categories of the specified kind), otherwise a cylinder object will be more complicated.

## 3. Killing morphisms in a dg category

In this section we describe the procedure of killing a morphism in a dg category in a homotopy invariant fashion. Let \(\mathcal{C}\) be a dg category, \(X,Y\) be objects of \(\mathcal{C}\) and \(v\in\operatorname{Hom}_{n}(X,Y)\) be a closed morphism between \(X\) and \(Y\), so \(d(v)=0\). We will construct a functor

\[\kappa_{v}:\mathcal{C}\downarrow\operatorname{dgCat}\to\operatorname{Sets}\]

from the undercategory of \(\mathcal{C}\) to the category of sets.

**Definition 3.1**.: Given a dg category \(\mathcal{D}\) supplied with a dg functor \(F:\mathcal{C}\to\mathcal{D}\), set

\[\kappa_{v}(\mathcal{D})=\{w\in\operatorname{Hom}_{\mathcal{D}}(F(X),F(Y))_{n+1}:d(w)=F(v)\}/B_{n+1}[\operatorname{Hom}_{\mathcal{D}}(F(X),F(Y))].\]

**Remark 3.2**.: Note that if \(w\) is such that \(d(w)=F(v)\) and \(u\in\operatorname{Hom}_{\mathcal{D}}(F(X),F(Y))_{n+1}\) satisfies \(d(u)=0\), then \(d(w+u)=F(v)\), so the quotient by boundaries \(B_{n+1}[\operatorname{Hom}_{\mathcal{D}}(F(X),F(Y))]\) is well-defined. Clearly the set \(\kappa_{v}(\mathcal{D})\) is either an affine space over \(H_{n+1}[\operatorname{Hom}_{\mathcal{D}}(F(X),F(Y))]\), in which case it is noncanonically isomorphic to \(H_{n+1}[\operatorname{Hom}_{\mathcal{D}}(F(X),F(Y))]\), or it is empty.
It is easy to see that \(\kappa_{v}\) only depends, up to a natural isomorphism, on the homology class of \(v\). Let \(\mathbf{k}\langle v\rangle\) be the category with two objects and a one-dimensional space of morphisms of degree \(n\) between them (here \(v\) stands for a basis vector for this space). Let \(\mathbf{k}\langle v,w\rangle\) be the (dg) category with two objects and a two-dimensional space of morphisms spanned by \(v\) and \(w\), in degrees \(n\) and \(n+1\), respectively, with the differential \(d(w)=v\). There is an obvious functor \(\mathbf{k}\langle v\rangle\to\mathbf{k}\langle v,w\rangle\) that is identical on objects and on the morphism \(v\); note that \(\mathbf{k}\langle v,w\rangle\) is cofibrant over \(\mathbf{k}\langle v\rangle\). Similarly there is a functor \(\mathbf{k}\langle v\rangle\to\mathcal{C}\), sending \(v\) to the specified morphism in \(\mathcal{C}\). This allows us to form the dg category \(\mathcal{C}\,/v:=\mathcal{C}\coprod_{\mathbf{k}\langle v\rangle}\mathbf{k}\langle v,w\rangle\). It is clear that \(\mathcal{C}\,/v\) is supplied with a dg functor from \(\mathcal{C}\) and the quasi-equivalence class of \(\mathcal{C}\,/v\) as an object in \(\mathcal{C}\downarrow\operatorname{dgCat}\) depends only on the homology class of \(v\). We say that \(\mathcal{C}\,/v\) is obtained from \(\mathcal{C}\) by killing (the homology class of) the morphism \(v\). Note that \(\mathbf{k}\langle v,w\rangle\) is quasi-equivalent to the coproduct \(\mathbf{k}\coprod\mathbf{k}\) in dgCat so that \(\mathcal{C}\,/v\simeq\mathcal{C}\coprod_{\mathbf{k}\langle v\rangle}^{\mathbb{L}}(\mathbf{k}\coprod\mathbf{k})\); here the superscript \(\mathbb{L}\) indicates the homotopy (derived) pushout of dg categories.

**Proposition 3.3**.: _Given a dg category \(\mathcal{C}\), a closed morphism \(v\in\operatorname{Hom}(\mathcal{C})\) and a dg category \(\mathcal{D}\) supplied with a dg functor \(\mathcal{C}\to\mathcal{D}\), there is a natural isomorphism of sets_

\[\kappa_{v}(\mathcal{D})\cong[\mathcal{C}\,/v,\mathcal{D}]_{\mathcal{C}\downarrow\operatorname{dgCat}}\]

_where the right hand side above refers to homotopy classes of maps in \(\mathcal{C}\downarrow\operatorname{dgCat}\). In other words, the functor \(\kappa_{v}\) is represented up to homotopy by \(\mathcal{C}\,/v\)._

Proof.: A dg functor \(\mathcal{C}\,/v\to\mathcal{D}\) in the undercategory of \(\mathcal{C}\) is equivalent to choosing a morphism \(w\in\operatorname{Hom}(\mathcal{D})\) for which \(d(w)=F(v)\). The cylinder object \(C_{\mathcal{C}}(\mathcal{C}\,/v)\) has the same objects as \(\mathcal{C}\), freely generated over \(\mathcal{C}\) by morphisms \(w\), \(w^{\prime}\) and \(s\), with \(d(w)=v\), \(d(w^{\prime})=v\) and \(d(s)=w-w^{\prime}\). It is then clear that two maps \(\mathcal{C}\,/v\to\mathcal{D}\) in \(\mathcal{C}\downarrow\operatorname{dgCat}\) corresponding to morphisms \(w\) and \(w^{\prime}\) are homotopic if and only if \(w\) and \(w^{\prime}\) differ by a boundary in \(\operatorname{Hom}(\mathcal{D})\).

**Corollary 3.4**.: _The functor \(\kappa_{v}\) lifts to a functor from the homotopy category of \(\mathcal{C}\downarrow\mathrm{dgCat}\) to \(\mathrm{Sets}\)._

The killing construction has an analogue in the category DGA, which we will now sketch. Let \(C\) be a dg algebra and \(v\in C_{n}\) be an \(n\)-cycle in \(C\). Regarding \(C\) as a dg category with one object, we can form the killing construction \(C/v\). The definition of the functor \(\kappa_{v}\) is likewise similar.
**Definition 3.5**.: Given a dg algebra \(C\) supplied with a dg map \(f:C\to D\), set

\[\kappa_{v}(D)=\{w\in D_{n+1}:d(w)=f(v)\}/B_{n+1}(D).\]

As before, it is easy to see that \(\kappa_{v}\) does not depend, up to a natural isomorphism, on the choice of \(v\) inside its homology class. The following result holds (which implies, as before, that \(\kappa_{v}\) lifts to the homotopy category of \(C\downarrow\mathrm{DGA}\)).

**Proposition 3.6**.: _Given a dg algebra \(C\), a cycle \(v\in C_{n}\) and a dg algebra \(D\) supplied with a dg map \(C\to D\), there is a natural isomorphism of sets_

\[\kappa_{v}(D)\cong[C/v,D]_{C\downarrow\mathrm{DGA}}\]

_where the right hand side above refers to homotopy classes of maps in \(C\downarrow\mathrm{DGA}\). In other words, the functor \(\kappa_{v}\) is represented up to homotopy by \(C/v\)._

Proof.: Note that the category DGA of dg algebras can be viewed as a full subcategory of \(\mathbf{k}\downarrow\mathrm{dgCat}\) where \(\mathbf{k}\) stands for the dg category with one object and \(\mathbf{k}\) worth of morphisms. This inclusion is _not_ compatible with model structures on DGA and dgCat since not every surjective map between dg algebras is a fibration of the corresponding dg categories. Nevertheless, viewing the homotopy category as a localization of DGA either on its own or inside \(\mathbf{k}\downarrow\mathrm{dgCat}\), we see that the same maps are being inverted and therefore the above inclusion induces a full embedding on the corresponding homotopy categories. With this, the desired result follows directly from Proposition 3.3.

**Remark 3.7**.: It is also possible to carry out the proof of Proposition 3.6 solely inside DGA where similar arguments would apply.

## 4. One-sided inversion in dg categories

We will describe the procedure of one-sided (left or right) inversion in a dg category in a homotopy invariant fashion. The arguments follow those in the previous section and are only slightly more complicated (owing to the more complicated nature of the relevant relative cylinder). Let \(\mathcal{C}\) be a dg category, \(O_{1},O_{2}\) be objects of \(\mathcal{C}\) and \(v\in H_{n}(\operatorname{Hom}(O_{1},O_{2}))\). We will construct a functor

\[\rho_{v}:\mathcal{C}\downarrow\mathrm{dgCat}\to\mathrm{Sets}\]

from the undercategory of \(\mathcal{C}\) to sets.

**Definition 4.1**.: Given a dg category \(\mathcal{D}\) supplied with a dg functor \(F:\mathcal{C}\to\mathcal{D}\), set

\[\rho_{v}(\mathcal{D})=\{w\in H_{-n}\operatorname{Hom}_{\mathcal{D}}(F(O_{2}),F(O_{1})):F(v)w=1\in H_{0}\operatorname{End}_{\mathcal{D}}(F(O_{2}))\}.\]

**Remark 4.2**.: The set \(\rho_{v}(\mathcal{D})\) is either empty or it is noncanonically isomorphic to the set of right zero-divisors of \(F(v)\) in \(H_{-n}\operatorname{Hom}_{\mathcal{D}}(F(O_{2}),F(O_{1}))\) (since any two elements in \(\rho_{v}(\mathcal{D})\) differ by a right zero-divisor of \(F(v)\)). Since homotopic functors between dg categories induce the same maps on the graded homology categories, it follows that the functor \(\rho_{v}\) lifts to the homotopy category of \(\mathcal{C}\downarrow\mathrm{dgCat}\).

Recall that we denoted by \(\mathbf{k}\langle v\rangle\) the category with two objects \(O_{1}\) and \(O_{2}\) and having its morphisms generated by a single map \(v:O_{1}\to O_{2}\) between them, with \(|v|=n\).
Let \(Q_{v}\) stand for the category with the same two objects and generated by three morphisms \(v:O_{1}\to O_{2}\), \(w:O_{2}\to O_{1}\) and \(u:O_{2}\to O_{2}\), with the differential \(d(u)=vw-1\) and \(d(v)=d(w)=0\). The degrees of \(v,w\) and \(u\) are \(n,-n\) and \(1\) respectively. Imposing the relation \(u=0\) in \(Q_{v}\) we obtain a category that we denote by \(\mathbf{k}\langle v^{-}\rangle\). The following result shows that \(Q_{v}\) is a resolution of \(\mathbf{k}\langle v^{-}\rangle\). **Proposition 4.3**.: _The quotient map \(Q_{v}\to\mathbf{k}\langle v^{-}\rangle\) is a quasi-equivalence._ Proof.: Let \(Q\) be the quiver with two vertices \(O_{1},O_{2}\) and two arrows \(v:O_{1}\to O_{2},w:O_{2}\to O_{1}\). Let \(Q\langle u\rangle\) be the same quiver with the additional arrow \(u:O_{2}\to O_{2}\). Denote by \(1_{O_{1}}\) and \(1_{O_{2}}\) idempotents associated with vertices \(O_{1}\) and \(O_{2}\). We consider the algebra \(\mathbf{k}Q/(vw-1_{O_{2}})\) and the dg algebra \(\mathbf{k}Q\langle u\rangle\) with \(d(u)=vw-1_{O_{2}}\). Note that \(\mathbf{k}Q/(vw-1_{O_{2}})\) is the category algebra of \(\mathbf{k}\langle v^{-}\rangle\) and \(\mathbf{k}Q\langle u\rangle\) with the indicated differential, is the category algebra of \(Q_{v}\). The functor \(Q_{v}\to\mathbf{k}\langle v^{-}\rangle\), being bijective on objects, induces the quotient map \(j:\mathbf{k}Q\langle u\rangle\to\mathbf{k}\langle v^{-}\rangle\) with \(d(u)=vw-1_{O_{2}}\). The desired statement is equivalent to \(j\) being a quasi-isomorphism. Let us consider the modified differential on the complex \(\mathbf{k}Q\langle u\rangle\) given by \(d^{\prime}(u)=vw\). Since the commutative semisimple algebra \(\mathbf{k}\times\mathbf{k}\) splits off \(\mathbf{k}Q\langle u\rangle\) as a \(\mathbf{k}\times\mathbf{k}\)-bimodule (though not as a dg algebra), the homology with respect to the modified differential is unchanged as a graded vector space. But \((\mathbf{k}Q\langle u\rangle,d^{\prime})\) is the bimodule cobar-construction of the graded Koszul algebra \(\mathbf{k}Q/(vw)\). The homology of this cobar-construction is, therefore, spanned over \(\mathbf{k}\times\mathbf{k}\) by the elements \(v,w\) and \(vw\). Therefore the homology of \(kQ\langle u\rangle\) with the old differential \(d\) is spanned over \(\mathbf{k}\times\mathbf{k}\) by the same elements. The desired claim follows. Clearly, \(\mathbf{k}\langle v\rangle\) is a subcategory of \(Q_{v}\) and \(Q_{v}\) is cofibrant over \(\mathbf{k}\langle v\rangle\). This allows us to form the dg category \(R_{v}(\mathcal{C}):=\mathcal{C}\coprod_{\mathbf{k}\langle v\rangle}Q_{v}\). **Corollary 4.4**.: _There is a weak equivalence \(R_{v}(\mathcal{C})\simeq\mathcal{C}\coprod_{\mathbf{k}\langle v\rangle}^{ \mathbb{L}}\mathbf{k}[v^{-}]\)._ Proof.: This is immediate from Proposition 4.3. It is clear that \(R_{v}(\mathcal{C})\) is supplied with a dg functor from \(\mathcal{C}\), that is to say it is an object in \(\mathcal{C}\downarrow\mathrm{dgCat}\). The following result holds. **Proposition 4.5**.: _Given a dg category \(\mathcal{C}\), a homology class \(v\in H_{n}\operatorname{Hom}(\mathcal{C})\) and a dg category \(\mathcal{D}\) supplied with a dg functor \(\mathcal{C}\to\mathcal{D}\) there is a natural isomorphism of sets_ \[\rho_{v}(\mathcal{D})\cong[R_{v}(\mathcal{C}),\mathcal{D}]_{\mathcal{C} \downarrow\mathrm{dgCat}}\] _where the right hand side above refers to homotopy classes of maps in \(\mathcal{C}\downarrow\mathrm{dgCat}\). 
In other words, the functor \(\rho_{v}\) is represented up to homotopy by \(R_{v}\)._ Proof.: Let \(F:R_{v}(\mathcal{C})\to\mathcal{D}\) be a map in \(\mathcal{C}\downarrow\mathrm{dgCat}\). As such, it is determined by the images of the morphisms \(w\) and \(u\) in \(\mathcal{D}\). The morphism \(F(w)\) is a cycle in \(\operatorname{Hom}_{\mathcal{D}}(F(X),F(Y))\) and \(d(F(u))=F(v)F(w)-1\). Therefore, the functor \(F\) determines an element in \(\rho_{v}(\mathcal{D})\) and every element in \(\rho_{v}(\mathcal{D})\) comes from a functor \(R_{v}\to\mathcal{D}\). Let us suppose that two maps \(F_{1},F_{2}:R_{v}\to\mathcal{D}\) are homotopic in \(\mathcal{C}\downarrow\mathrm{dgCat}\); so there is a dg functor \(H:C_{\mathcal{C}}(R_{v})\to\mathcal{D}\) implementing a homotopy between \(F_{1}\) and \(F_{2}\). The dg category \(C_{\mathcal{C}}(R_{v})\) has the same objects as \(R_{v}\), and it is freely generated over \(R_{v}\) by morphisms \(w^{\prime}:O_{2}\to O_{1}\), \(u^{\prime}:O_{2}\to O_{2}\), \(s_{1}:O_{2}\to O_{1}\) and \(s_{2}:O_{2}\to O_{2}\). The non-zero differentials on these additional generators are given by the formulas: \[d(u^{\prime}) =vw^{\prime}-1\] \[d(s_{1}) =w-w^{\prime}\] \[d(s_{2}) =u-u^{\prime}+vs_{1}.\] It follows that the \(F_{1}(w),F_{2}(w)\in\operatorname{Hom}(\mathcal{D})\) are homologous cycles since \[F_{1}(w)-F_{2}(w^{\prime})=H(w)-H(w^{\prime})=dH(s_{1}).\] So the condition that \(F_{1}\) and \(F_{2}\) are homotopic implies that the homology classes of right inverses to \(F_{1}(w)\) and \(F_{2}(w)\) coincide. In other words, we have proved that there is a well-defined map \([R_{v},\mathcal{D}]_{\mathcal{C}\downarrow\mathrm{dgCat}}\to\rho_{v}(\mathcal{ D})\) and that it is surjective. To show that it is injective, suppose that we have two maps \(F_{1}\) and \(F_{2}\) as above, such that \(F_{1}(w),F_{2}(w)\in\operatorname{Hom}(\mathcal{D})\) are homologous cycles. We can assume that \(F_{1}(w)\) and \(F_{2}(w)\) are in fact equal; indeed if this is not the case, then replace \(F_{1}\) with a homotopic map \(F_{1}^{\prime}\) where the corresponding homotopy has \(s_{2}\)-component zero and for which \(F_{1}^{\prime}(w)=F_{2}(w)\). Then the homology class of \(F_{1}(u)-F_{2}(u)\) is well-defined. Let us define a homotopy \(H:C_{\mathcal{C}}(R_{v})\to\mathcal{D}\) by setting \(H(w)=F_{1}(w)\), \(H(w^{\prime})=F_{2}(w^{\prime})\), \(H(u)=F_{1}(u)\), \(H(u^{\prime})=F_{2}(u)\), \(H(s_{1})=F_{1}(u)-F_{2}(u)\) and \(H(s_{2})=v(F_{1}(u)-F_{2}(u))\). Then \(H\) is a homotopy between \(F_{1}\) and \(F_{2}\), proving the required injectivity. There is an analogue of this result in the category \(\operatorname{DGA}\). Let \(C\) be a dg algebra and \(v\in H_{n}(C)\). Regarding \(C\) as a dg category, we can form the corresponding one-sided derived localization \(R_{v}(C)\). The definition of \(\rho_{v}\) is likewise similar. **Definition 4.6**.: Given a dg algebra \(D\) supplied with a dg map \(F:\mathcal{C}\to\mathcal{D}\), set \[\rho_{v}(D)=\{w\in H_{-n}(D):F(v)F(w)=1\in H_{0}(D).\}\] Then the following result holds, which is a direct corollary of Proposition 4.5. **Proposition 4.7**.: _Given a dg algebra \(C\), a homology class \(v\in H_{n}(C)\) and a dg algebra \(D\) supplied with a dg map \(C\to D\), there is a natural isomorphism of sets_ \[\rho_{v}(D)\cong[R_{v},\mathcal{D}]_{C\downarrow\operatorname{DGA}}\] _where the right hand side above refers to homotopy classes of maps in \(C\downarrow\operatorname{DGA}\). 
In other words, the functor \(\rho_{v}\) is represented up to homotopy by \(R_{v}\)._

**Remark 4.8**.: The construction of the one-sided localization \(R_{v}\) can be carried out entirely in the category \(\operatorname{DGA}\). The role of \(Q_{v}\) is played by the algebra \(Q_{v}^{\operatorname{alg}}\), which is free on generators \(v,w\) and \(u\) and has the differential \(d(u)=vw-1\). As in the categorical case, it is a resolution of the algebra \(\mathbf{k}[v^{-}]^{\operatorname{alg}}:=\mathbf{k}\langle v,w\rangle/(vw=1)\). Therefore, for a dg algebra \(A\) and its cycle \(v\), we have

\[R_{v}(A)\cong A\coprod_{\mathbf{k}[v]}Q_{v}^{\operatorname{alg}}\simeq A\coprod_{\mathbf{k}[v]}^{\operatorname{L}}\mathbf{k}[v^{-}]^{\operatorname{alg}}.\]

We omit the details.

## 5. Two-sided inverses from one-sided ones

In this section we apply one-sided localization in dg categories to obtain the ordinary (i.e. two-sided) localization. This recovers some of the results of [11] and [2]. Recall that given a dg category \(\mathcal{C}\) and a morphism \(v\in\operatorname{Hom}_{\mathcal{C}}(O_{1},O_{2})\), we have defined the one-sided (more precisely, right-sided) localization \(R_{v}(\mathcal{C})\) as

\[R_{v}(\mathcal{C}):=\mathcal{C}\coprod_{\mathbf{k}\langle v\rangle}Q_{v}\simeq\mathcal{C}\coprod_{\mathbf{k}\langle v\rangle}^{\operatorname{L}}\mathbf{k}\langle v^{-}\rangle,\]

where \(\mathbf{k}\langle v\rangle\) is the category with two objects and a one-dimensional space of morphisms between them spanned by \(v\), \(\mathbf{k}\langle v^{-}\rangle\) is the same category where \(v\) has been freely right-inverted, and \(Q_{v}\) is a cofibrant replacement of \(\mathbf{k}\langle v^{-}\rangle\). Symmetrically, let us define the category \(\mathbf{k}\langle^{-}v\rangle\) generated over \(\mathbf{k}\langle v\rangle\) by a morphism \(w^{\prime}\) such that \(w^{\prime}v=1\). Arguing as in Proposition 4.3, we construct a resolution \(L_{v}\) of \(\mathbf{k}\langle^{-}v\rangle\) with generators \(v,w^{\prime},u^{\prime}\) and the differential \(d(u^{\prime})=w^{\prime}v-1\). This allows us to define the (derived) left localization \(L_{v}(\mathcal{C})\) of \(\mathcal{C}\) at \(v\) as

\[L_{v}(\mathcal{C}):=\mathcal{C}\coprod_{\mathbf{k}\langle v\rangle}L_{v}\simeq\mathcal{C}\coprod_{\mathbf{k}\langle v\rangle}^{\operatorname{L}}\mathbf{k}\langle^{-}v\rangle.\]

We will construct a functor

\[\lambda_{v}:\mathcal{C}\downarrow\mathrm{dgCat}\to\mathrm{Sets}\]

from the undercategory of \(\mathcal{C}\) to sets; it is a left-sided version of the functor \(\rho_{v}\) considered earlier.

**Definition 5.1**.: Given a dg category \(\mathcal{D}\) supplied with a dg functor \(F:\mathcal{C}\to\mathcal{D}\), set

\[\lambda_{v}(\mathcal{D})=\{w\in H_{-n}\operatorname{Hom}_{\mathcal{D}}(F(O_{2}),F(O_{1})):wF(v)=1\in H_{0}\operatorname{End}_{\mathcal{D}}(F(O_{1}))\}.\]

Then we have the following result, whose proof is similar to that of Proposition 4.5.

**Proposition 5.2**.: _Given a dg category \(\mathcal{C}\), a homology class \(v\in H_{n}\operatorname{Hom}(\mathcal{C})\) and a dg category \(\mathcal{D}\) supplied with a dg functor \(\mathcal{C}\to\mathcal{D}\), there is a natural isomorphism of sets_

\[\lambda_{v}(\mathcal{D})\cong[L_{v}(\mathcal{C}),\mathcal{D}]_{\mathcal{C}\downarrow\mathrm{dgCat}}\]

_where the right hand side above refers to homotopy classes of maps in \(\mathcal{C}\downarrow\mathrm{dgCat}\).
In other words, the functor \(\lambda_{v}\) is represented up to homotopy by \(L_{v}\)._ Let us now introduce the dg category \(I_{v}\) as follows: \[I_{v}:=L_{v}(R_{v}(\mathbf{k}\langle v\rangle))\cong R_{v}(L_{v}(\mathbf{k} \langle v\rangle)).\] It is clear that \(I_{v}\) is cofibrant over \(\mathbf{k}\langle v\rangle\); it has the same objects \(O_{1},O_{2}\) as \(\mathbf{k}\langle v\rangle\) and is freely generated by the morphisms \(v:O_{1}\to O_{2}\), \(w,w^{\prime}:O_{2}\to O_{1}\), \(u:O_{2}\to O_{2}\) and \(u^{\prime}:O_{1}\to O_{1}\). The differentials are defined thus: \(d(u)=vw-1,d(u^{\prime})=w^{\prime}v-1\). Let us now denote by \(\mathbf{k}\langle v,v^{-1}\rangle\) the category \(\mathbf{k}\langle v\rangle\) with \(v\) inverted on both sides; in other words \(\operatorname{Hom}(O_{1},O_{2})\) and \(\operatorname{Hom}(O_{2},O_{1})\) are one-dimensional vector spaces spanned by \(v,v^{-1}\) with \(v\circ v^{-1}=\operatorname{id}_{O_{1}};v^{-1}\circ v=\operatorname{id}_{O_{2}}\). Then we have the following result. **Proposition 5.3**.: _The functor \(I_{v}\to\mathbf{k}\langle v,v^{-1}\rangle\) given by setting \(u,u^{\prime}\) to zero, is a quasi-equivalence; in other words, \(I_{v}\) is a cofibrant replacement of \(\mathbf{k}\langle v,v^{-1}\rangle\)._ Proof.: The claimed result can be proved algebraically, similar to 4.3; we give a more geometric proof. Consider the simplicial set \(K\) with two vertices (corresponding to the objects \(O_{1}\) and \(O_{2}\)), three non-degenerate \(1\)-simplices connecting them (corresponding to the morphisms \(v,w\) and \(w^{\prime}\)) and two further non-degenerate \(1\)-simplices (corresponding to the morphisms \(u\) and \(u^{\prime}\)). The geometric realization of \(K\) is a \(2\)-disc; the two vertices are two diametrically opposite points on its boundary, the three \(1\)-cells are the diameter and two half-circles connecting them, and the remaining two \(2\)-cells are the two half-discs bounded by the two half-circles and the given diameter. The category \(I_{v}\) is obtained from \(K\) by the application of the categorical cobar-construction to \(C_{*}(K)\), the simplicial chain coalgebra of \(K\) (cf. [7, Section 3] regarding this notion; note also, that \(I_{v}\) is the value of the left adjoint to the dg nerve functor, cf. [9, Section 1.3.1]). Since the geometric realization of \(K\) is a \(2\)-disc, it is contractible. By direct inspection \(K\) is group-like, i.e. its fundamental category is a groupoid; therefore by [7, Corollary 4.20], the dg category \(I_{v}\) is quasi-equivalent to the category with a single object and \(\mathbf{k}\) worth of its endomorphisms. Therefore, the quotient map \(I\to\mathbf{k}\langle v,v^{-1}\rangle\) is a quasi-equivalence as claimed. **Remark 5.4**.: The simplicial set \(K\) is a (two-sided) localization of the simplicial interval viewed as an \(\infty\)-category; this point of view is presented in [4, Section 3.3]. We can now define the derived (two-sided) localization of a dg category at a given morphism. **Definition 5.5**.: Let \(\mathcal{C}\) be a dg category, \(O_{1},O_{2}\) be two objects of \(\mathcal{C}\) and \(v\in H_{n}\operatorname{Hom}(O_{1},O_{2})\) determining a homotopy class of a functor \(\mathbf{k}\langle v\rangle\to\mathcal{C}\). 
Then the _derived localization_ of \(\mathcal{C}\) at \(v\) is the dg category \[\mathbb{L}_{v}\,\mathcal{C}:=\mathcal{C}\coprod_{\mathbf{k}\langle v\rangle}I_{v}\simeq\mathcal{C}\coprod_{\mathbf{k}\langle v\rangle}^{\mathbb{L}}\mathbf{k}\langle v,v^{-1}\rangle.\] The dg category \(\mathbb{L}_{v}\,\mathcal{C}\) comes equipped with (a homotopy class of) a functor \(F:\mathcal{C}\to\mathbb{L}_{v}\,\mathcal{C}\); moreover \(F(v)\) is an invertible morphism in \(H_{n}(\mathbb{L}_{v}\,\mathcal{C})\). It turns out that \(\mathbb{L}_{v}\,\mathcal{C}\) is universal for this property. **Proposition 5.6**.: _Let \(G:\mathcal{C}\to\mathcal{D}\) be a dg functor for which \(G(v)\) is invertible in \(H_{*}(\mathcal{D})\). Then there exists a unique up to homotopy functor \(\mathbb{L}_{v}\,\mathcal{C}\to\mathcal{D}\) making the following diagram commute in the homotopy category of \(\mathcal{C}\downarrow\mathrm{dgCat}\):_ \[\begin{array}{ccc}\mathcal{C}&\xrightarrow{\ F\ }&\mathbb{L}_{v}\,\mathcal{C}\\ &\searrow_{G}&\downarrow\\ &&\mathcal{D}\end{array}\] Proof.: Since the inverse to \(G(v)\) in \(H_{*}(\mathcal{D})\) is unique, Proposition 4.5 implies that \(G\) extends to a functor \(R_{v}(\mathcal{C})\to\mathcal{D}\) uniquely up to homotopy in the undercategory of \(\mathcal{C}\). Similarly, Proposition 5.2 implies that it further extends to a functor \(L_{v}R_{v}(\mathcal{C})\simeq\mathbb{L}_{v}(\mathcal{C})\to\mathcal{D}\) uniquely up to homotopy in the undercategory of \(R_{v}(\mathcal{C})\) and, a fortiori, uniquely in the homotopy undercategory of \(\mathcal{C}\). Now, let \(C\) be a dg algebra and \(v\in H_{n}(C)\). Then \(\mathbb{L}_{v}(C)\) is a dg algebra supplied with a map \(f:C\to\mathbb{L}_{v}(C)\). The following result holds. **Proposition 5.7**.: _Let \(g:C\to D\) be a dg algebra map for which \(g(v)\) is invertible in \(H_{*}(D)\). Then there exists a unique up to homotopy map \(\mathbb{L}_{v}C\to D\) making the following diagram commute in the homotopy category of \(C\downarrow\mathrm{DGA}\):_ \[\begin{array}{ccc}C&\xrightarrow{\ f\ }&\mathbb{L}_{v}C\\ &\searrow_{g}&\downarrow\\ &&D\end{array}\] Proof.: Completely analogous to the proof of Proposition 5.6 taking into account Proposition 4.7 and its left-sided analogue.
## 6. One-sided localization in pretriangulated categories
In this section we show how killing morphisms in pretriangulated dg categories is related to one-sided inversion of morphisms. **Theorem 6.1**.: _Let \(\mathcal{C}\) be a pretriangulated dg category, and let_ \[A\xrightarrow{\ x\ }B\xrightarrow{\ v\ }C\xrightarrow{\ s\ }\Sigma A\tag{6.1}\] _be a triangle in \(\mathcal{C}\). Then we have weak equivalences_ \[\mathcal{C}/v\simeq R_{x}(\mathcal{C})\] _and_ \[\mathcal{C}/v\simeq L_{s}(\mathcal{C})\] _in \(\mathcal{C}\downarrow\mathrm{dgCat}\)._ Proof.: We prove the first weak equivalence; the argument for the second is similar. The morphisms in the triangle (6.1) are degree \(0\) cycles, and the composition of any two is a boundary. So we have \(vx=d(t)\) and \((\Sigma x)s=d(u)\) for some morphisms \(t\) and \(u\) of degree \(1\); it is always possible to choose \(t\) and \(u\) so that \(vu+ts\) is homologous to \(1_{C}\). For a dg category \(\mathcal{D}\) over \(\mathcal{C}\), we wish to compare \[\rho_{x}(\mathcal{D})=\left\{y\in Z_{0}\operatorname{Hom}_{\mathcal{D}}(B,A)\mid xy-1_{B}\in B_{0}\operatorname{Hom}_{\mathcal{D}}(B,B)\right\}/B_{0}\operatorname{Hom}_{\mathcal{D}}(B,A)\] and \[\kappa_{v}(\mathcal{D})=\left\{w\in\operatorname{Hom}_{\mathcal{D}}(B,C)_{1}\mid d(w)=v\right\}/B_{1}\operatorname{Hom}_{\mathcal{D}}(B,C).\] Given \(y\in\rho_{x}(\mathcal{D})\), choose its representative \(y^{\prime}\in Z_{0}\operatorname{Hom}_{\mathcal{D}}(B,A)\) and \(z\in\operatorname{Hom}_{\mathcal{D}}(B,B)_{1}\) such that \(d(z)=1-xy^{\prime}\).
Then \(d(vz)=v(1-xy^{\prime})=v-d(ty^{\prime})\) so \[w_{z}:=vz+ty^{\prime}\mod B_{1}\operatorname{Hom}_{\mathcal{D}}(B,C)\in\kappa_{v}(\mathcal{D}).\] If \(z^{\prime}\in\operatorname{Hom}_{\mathcal{D}}(B,B)_{1}\) is another element such that \(d(z^{\prime})=1-xy^{\prime}\) then \(w_{z}-w_{z^{\prime}}=v(z-z^{\prime})\in B_{1}\operatorname{Hom}(B,C)\) so \(w_{z}\) and \(w_{z^{\prime}}\) determine the same class in \(\kappa_{v}(\mathcal{D})\). Similarly, the class of \(w_{z}\) modulo \(B_{1}\operatorname{Hom}_{\mathcal{D}}(B,C)\) does not depend on the choice of \(y^{\prime}\). The correspondence \(y\mapsto w_{z}\) defines a map \[j_{\mathcal{D}}:\rho_{x}(\mathcal{D})\longrightarrow\kappa_{v}(\mathcal{D}), \tag{6.2}\] which defines a morphism of functors \(j:\rho_{x}\rightarrow\kappa_{v}\). On representing objects, it is given by the morphism \[J:\mathcal{C}/v\to R_{x}(\mathcal{C})\] under \(\mathcal{C}\) that sends \(w\) to \(vz+ty\). Here \(w,y,z\) are the elements arising in the construction of \(\mathcal{C}/v\) and \(R_{x}(\mathcal{C})\) from \(\mathcal{C}\), and \(t\) is the morphism in \(\mathcal{C}\) fixed above. Assume now that \(\mathcal{D}\) is pretriangulated. Considering the image of (6.1) in the triangulated category \(H_{0}(\mathcal{D})\), we see that \([v]=0\) in \(H_{0}\operatorname{Hom}_{\mathcal{D}}(B,C)\) if and only if \([x]\in H_{0}\operatorname{Hom}_{\mathcal{D}}(A,B)\) is right invertible. This implies that \(\rho_{x}(\mathcal{D})=\emptyset\) if and only if \(\kappa_{v}(\mathcal{D})=\emptyset\). So assume both are nonempty. Applying the functor \(\operatorname{Hom}_{\mathcal{D}}(B,-)\) to (6.1) we obtain a short exact sequence (since \(v\) is zero in homology by our assumption): \[0\to H_{1}\operatorname{Hom}_{\mathcal{D}}(B,C)\to H_{0}\operatorname{Hom}_{\mathcal{D}}(B,A)\xrightarrow{\ x\circ-\ }H_{0}\operatorname{Hom}_{\mathcal{D}}(B,B)\to 0.\] The set \(\rho_{x}(\mathcal{D})\) is identified with the preimage of \(\operatorname{id}\in H_{0}\operatorname{Hom}(B,B)\) and so is an affine space for \(H_{1}\operatorname{Hom}(B,C)\). The set \(\kappa_{v}(\mathcal{D})\) is likewise an affine space for \(H_{1}\operatorname{Hom}(B,C)\). We claim that the map \(j_{\mathcal{D}}:\rho_{x}(\mathcal{D})\longrightarrow\kappa_{v}(\mathcal{D})\) constructed above is compatible with this affine space structure (so it is an isomorphism). To prove the claim, take \(\alpha\in H_{1}\operatorname{Hom}_{\mathcal{D}}(B,C)\); then the action of \(\alpha\) on \(y\in H_{0}\operatorname{Hom}_{\mathcal{D}}(B,A)\) has the form \(\alpha\cdot y=y+s\alpha\). Furthermore, \(x(y+s\alpha)=1+d(z+u\alpha)\) and therefore, \[j(\alpha\cdot y)=v(z+u\alpha)+t(y+s\alpha)=w_{z}+(vu+ts)\alpha=w_{z}+\alpha=\alpha\cdot w_{z}\] since by assumption \(vu+ts\) is homologous to \(1_{C}\). We have shown that \(j_{\mathcal{D}}\) is a bijection for pretriangulated dg categories \(\mathcal{D}\) under \(\mathcal{C}\). We deduce that \(J\) induces an isomorphism in the homotopy category of the undercategory of \(\mathcal{C}\) in the Morita model of the category of dg categories, i.e. the map induced on pretriangulated hulls by \(J\) is a weak equivalence in \(\mathcal{C}\downarrow\operatorname{dgCat}\). Now note that \(J\) is essentially surjective; in fact, the objects of \(\mathcal{C}/v\) and \(R_{x}(\mathcal{C})\) both coincide with those of \(\mathcal{C}\), and \(J\) is the identity on objects. Since the Yoneda functor from any dg category into its pretriangulated hull is quasi-fully faithful, we deduce that \(J\) was already an isomorphism in the homotopy category of \(\mathcal{C}\downarrow\operatorname{dgCat}\). By rotating the triangle in the theorem above, we obtain the following.
**Corollary 6.2**.: _Let \(\mathcal{C}\) be a pretriangulated dg category, and let_ \[A\xrightarrow{x}B\xrightarrow{y}C\rightsquigarrow\] _be a triangle in \(\mathcal{C}\). Then we have a weak equivalence_ \[R_{y}(\mathcal{C})\simeq L_{x}(\mathcal{C})\] _in \(\mathcal{C}\downarrow\operatorname{dgCat}\)._ Finally, we deduce an interpretation of Drinfeld's quotient [5] as a factorization into two consecutive morphism-killing constructions. Let \(\mathcal{C}\) be a pretriangulated dg category, \(B\) be an object in \(\mathcal{C}\), and denote by \(\mathcal{B}\) the full subcategory of \(\mathcal{C}\) on \(B\). We consider Drinfeld's dg quotient \(\mathcal{C}\,/\,\mathcal{B}\); recall that it is characterized by various universal properties [10]. **Corollary 6.3**.: _Given_ \[A\xrightarrow{x}B\xrightarrow{y}C\rightsquigarrow\] _a triangle in \(\mathcal{C}\), we have weak equivalences_ \[\mathcal{C}\,/\,\mathcal{B}\simeq(\mathcal{C}\,/x)/y\simeq(\mathcal{C}\,/y)/x\] _in \(\mathcal{C}\downarrow\mathrm{dgCat}\)._ Proof.: Denote by \(z:C\to\Sigma A\) the connecting morphism for the triangle. We then have \[\mathcal{C}\,/\,\mathcal{B}\simeq\mathbb{L}_{z}(\mathcal{C})\simeq L_{z}R_{z}(\mathcal{C})\simeq(\mathcal{C}\,/x)/y,\] by Lemma 6.4, the connection between two-sided and one-sided localization established in Section 5 and Theorem 6.1. The following result, used in the proof of Corollary 6.3, is unsurprising and well known; we include an argument for the reader's convenience. **Lemma 6.4**.: _Given_ \[A\xrightarrow{}B\xrightarrow{}C\xrightarrow{z}\Sigma A\] _a triangle in \(\mathcal{C}\), we have a weak equivalence_ \[\mathcal{C}\,/\,\mathcal{B}\simeq\mathbb{L}_{z}(\mathcal{C})\] _in \(\mathcal{C}\downarrow\mathrm{dgCat}\)._ Proof.: The dg categories \(\mathcal{C}\,/\,\mathcal{B}\) and \(\mathbb{L}_{z}(\mathcal{C})\) have quasi-equivalent pretriangulated hulls \([\mathcal{C}\,/\,\mathcal{B}]\) and \([\mathbb{L}_{z}(\mathcal{C})]\) since they have identical universal properties according to [10, Theorem 4.0.1]. The resulting quasi-equivalence \([\mathcal{C}\,/\,\mathcal{B}]\to[\mathbb{L}_{z}(\mathcal{C})]\) restricts to a dg functor \(\mathcal{C}\,/\,\mathcal{B}\to\mathbb{L}_{z}(\mathcal{C})\) and is a quasi-equivalence since the Yoneda embeddings \(\mathcal{C}\,/\,\mathcal{B}\to[\mathcal{C}\,/\,\mathcal{B}]\) and \(\mathbb{L}_{z}(\mathcal{C})\to[\mathbb{L}_{z}(\mathcal{C})]\) are quasi-fully faithful dg functors.
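To illustrate these constructions in the simplest case, note (as is immediate from the definitions above, not a new result) that for \(\mathcal{C}=\mathbf{k}\langle v\rangle\) itself both pushouts collapse, giving \[R_{v}(\mathbf{k}\langle v\rangle)=Q_{v}\simeq\mathbf{k}\langle v^{-}\rangle,\qquad\mathbb{L}_{v}\,\mathbf{k}\langle v\rangle=I_{v}\simeq\mathbf{k}\langle v,v^{-1}\rangle,\] the second identification being Proposition 5.3. Thus the one-sided and two-sided derived localizations of the free category on a single morphism are simply its free right-inversion and its free two-sided inversion.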
2305.08617
The Product of a Generalized Quaternion Group And a Cyclic Group
Let $X(Q)=QC$ be a group, where $Q$ is a generalized quaternion group and $C$ is a cyclic group such that $Q\cap C=1$. In this paper, $X(Q)$ will be characterized and, moreover, a complete classification of it will be given, provided $C$ is core-free. To keep the paper self-contained, a classification of the group $X(D)=DC$ is also given, where $D$ is a dihedral group and $C$ is a cyclic group such that $D\cap C=1$ and $C$ is core-free. Recall that the group $X(D)$ was recently classified in [12], based on a number of papers on skew-morphisms of dihedral groups. In this paper, a different approach from that in [12] will be used.
Shaofei Du, Hao Yu, Wenjuan Luo
2023-05-15T13:06:53Z
http://arxiv.org/abs/2305.08617v2
###### Abstract
Let \(X(Q)=QC\) be a group, where \(Q\) is a generalized quaternion group and \(C\) is a cyclic group such that \(Q\cap C=1\). In this paper, \(X(Q)\) will be characterized and, moreover, a complete classification of it will be given, provided \(C\) is core-free. To keep the paper self-contained, a classification of the group \(X(D)=DC\) is also given, where \(D\) is a dihedral group and \(C\) is a cyclic group such that \(D\cap C=1\) and \(C\) is core-free. Recall that the group \(X(D)\) was recently classified in [12], based on a number of papers on skew-morphisms of dihedral groups. In this paper, a different approach from that in [12] will be used. **The Product of a Generalized Quaternion Group** **And a Cyclic Group1** Footnote 1: Corresponding author: [email protected]. **Keywords** factorizations of groups, generalized quaternion group, dihedral group, skew-morphism, regular Cayley map **MSC(2010)** 20F19, 20B20, 05E18, 05E45. *This work is supported in part by the National Natural Science Foundation of China (12071312). Shaofei Du1, Hao Yu and Wenjuan Luo Capital Normal University, School of Mathematical Sciences, Beijing 100048, People's Republic of China
## 1 Introduction
A group \(G\) is said to be properly _factorizable_ if \(G=AB\) for two proper subgroups \(A\) and \(B\) of \(G\), while the expression \(G=AB\) is called a _factorization_ of \(G\). Furthermore, if \(A\cap B=1\), then we say that \(G\) has an _exact factorization_. Factorizations of groups naturally arise from the well-known Frattini's argument, including its version in permutation groups. One of the most famous results about factorized groups might be one of the theorems of Ito, saying that any group is metabelian whenever it is the product of two abelian subgroups (see [15]). Later, Wielandt and Kegel showed that the product of two nilpotent subgroups must be soluble (see [36] and [17]). Douglas showed that the product of two cyclic groups must be super-solvable (see [6]). The factorizations of the finite almost simple groups were determined in [28] and the factorizations of almost simple groups with a solvable factor were determined in [27]. There are many other papers related to factorizations, for instance, finite products of soluble groups, factorizations with one nilpotent factor and so on. Here we are not able to list all references; the reader may refer to the survey paper [1]. In this paper, we shall focus on the product group \(X=GC\), for a finite group \(G\) and a cyclic group \(C\) such that \(G\cap C=1\). If \(C\) is core-free, then \(X\) is also called a _skew product group_ of \(G\). Recall that skew morphisms of a group \(G\) and skew product groups \(X\) of \(G\) were introduced by Jajcay and Siran in [16]; they are related to the study of regular Cayley maps of \(G\). For reasons of length, we are not able to explain them in detail here. Recently, there have been a lot of results on skew product groups \(X\) of some particular groups \(G\).
(1) Cyclic groups: So far there exists no classification of such product groups. For partial results, see [4, 5, 8, 18, 19, 24].
(2) Elementary abelian \(p\)-groups: a global structure was characterized in [9].
(3) Finite nonabelian simple groups or finite nonabelian characteristically simple groups: these were classified in [2] and [3], respectively.
(4) Dihedral groups: Based on the efforts of several authors working on regular Cayley maps (see [4, 12, 25, 20, 21, 22, 31, 23, 33, 34, 38, 39, 40]), the final classification of skew product groups of dihedral groups was given in [12].
(5) Generalized quaternion groups: for partial results, see [13] and [26].
By \(Q\) and \(D\), we denote a generalized quaternion group and a dihedral group, respectively. Let \(X(G)=GC\) be a group, where \(G\in\{Q,\,D\}\) and \(C\) is a cyclic group such that \(G\cap C=1\). In this paper, we shall give a characterization for \(X(Q)\) and a complete classification of \(X(Q)\) provided \(C\) is core-free. In most of the above papers dealing with skew product groups of dihedral groups, the authors adopt computational techniques based on skew-morphisms. Alternatively, in this paper, we shall realize our goals by using classical group-theoretical tools and methods (solvable groups, \(p\)-groups, permutation groups, group extension theory and so on). We emphasize that we shall pay attention to the global structure of the group \(X(G)\). Since \(X(Q)\) is closely related to \(X(D)\) and we shall adopt a completely different approach, the group \(X(D)\) will be considered too, in order to keep this paper self-contained. Throughout this paper, set \(C=\langle c\rangle\) and \[\begin{array}{l}Q=\langle a,b\mid a^{2n}=1,b^{2}=a^{n},a^{b}=a^{-1}\rangle\cong Q_{4n},\,n\geq 2,\\ D=\langle a,b\mid a^{n}=b^{2}=1,a^{b}=a^{-1}\rangle\cong D_{2n},\,n\geq 2.\end{array} \tag{1}\] Let \(G\in\{Q,D\}\) and \(X=X(G)=GC=\langle a,b\rangle\langle c\rangle\). Then \(\langle a\rangle\langle c\rangle\) is not necessarily a subgroup of \(X\). Clearly, \(X\) contains a subgroup \(M\) of the biggest order such that \(\langle c\rangle\leq M\subseteq\langle a\rangle\langle c\rangle\). This subgroup \(M\) will play an important role in this paper. From now on, by \(S_{X}\) we denote the core \(\cap_{x\in X}S^{x}\) of a subgroup \(S\) of \(X\) in \(X\). There are four main theorems in this manuscript. In Theorem 1.1, the global structure of our group \(X\in\{X(Q),X(D)\}\) is characterized. **Theorem 1.1**: _Let \(G\in\{Q,\,D\}\) and \(X=G\langle c\rangle\in\{X(Q),\,X(D)\}\), where \({\rm o}(c)=m\geq 2\) and \(G\cap\langle c\rangle=1\). Let \(M\) be the subgroup of the biggest order in \(X\) such that \(\langle c\rangle\leq M\subseteq\langle a\rangle\langle c\rangle\). Then one of the items in Table 1 holds._ Clearly, \(M\) is a product of two cyclic subgroups, which has not been determined so far, as mentioned before. However, further properties of our group \(X\) are given in Theorem 1.2. More powerful properties will be obtained during the proof of Theorems 1.3 and 1.4; see Remarks 1.5 and 1.6. **Theorem 1.2**: _Let \(G\in\{Q,D\}\) and \(X\in\{X(Q),X(D)\}\), and let \(M\) be defined as above. Then we have \(\langle a^{2},c\rangle\leq C_{X}(\langle c\rangle_{X})\) and \(|X:C_{X}(\langle c\rangle_{X})|\leq 4\). Moreover, if \(\langle c\rangle_{X}=1\), then \(M_{X}\cap\langle a^{2}\rangle\lhd M_{X}\). In particular, if \(\langle c\rangle_{X}=1\) and \(M=\langle a\rangle\langle c\rangle\), then \(\langle a^{2}\rangle\lhd X\)._ In Theorem 1.3, a classification of \(X(Q)\) is given, provided that \(C\) is core-free. **Theorem 1.3**: _Let \(X=X(Q)\). Set \(R:=\{a^{2n}=c^{m}=1,\,b^{2}=a^{n},\,a^{b}=a^{-1}\}\). Suppose \(\langle c\rangle_{X}=1\).
Then \(X\) is isomorphic to one of the following groups:_ * \(X=\langle a,b,c|R,(a^{2})^{c}=a^{2r},c^{a}=a^{2s}c^{t},c^{b}=a^{u}c^{v}\rangle,\) _where_ \[r^{t-1}-1\equiv r^{v-1}-1\equiv 0(\mathrm{mod}\ n),\,t^{2}\equiv 1(\mathrm{mod} \ m),\] \[2s\sum_{l=1}^{t}r^{l}+2sr\equiv 2sr+2s\sum_{l=1}^{v}r^{l}-u\sum_{l =1}^{t}r^{l}+ur\equiv 2(1-r)(\mathrm{mod}\ 2n),\] \[2s\sum_{l=1}^{w}r^{l}\equiv u\sum_{l=1}^{w}(1-s(\sum_{l=1}^{t}r^{l }+r))^{l}\equiv 0(\mathrm{mod}\ 2n)\Leftrightarrow w\equiv 0(\mathrm{mod}\ m).\] _Moreover, if_ \(2\mid n\)_, then_ \(u(\sum_{l=0}^{v-1}r^{l}-1)\equiv 0(\mathrm{mod}\ 2n)\) _and_ \(v^{2}\equiv 1(\mathrm{mod}\ m)\)_; if_ \(2\nmid n\)_, then_ \(u\sum_{l=1}^{v}r^{l}-ur\equiv 2sr+(n-1)(1-r)(\mathrm{mod}\ 2n)\) _and_ \(v^{2}\equiv t(\mathrm{mod}\ m)\)_; if_ \(t\neq 1\)_, then_ \(u\equiv 0(\mathrm{mod}\ 2)\)_._ * \(X=\langle a,b,c|R,(a^{2})^{c^{2}}=a^{2r},(c^{2})^{a}=a^{2s}c^{2t},(c^{2})^{b} =a^{2u}c^{2},a^{c}=bc^{2w}\rangle,\) _where either_ \(w=0\) _and_ \(r=s=t=u=1\)_; or_ \[w\neq 0,\,s=u^{2}\sum_{l=0}^{w-1}r^{l}+\frac{un}{2},\,t=2wu+1,\] \[r^{2w}-1\equiv(u\sum_{l=1}^{w}r^{l}+\frac{n}{2})^{2}-r\equiv 0( \mathrm{mod}\ n),\] \[s\sum_{l=1}^{t}r^{l}+sr\equiv 2sr-u\sum_{l=1}^{t}r^{l}+ur\equiv 1-r( \mathrm{mod}\ n),\] \[2w(1+uw)\equiv nw\equiv 2w(r-1)\equiv 0(\mathrm{mod}\ \frac{m}{2}),\] \[2^{\frac{1+(-1)^{u}}{2}}\sum_{l=1}^{i}r^{l}\equiv 0(\mathrm{mod}\ n) \Leftrightarrow i\equiv 0(\mathrm{mod}\ \frac{m}{2}).\] \begin{table} \begin{tabular}{c c c c} \hline Case & \(M\) & \(M_{X}\) & \(X/M_{X}\) \\ \hline 1 & \(\langle a\rangle\langle c\rangle\) & \(\langle a\rangle\langle c\rangle\) & \(\mathbb{Z}_{2}\) \\ 2 & \(\langle a^{2}\rangle\langle c\rangle\) & \(\langle a^{2}\rangle\langle c^{2}\rangle\) & \(D_{8}\) \\ 3 & \(\langle a^{2}\rangle\langle c\rangle\) & \(\langle a^{2}\rangle\langle c^{3}\rangle\) & \(A_{4}\) \\ 4 & \(\langle a^{4}\rangle\langle c\rangle\) & \(\langle a^{4}\rangle\langle c^{3}\rangle\) & \(S_{4}\) \\ 5 & \(\langle a^{3}\rangle\langle c\rangle\) & \(\langle a^{3}\rangle\langle c^{4}\rangle\) & \(S_{4}\) \\ \hline \end{tabular} \end{table} Table 1: The forms of \(M\), \(M_{X}\) and \(X/M_{X}\) 3. \(X=\langle a,b,c|R,(a^{2})^{c}=a^{2r},(c^{3})^{a}=a^{2s}c^{3},(c^{3})^{b}=a^{2u}c^{3}, a^{c}=bc^{\frac{im}{2}},b^{c}=a^{x}b\rangle,\) _where_ \(n\equiv 2(\mbox{mod }4)\) _and either_ \(i=0\) _and_ \(r=x=u=1\)_; or_ \(i=1\)_,_ \(6\mid m\)_,_ \(r^{\frac{m}{2}}\equiv-1(\mbox{mod }n)\) _with_ \(\mbox{\rm o}(r)=m\)_,_ \(s\equiv\frac{r^{-3}-1}{2}(\mbox{mod }\frac{n}{2})\)_,_ \(u\equiv\frac{r^{3}-1}{2r^{2}}(\mbox{mod }\frac{n}{2})\) _and_ \(x\equiv-r+r^{2}+\frac{n}{2}(\mbox{mod }n)\)_._ 4. \(X=\langle a,b,c|R,(a^{2}_{1})^{c}=a^{2r}_{1},c^{a_{1}}_{1}=a^{2s}_{1}c_{1},c^{b }_{1}=a^{2u}_{1}c_{1},a^{c}_{1}=bc^{\frac{im}{2}},b^{c}=a^{x}_{1}b,c^{a}=a^{1+2 s}_{1}c^{1+\frac{jm}{3}}\rangle,\) _where either_ \(i=0\)_,_ \(r=1\)_,_ \(x=3\) _and_ \(s=u=z=d=0\)_; or_ \(i=1\)_,_ \(n\equiv 4(\mbox{mod }8),m\equiv 0(\mbox{mod }6)\)_,_ \(r^{\frac{m}{2}}\equiv-1(\mbox{mod }\frac{n}{2})\)_,_ \(\mbox{\rm o}(r)=m\)_,_ \(s\equiv\frac{r^{-3}-1}{2}(\mbox{mod }\frac{n}{4}),\)__\(u\equiv\frac{r^{3}-1}{2r^{2}}(\mbox{mod }\frac{n}{4}),\)__\(x\equiv-r+r^{2}+\frac{n}{4}(\mbox{mod }\frac{n}{2}),\)__\(1+2z\equiv\frac{1-r}{2r}(\mbox{mod }\frac{n}{2}),\)__\(j\in\{1,2\}\)_._ 5. 
\(X=\langle a,b,c|R,a^{c^{4}}=a^{r},b^{c^{4}}=a^{1-r}b,(a^{3})^{c^{\frac{m}{4}}}=a^{-3},a^{c^{\frac{m}{4}}}= bc^{\frac{3m}{4}}\rangle\)_, where_ \(m\equiv 4(\mbox{mod }8)\) _and_ \(r\) _is of order_ \(\frac{m}{4}\) _in_ \(\mathbb{Z}_{2n}^{*}.\)__ _Moreover, in the families of groups (1)-(5), for any given parameters satisfying the equations, there exists \(X=X(Q)\)._ In Theorem 1.4, a classification of \(X(D)\) is given, provided that \(C\) is core-free. Remind that our presentations for \(X(D)\) are different form that in [12] but they are essentially isomorphic. **Theorem 1.4**: _Let \(X=X(D)\). Set \(R:=\{a^{n}=b^{2}=c^{m}=1,\,a^{b}=a^{-1}\}\). Suppose \(\langle c\rangle_{X}=1.\) Then \(X\) is isomorphic to one of the following groups:_ 1. \(X=\langle a,b,c|R,(a^{2})^{c}=a^{2r},c^{a}=a^{2s}c^{t},c^{b}=a^{u}c^{v}\rangle,\) _where_ \[\begin{array}{l}2(r^{t-1}-1)\equiv 2(r^{v-1}-1)\equiv u(\sum_{l=0}^{v-1}r^{l}-1) \equiv 0(\mbox{mod }n),\,t^{2}\equiv v^{2}\equiv 1(\mbox{mod }m)\\ 2s\sum_{l=1}^{t}r^{l}+2sr\equiv 2sr+2s\sum_{l=1}^{v}r^{l}-u\sum_{l=1}^{t}r^{l}+ ur\equiv 2(1-r)(\mbox{mod }n),\\ \mbox{if }t\neq 1,\,\mbox{then}\,u\equiv 0(\mbox{mod }2),\\ 2s\sum_{l=1}^{w}r^{l}\equiv u\sum_{l=1}^{w}(1-s(\sum_{l=1}^{t}r^{l}+r))^{l} \equiv 0(\mbox{mod }n)\Leftrightarrow w\equiv 0(\mbox{mod }m).\end{array}\] 2. \(X=\langle a,b,c|R,(a^{2})^{c^{2}}=a^{2r},(c^{2})^{b}=a^{2s}c^{2},(c^{2})^{a}= a^{2u}c^{2v},a^{c}=bc^{2w}\rangle,\) _where either_ \(w=s=u=0\) _and_ \(r=t=1\)_; or_ \[\begin{array}{l}w\neq 0,\,s=u^{2}\sum_{l=0}^{w-1}r^{l},\,t=1+2wu,\\ nw\equiv 2w(r-1)\equiv 2w(1+uw)\equiv 0(\mbox{mod }\frac{m}{2}),\\ r^{2w}-1\equiv(u\sum_{l=1}^{w}r^{l})^{2}-r\equiv(r^{w}+1)(1+s\sum_{l=0}^{w-1}r ^{l})\equiv 0(\mbox{mod }\frac{n}{2}),\\ \sum_{l=1}^{i}r^{l}\equiv 0(\mbox{mod }\frac{n}{2})\Leftrightarrow i\equiv 0( \mbox{mod }\frac{m}{2}).\end{array}\] 3. \(X=\langle a,b,c|R,a^{c^{3}}=a^{r},(c^{3})^{b}=a^{2u}c^{3},a^{c}=bc^{\frac{im}{2 }},b^{c}=a^{x}b\rangle,\) _where_ \(n\equiv 2(\mbox{mod }4)\) _and either_ \(i=u=0\) _and_ \(r=x=1\)_; or_ \(i=1\)_,_ \(6\mid m\)_,_ \(l^{\frac{m}{2}}\equiv-1(\mbox{mod }\frac{n}{2})\) _with_ \(\mbox{\rm o}(l)=m\)_,_ \(r=l^{3}\)_,_ \(u=\frac{l^{3}-1}{2l^{2}}\) _and_ \(x\equiv-l+l^{2}+\frac{n}{2}(\mbox{mod }n)\)_._ 4. \(X=\langle a,b,c|R,(a^{2})^{c^{3}}=a^{2r},(c^{3})^{b}=a^{\frac{2(l^{3}-1)}{l^{2}}}c^{ 3},(a^{2})^{c}=bc^{\frac{im}{2}},b^{c}=a^{2(-l+l^{2}+\frac{n}{4})}b,c^{a}=a^{2+4z} c^{2+3d}\rangle,\)_where either \(i=z=d=0\) and \(l=1\); or \(i=1\), \(n\equiv 4(\mathrm{mod}\ 8)\), \(m\equiv 0(\mathrm{mod}\ 6)\), \(l^{\frac{m}{2}}\equiv-1(\mathrm{mod}\ \frac{n}{4})\) with \(\mathrm{o}(l)=m\), \(r=l^{3}\), \(z=\frac{1-3l}{4l}\), \(1+3d\equiv 0(\mathrm{mod}\ \frac{m}{3})\) and \(\sum_{i=1}^{j}r^{i}\equiv 0(\mathrm{mod}\ \frac{n}{2})\Leftrightarrow j\equiv 0( \mathrm{mod}\ \frac{m}{3})\)_. 5. \(X=\langle a,b,c|R,a^{c^{4}}=a^{r},b^{c^{4}}=a^{1-r}b,(a^{3})^{c^{\frac{m}{4}}} =a^{-3},a^{c^{\frac{m}{4}}}=bc^{\frac{3m}{4}}\rangle,\)_where \(m\equiv 4(\mathrm{mod}\ 8)\) and \(r\) is of order \(\frac{m}{4}\) in \(\mathbb{Z}_{n}^{*}\)_._ _Moreover, in the families of groups (1)-(5), for any given parameters satisfying the equations, there exists \(X=X(D)\)._ **Remark 1.5**: _From Theorems 1.3 and 1.4, one may observe that \(\langle a^{4}\rangle\lhd X\) for groups in (3) and (4); and \(\langle a^{3}\rangle\lhd X\) for groups in (5)._ **Remark 1.6**: _Checking Theorem 1.3, we know that \(\langle a^{n}\rangle\lhd X(Q)\) for all cases (2) and (3) and some cases in (1). 
Moreover, corresponding to \(D\cong Q/\langle a^{n}\rangle\), we have \(X(D)=X(Q)/\langle a^{n},c_{1}\rangle\), where \(\langle a^{n}\rangle\lhd X\) and \(\langle a^{n},c_{1}\rangle=\langle a^{n},c\rangle_{X(Q)}\)._ **Remark 1.7**: _The group \(X\) where \(\langle c\rangle_{X}\neq 1\) is a cyclic extension of \(X/\langle c\rangle_{X}\), which is given in Theorem 1.3 and Theorem 1.4, respectively. Furthermore, by Theorem 1.2, the subgroup \(C_{X}(\langle c\rangle_{X})\) of \(X\) is of index at most 4. So one may classify \(X\) by using Theorems 1.3 and 1.4. This needs complicated computations and we cannot do it in this paper._ **Remark 1.8**: _One may determine regular Cayley maps of dihedral groups by Theorem 1.4 (which were done in [23] via skew-morphism computations) and of generalized quaternion groups by Theorem 1.3._ After this introductory section, some preliminary results will be given in Section 2, and Theorems 1.1-1.4 will be proved in Sections 3-6, respectively.
## 2 Preliminaries
In this section, the notation and elementary facts used in this paper are collected.
### Notation
In this paper, all the groups are supposed to be finite. We set up the notation below, where \(G\) and \(H\) are groups, \(M\) is a subgroup of \(G\), \(n\) is a positive integer and \(p\) is a prime number.
\(|G|\) and \(\mathrm{o}(g)\): the order of \(G\) and of an element \(g\) in \(G\), resp.;
\(H\leq G\) and \(H<G\): \(H\) is a subgroup of \(G\) and \(H\) is a proper subgroup of \(G\), resp.;
\([G:H]\): the set of (right) cosets of a subgroup \(H\) in \(G\);
\(H\lhd G\) and \(H\,\mbox{char}\,\ G\): \(H\) is a normal and a characteristic subgroup of \(G\), resp.;
\(G^{\prime}\) and \(Z(G)\): the derived subgroup and the center of \(G\), resp.;
\(M_{G}\): the core of \(M\) in \(G\), which is the maximal normal subgroup of \(G\) contained in \(M\);
\(G\rtimes H\): a semidirect product of \(G\) by \(H\), in which \(G\) is normal;
\(G.H\): an extension of \(G\) by \(H\), where \(G\) is normal;
\(C_{G}(M)\): the centralizer of \(M\) in \(G\);
\(N_{G}(M)\): the normalizer of \(M\) in \(G\);
\(\mbox{Syl}_{p}(G)\): the set of all Sylow \(p\)-subgroups of \(G\);
\([a,b]:=a^{-1}b^{-1}ab\), the commutator of \(a\) and \(b\) in \(G\);
\(\Omega_{1}(G)\): the subgroup \(\langle g\in G\mid g^{p}=1\rangle\) of \(G\), where \(G\) is a \(p\)-group;
\(\mathcal{O}_{n}(G)\): the subgroup \(\langle g^{p^{n}}\mid g\in G\rangle\) of \(G\), where \(G\) is a \(p\)-group;
\(S_{n}\): the symmetric group of degree \(n\) (naturally acting on \(\{1,2,\cdots,n\}\));
\(A_{n}\): the alternating group of degree \(n\) (naturally acting on \(\{1,2,\cdots,n\}\));
\(\mbox{GF}(q)\): the finite field of \(q\) elements;
\(\mbox{AGL}(n,p)\): the affine group on \(\mbox{GF}(p)^{n}\).
### Elementary facts
**Proposition 2.1**: _[_30_, Theorem 1]_ _The finite group \(G=AB\) is solvable, where both \(A\) and \(B\) are subgroups containing cyclic subgroups of index at most 2._ Recall that a group \(H\) is said to be a _Burnside group_ if every permutation group containing a regular subgroup isomorphic to \(H\) is either 2-transitive or imprimitive. The following results are well-known. **Proposition 2.2**: _[_35_, Theorem 25.3 and Theorem 25.6]_ _Every cyclic group of composite order is a Burnside group. Every dihedral group is a Burnside group._ **Proposition 2.3**: _[_7_, Lemma 4.1]_ _Let \(n\geq 2\) be an integer and \(p\) a prime.
Then \(\mbox{AGL}(n,p)\) contains an element of order \(p^{n}\) if and only if \((n,p)=(2,2)\) and \(\mbox{AGL}(2,2)\cong S_{4}\)._ Recall that our group \(X(D)=DC\), where \(D\) is a dihedral group of order \(2n\) and \(C\) is a cyclic group of order \(m\) such that \(D\cap C=1\), where \(n,m\geq 2\). Then we have the following results. **Lemma 2.4**: _Suppose that \(X(D)\) is solvable and has a faithful 2-transitive permutation representation relative to a subgroup \(M\) whose index is a composite number. Then \(X(D)\leq\mathrm{AGL}(k,p)\). Moreover,_ _(1) if \(X(D)\) contains an element of order \(p^{k}\), then \(X(D)=S_{4}\);_ _(2) if the hypothesis holds for \(M=C\) where \(C\) is core-free, then \(X(D)=A_{4}\)._ **Proof** Set \(\Omega=[X(D):M]\). Let \(N\) be a minimal normal subgroup of \(X(D)\). Since \(X(D)\) is solvable, \(N\cong\mathbb{Z}_{p}^{k}\) for some prime \(p\) and integer \(k\). Since \(X(D)\) is 2-transitive, it is primitive, which implies that \(N\) is transitive on \(\Omega\) and so is regular on \(\Omega\). Therefore, \(X(D)=N\rtimes X(D)_{\alpha}\leq\mathrm{AGL}(k,p)\), for some \(\alpha\in\Omega\). Since \(X(D)\) is 2-transitive and \(|\Omega|=p^{k}\), we know \(|X(D)_{\alpha}|\geq p^{k}-1\) for any \(\alpha\in\Omega\). (i) Suppose that \(X(D)\) contains an element of order \(p^{k}\). By Proposition 2.3, we get \((k,p)=(2,2)\) so that \(X(D)=S_{4}\), noting that \(|\Omega|\) is not a prime. (ii) Let \(M=C\) where \(C\) is core-free. Set \(C=\langle c\rangle\) and \(o(c)=m\). Then \(X(D)=N\rtimes\langle c\rangle\), where \(\langle c\rangle\) is a Singer subgroup of \(\mathrm{GL}(k,p)\). Then both \(D\) and \(N\) are regular subgroups, that is \(|D|=2n=|\Omega|=p^{k}\), which implies \(p=2\). Now, we have \(|X(D)|=2^{k}(2^{k}-1)=2n\cdot m=p^{k}\cdot m\) and so \(m=2^{k}-1\). Since both \(N\) and \(D\) are Sylow 2-subgroups of \(X(D)\) and \(N\lhd X(D)\), we get \(D=N\), and so \(D\cong\mathbb{Z}_{2}^{2}\). Therefore, \(p=2\) and \(X(D)=A_{4}\). \(\square\) **Proposition 2.5**: _[_10_, Satz 1]_ _Let \(N\leq M\leq G\) be such that \((|N|,|G:M|)=1\) and \(N\) is an abelian normal subgroup of \(G\). If \(N\) has a complement in \(M\), then \(N\) also has a complement in \(G\)._ The Schur multiplier \(M(G)\) of a group \(G\) is defined as the second integral homology group \(H_{2}(G;\mathbb{Z})\), where \(\mathbb{Z}\) is a trivial \(G\)-module. It plays an important role in the theory of central extensions of groups. The following result is well-known. **Proposition 2.6**: _[_32_, (2.21) of page 301]_ _The Schur multiplier \(M(S_{n})\) of \(S_{n}\) is a cyclic group of order 2 if \(n\geq 4\) and of order 1 for \(n\leq 3\)._ **Proposition 2.7**: _[_14_, Theorem 4.5]_ _Let \(H\) be a subgroup of \(G\). Then \(N_{G}(H)/C_{G}(H)\) is isomorphic to a subgroup of \(\mathrm{Aut}\,(H)\)._ **Proposition 2.8**: _[_29_, Theorem]_ _If \(G\) is a transitive permutation group of degree \(n\) with a cyclic point-stabilizer, then \(|G|\leq n(n-1)\)._ **Proposition 2.9**: _[_15_, Satz 1 and Satz 2]_ _Let \(G=AB\) be a group, where both \(A\) and \(B\) are abelian subgroups of \(G\). Then_ _(1) \(G\) is meta-abelian, that is, \(G^{\prime}\) is abelian;_ _(2) if \(G\neq 1\), then \(A\) or \(B\) contains a normal subgroup \(N\neq 1\) of \(G\)._ **Proposition 2.10**: _[_14_, Theorem 11.5]_ _Let \(G=\langle a\rangle\langle b\rangle\) be a group. If \(|\langle a\rangle|\leq|\langle b\rangle|\), then \(\langle b\rangle_{G}\neq 1\).
If both \(\langle a\rangle\) and \(\langle b\rangle\) are \(p\)-groups where \(p\) is an odd prime, then \(G\) is metacyclic._ **Proposition 2.11**: _[_37_, Corollary 1.3.3]_ _Let \(G=AB\) be a group, where both \(A\) and \(B\) are subgroups of \(G\), and let \(A_{p}\) and \(B_{p}\) be Sylow \(p\)-subgroups of \(A\) and \(B\) respectively, for some prime \(p\). Then \(A_{p}B_{p}\) is a Sylow \(p\)-subgroup of \(G\)._ **Proposition 2.12**: _[_11_, Theorem 12.5.1]_ _Let \(p\) be an odd prime. Then every finite \(p\)-group \(G\) containing a cyclic maximal subgroup is isomorphic to (1) \(\mathbb{Z}_{p^{n}}\); (2) \(\langle a,b\mid a^{p^{n-1}}=b^{p}=1,\,[a,b]=1\rangle,\,n\geq 2\); or (3) \(\langle a,b\mid a^{p^{n-1}}=b^{p}=1,\,[a,b]=a^{p^{n-2}}\rangle,\,n\geq 3\)._
## 3 Proof of Theorem 1.1
To prove Theorem 1.1, let \(G=D\) or \(Q\), defined in Eq. (1). Let \(X=G\langle c\rangle\in\{X(Q),X(D)\}\). Let \(M\) be the subgroup of the biggest order in \(X\) such that \(\langle c\rangle\leq M\subseteq\langle a\rangle\langle c\rangle\), and set \(M_{X}=\cap_{x\in X}M^{x}\). By Proposition 2.1, \(X\) is solvable. Theorem 1.1 will be proved by dealing with \(G=D\) and \(G=Q\) separately, in Lemmas 3.1 and 3.3. **Lemma 3.1**: _Theorem 1.1 holds, provided \(G=D\) and \(X=X(D)\)._ **Proof** Let \(G=D\) so that \(X=X(D)\). Recall that \(m,n\geq 2\), so \(|X|\) is even and greater than \(7\). The lemma is proved by induction on \(|X|\). All the cases when \(|X|\leq 24\) are listed below, which implies that the conclusion holds:
\(M=\langle a\rangle\langle c\rangle\): \(a^{n}=c^{m}=1\), where \(4\leq nm\leq 12\);
\(M=\langle a^{2}\rangle\langle c\rangle\): \(a^{2}=c^{m}=1\), where \(m\in\{2,4,6\}\), \(M_{X}=\langle a^{2}\rangle\langle c^{2}\rangle\), \(X/M_{X}=D_{8}\);
\(M=\langle a^{2}\rangle\langle c\rangle\): \(a^{2}=c^{m}=1\), where \(m\in\{3,6\}\), \(M_{X}=\langle a^{2}\rangle\langle c^{3}\rangle\), \(X/M_{X}=A_{4}\);
\(M=\langle a^{3}\rangle\langle c\rangle\): \(a^{3}=c^{4}=1\), \(M_{X}=\langle a^{3}\rangle\langle c^{4}\rangle=1\), \(X/M_{X}=S_{4}\);
\(M=\langle a^{4}\rangle\langle c\rangle\): \(a^{4}=c^{3}=1\), \(M_{X}=\langle a^{4}\rangle\langle c^{3}\rangle=1\), \(X/M_{X}=S_{4}\).
Assume that the result is true for all groups of order less than \(|X|\), and that \(|X|>24\). Then we shall carry out the proof by the following three steps. _Step 1: The case \(M_{X}\neq 1\)._ Suppose that \(M_{X}\neq 1\). Set \(M=\langle a^{i}\rangle\langle c\rangle\) for some \(i\). Since \(a^{i}\in\cap_{l_{2},l_{3}}M^{a^{l_{2}}b^{l_{3}}}=\cap_{l_{1},l_{2},l_{3}}M^{c^{l_{1}}a^{l_{2}}b^{l_{3}}}=M_{X}\), we get that \(M_{X}=M_{X}\cap(\langle a^{i}\rangle\langle c\rangle)=\langle a^{i}\rangle\langle c^{r}\rangle\) for some \(r\). Set \(\overline{X}:=X/M_{X}=\overline{GC}\). Then we claim that \(\overline{G}\cap\overline{C}=1\). In fact, if \(\overline{g}=\overline{c}^{\prime}\in\overline{G}\cap\overline{C}\) for some \(g\in G\) and \(c^{\prime}\in C\), then \(gc^{\prime-1}\in M_{X}\), that is, \(g\in\langle a^{i}\rangle\) and \(c^{\prime}\in\langle c^{r}\rangle\), which implies \(\overline{g}=\overline{c}^{\prime}=1\). Therefore, \(\overline{G}\cap\overline{C}=1\). Let \(M_{0}/M_{X}=\langle\overline{a}^{j}\rangle\langle\overline{c}\rangle\) be the biggest subgroup of \(\overline{X}\) containing \(\langle\overline{c}\rangle\) and contained in the subset \(\langle\overline{a}\rangle\langle\overline{c}\rangle\). Then \(\langle\overline{a}^{j}\rangle\langle\overline{c}\rangle=\langle\overline{c}\rangle\langle\overline{a}^{j}\rangle\).
Since \[\langle a^{j}\rangle\langle c\rangle M_{X}=\langle a^{j}\rangle M_{X}\langle c\rangle=\langle a^{j}\rangle\langle a^{i}\rangle\langle c\rangle\quad\text{and}\quad\langle c\rangle\langle a^{j}\rangle M_{X}=\langle c\rangle M_{X}\langle a^{j}\rangle=\langle c\rangle\langle a^{i}\rangle\langle a^{j}\rangle,\] we get \(\langle a^{i},a^{j}\rangle\langle c\rangle\leq X\). By the maximality of \(M\), we have \(\langle a^{i},a^{j}\rangle=\langle a^{i}\rangle\) so that \(M_{0}=M\). Using the induction hypothesis on \(\overline{X}=\overline{GC}\), noting that \(M_{0}/M_{X}=M/M_{X}\), which is core-free in \(\overline{X}\), we get that \(\overline{X}\) is isomorphic to \(\mathbb{Z}_{2}\), \(D_{8},\,A_{4}\) or \(S_{4}\), and correspondingly, \(o(\overline{a})=k\), where \(k\in\{1,2,3,4\}\), and so \(a^{k}\in M_{X}\). Since \(M=\langle a^{i}\rangle\langle c\rangle\) and \(M_{X}=\langle a^{i}\rangle\langle c^{r}\rangle\), we know that \(\langle a^{i}\rangle=\langle a^{k}\rangle\), which implies that \(i\in\{1,2,3,4\}\). Clearly, if \(\overline{X}=\mathbb{Z}_{2}\), then \(M_{X}=M\); if \(\overline{X}=D_{8}\) and \(\mathrm{o}(\overline{c})=2\), then \(M_{X}=\langle a^{2}\rangle\langle c^{2}\rangle\); if \(\overline{X}=A_{4}\) and \(\mathrm{o}(\overline{c})=3\), then \(M_{X}=\langle a^{2}\rangle\langle c^{3}\rangle\); if \(\overline{X}=S_{4}\) and \(\mathrm{o}(\overline{c})=4\), then \(M_{X}=\langle a^{3}\rangle\langle c^{4}\rangle\); and if \(\overline{X}=S_{4}\) and \(\mathrm{o}(\overline{c})=3\), then \(M_{X}=\langle a^{4}\rangle\langle c^{3}\rangle\). _Step 2: Show that if \(M_{X}=1\) then \(G\in\{D_{2kp}\mid k=2,3,4\}\)._ Suppose that \(M_{X}=1\). Since \(\langle a\rangle_{X},\langle c\rangle_{X}\leq M_{X}\), we get \(\langle a\rangle_{X}=\langle c\rangle_{X}=1\). Now we show that \(G_{X}=1\). Suppose to the contrary that \(G_{X}\neq 1\). Note that \(G_{X}\not\leq\langle a\rangle\), since otherwise \(G_{X}\leq\langle a\rangle_{X}=1\). If \(|G_{X}|>4\), then by \(G=\langle a,b\rangle\cong D_{2n}\) we get \(\langle a\rangle_{X}\neq 1\), a contradiction. So \(|G_{X}|\leq 4\). Since \(G_{X}\lhd G\cong D_{2n}\) and \(G_{X}\not\leq\langle a\rangle\), we know that \(|G:G_{X}|\leq 2\), which implies \(|G|\leq 8\), that is \(G\cong D_{4}\) or \(D_{8}\). A direct check shows that \(X\) is \(D_{8}\), \(A_{4}\) or \(S_{4}\). All cases are impossible, as \(|X|>24\). In what follows, we consider the faithful (right multiplication) action of \(X\) on the set of right cosets \(\Omega:=[X:\langle c\rangle]\). Suppose that \(X\) is primitive. By Proposition 2.2, every dihedral group is a Burnside group, which implies that \(X\) is \(2\)-transitive. Since \(X\) has a cyclic core-free point-stabilizer \(\langle c\rangle\), by Lemma 2.4.(2) we get \(G=D_{4}\) and \(X=A_{4}\), contradicting \(|X|>24\). Suppose that \(X\) is imprimitive. Pick a maximal subgroup \(H\) of \(X\) which contains \(\langle c\rangle\) properly. Then \(H=H\cap X=(H\cap G)\langle c\rangle=\langle a^{s},b_{1}\rangle\langle c\rangle<X\), for some \(b_{1}\in G\setminus\langle a\rangle\) and some \(s\). Using the same argument as that in Step 1 (viewing \(H\) as \(G\)), one has \(a^{s}\in H_{X}\). Set \(\overline{X}=X/H_{X}\). Consider the faithful primitive action of \(\overline{X}\) on \(\Omega_{1}:=[\overline{X}:\overline{H}]\), with a cyclic regular subgroup \(\langle\overline{a}\rangle\), where \(|\Omega_{1}|=s\).
Since, by Proposition 2.2, a cyclic group of composite order is a Burnside group, we know that either \(s\) is a prime \(p\) so that \(\overline{X}\leq\mathrm{AGL}(1,p)\), or \(s\) is of composite order so that \(\overline{X}\) is \(2\)-transitive. In what follows, we consider these two cases separately. Case (1): \(a^{s}=1\). In this case, \(H=\langle c\rangle\rtimes\langle b\rangle\) and \(X=\langle c,b\rangle.\langle a\rangle\). Then we have two cases. Suppose first that \(s\) is of composite order so that \(\overline{X}\) is \(2\)-transitive. By Proposition 2.3, \(\overline{X}\leq\mathrm{AGL}(l,q)\) for some prime \(q\), which contains a cyclic regular subgroup \(\langle\overline{a}\rangle\) of order \(q^{l}\). By Lemma 2.4.(1), \(\overline{X}\cong S_{4}\) and \(\mathrm{o}(\overline{a})=4\) so that \(\mathrm{o}(a)=4\) (as \(H_{X}\leq\langle b,c\rangle\)), which in turn implies \(G=D_{8}\). In this case, a check by Magma shows that either \(\mathrm{o}(c)\in\{2,3\}\) and \(|X|\leq 24\); or \(\mathrm{o}(c)=4\) and \(|X|=32\) but \(G_{X}\neq 1\), contradicting \(G_{X}=1\). Suppose now that \(s\) is a prime \(p\) so that \(\overline{X}\leq\operatorname{AGL}(1,p)\). Then \(\operatorname{o}(a)=p\) so that \(G\cong D_{2p}\), where \(p\geq 5\), as \(|X|>24\). Consider the action of \(X\) on the set of blocks of length \(2\) on \(\Omega=[X:\langle c\rangle]\), with the kernel, say \(K\). Then \(K\nleq\langle c\rangle\) (as \(\langle c\rangle_{X}=1\)) so that \(K\) interchanges the two points \(\langle c\rangle\) and \(\langle c\rangle b\), which implies \(|K/K\cap\langle c\rangle|=2\). Since \(K\cap\langle c\rangle\) is cyclic and \(K\cap\langle c\rangle\) fixes setwise each block of length \(2\), we get \(|K\cap\langle c\rangle|=2\). Therefore, \(|K|\leq 4\). Since \(K\rtimes\langle a\rangle\lhd X\) and \(p\geq 5\), we have \(K\rtimes\langle a\rangle=K\times\langle a\rangle\) so that \(\langle a\rangle\operatorname{char}\left(K\times\langle a\rangle\right)\lhd X\), contradicting \(G_{X}=1\). Case (2): \(a^{s}\neq 1\). Firstly, we show that \(s=p\) is a prime. To do that, we consider the group \(X/H_{X}\). Since \(a^{s}\in H_{X}\), we get \(H_{X}\neq 1\) and of course \(H_{X}\nleq\langle c\rangle\). Suppose that \(\langle a^{j}\rangle\langle c\rangle\leq H\). Then \(a^{j}\in M\). Using the same argument as that in the first line of Step 1, we get \(a^{j}\in M_{X}=1\). Therefore, there exists an \(l\) such that \(bc^{l}\in H_{X}\), which implies \(H/H_{X}=\langle cH_{X}\rangle\), a cyclic group, so that \(X/H_{X}=(\langle a\rangle H_{X}/H_{X})(H/H_{X})\), a product of two cyclic subgroups, which cannot be isomorphic to \(S_{4}\). Suppose that \(X/H_{X}\) is \(2\)-transitive on \(\Omega_{1}=[X/H_{X}:H/H_{X}]\), with a cyclic regular subgroup. By Lemma 2.4.(1), \(X/H_{X}\cong S_{4}\), a contradiction. Therefore, \(s=p\) is a prime and \(X/H_{X}\leq\operatorname{AGL}(1,p)\). Secondly, we consider the group \(\overline{H}:=H/\langle c\rangle_{H}=\overline{\langle c\rangle\langle a^{p},b\rangle}\), taking into account that \(s=p\) is a prime. Then \(\langle\overline{c}\rangle_{\overline{H}}=1\) and \(o(\overline{a}^{p})=o(a^{p})\).
Since \(|\overline{H}|\leq|X|\), by the induction hypothesis on \(\overline{H}\), we know is \(H_{0}/\langle c\rangle_{H}=\langle\overline{a}^{pk}\rangle\langle\overline{c}\rangle\), for one of \(k\) in \(\{2,3,4\}\), which implies \(\langle a^{pk}\rangle\langle c\rangle\langle c\rangle_{H}=\langle c\rangle \langle a^{pk}\rangle\langle c\rangle_{H}=\langle c\rangle\langle a^{pk}\rangle\), giving \(\langle a^{pk}\rangle\langle c\rangle\leq H\leq X\). Therefore, we get \(a^{pk}\in M_{X}\). Since \(M_{X}=1\) and \(a^{p}=a^{s}\neq 1\), we have \[a^{pk}=1,\,\text{for\,one\,of}\,k\in\{2,3,4\}.\] Therefore, only the following three groups are remaining: \(G=D_{2kp}\), where \(k\in\{2,3,4\}\). _Step 3: Show that \(G\) cannot be \(D_{2kp}\), where \(k\in\{2,3,4\}\), provided \(M_{X}=1\)_ Suppose that \(G\cong D_{2kp}\), \(k\in\{2,3,4\}\), reminding that \(H=\langle a^{p},b\rangle\langle c\rangle\), \(M_{X}=1\) and \(X\) has blocks of length \(2k\). Moreover, \(\langle a^{p}\rangle_{X}=1\) and there exists no nontrivial element \(a^{j}\in H\) such that that \(\langle a^{j}\rangle\langle c\rangle\leq H\). Here, we only give the proof for the case \(k=4\), that is \(G\cong D_{8p}\). For \(k=2\) or \(3\), we have the same arguments but more easer. Let \(G=D_{8p}\). If \(p=2\) or \(3\), then \(G\cong D_{16}\) or \(D_{24}\). These small cases are directly excluded by Magma. So assume \(p\geq 5\). Set \(a_{1}=a^{p}\) and \(a_{2}=a^{4}\) so that \(H=\langle a_{1},b\rangle\langle c\rangle\), where \(\operatorname{o}(a_{1})=4\). Set \(C_{0}=\langle c\rangle_{H}\) and \(K=H_{X}\). Then \(H\) contains an element \(a_{1}\) of order \(4\) having two orbits of length \(4\) on each block of length \(8\), where \(a_{1}\leq K\). Consider the action of \(\overline{H}:=H/C_{0}=\langle\overline{a}_{1},\overline{b}\rangle\langle \overline{c}\rangle\) on the block containing the point \(\langle c\rangle\), noting that \(\langle\overline{c}\rangle\) is core-free. Remind that \(\langle a_{1}\rangle_{X}=1\). So \(\langle\overline{a}_{1},\overline{b}\rangle\cong D_{8}\) and \(\overline{H}\cong S_{4}\). Moreover, we have \(N_{\overline{H}}(\langle\overline{c}\rangle)=\langle\overline{c}\rangle \rtimes\langle\overline{b}\rangle\cong D_{6}\), by rechoosing \(b\) in \(\langle a_{1},b\rangle\). Therefore \(L:=\langle c\rangle\langle b\rangle\leq X\). Now we turn to consider the imprimitive action of \(X\) on \([X:L]\), which is of degree \(4p\). Let \(K\cap\langle c\rangle=\langle c_{1}\rangle\). Then every orbits of \(\langle c_{1}\rangle\) on \([X:L]\) is of length \(4\). Observing the cycle decomposition of \(c_{1}X_{L}\in X/X_{L}\) on \([X:L]\), we know that \(k_{1}:={\rm o}(c_{1}X_{L})\mid 12\). Therefore, \(c_{1}^{k_{1}}\) fixes pointwise \([X:L]\), which implies \(c_{1}^{2k_{1}}\) fixes pointwise \([X:\langle c\rangle]\). Therefore, \(c_{1}^{2k_{1}}=1\) (as \(\langle c\rangle_{X}=1\)), that is \(|K\cap\langle c\rangle|\mid 2k_{1}\) and in particular, \(|K\cap C_{0}|\mid 2k_{1}\). Moreover, since \(\langle a_{2}K\rangle\lhd X/K\cong\mathbb{Z}_{p}\rtimes\mathbb{Z}_{s}\) for some \(s\mid(p-1)\), we know that \(K\rtimes\langle a_{2}\rangle\lhd X\). Then \(|K\cap C_{0}|\mid 24\), as \(k_{1}\mid 12\). Also, \(K/(K\cap C_{0})\cong KC_{0}/C_{0}\lhd H/C_{0}\cong S_{4}\). 
Since \(\overline{a}_{1}\in KC_{0}/C_{0}\) where \({\rm o}(\overline{a}_{1})=4\), and every normal subgroup of \(S_{4}\) containing an element of order \(4\) must contain \(A_{4}\), we know that \(K/(K\cap C_{0})\) contains a characteristic subgroup \(K_{1}/(K\cap C_{0})\cong A_{4}\). Suppose that \(K\cap C_{0}\not\leqslant Z(K_{1})\). Then, since \(K_{1}/C_{K_{1}}(K\cap C_{0})\) is a nontrivial quotient of \(A_{4}\), we have \(3\mid|K_{1}/C_{K_{1}}(K\cap C_{0})|\). However, \(K_{1}/C_{K_{1}}(K\cap C_{0})\leq{\rm Aut}\,(K\cap C_{0})\), which does not contain an element of order \(3\), noting that \(K\cap C_{0}\) is isomorphic to a subgroup of \(\mathbb{Z}_{3}\times\mathbb{Z}_{8}\), by considering the cycle-decomposition of the generator of \(K\cap C_{0}\). Therefore, \(C_{K_{1}}(K\cap C_{0})=K_{1}\), that is \((K\cap C_{0})\leq Z(K_{1})\). Since \(K_{1}/(K\cap C_{0})\cong A_{4}\) and \(Z(A_{4})=1\), we get \(K\cap C_{0}=Z(K_{1})\,{\rm char}\,K_{1}\lhd X\). Therefore, \(K\cap C_{0}=1\) (as \(\langle c\rangle_{X}=1\)) so that \(K\cong A_{4}\) or \(S_{4}\), which implies \(\langle a_{2}\rangle\,{\rm char}\,(K\times\langle a_{2}\rangle)\lhd X\), contradicting \(G_{X}=1\) again. \(\Box\) To handle \(X(Q)\), we need the following result. **Lemma 3.2**: _Suppose that \(\langle c\rangle_{X}=1\) and \(X=X(G)\) where either \(G=D\) and \(M=\langle a\rangle\langle c\rangle\); or \(G=Q\). Then \(\langle a\rangle_{X}\neq 1\)._ **Proof** Since \(\langle c\rangle_{X}=1\), by Proposition 2.8, we have \(m<|G|\). So \(S:=G\cap G^{c}\neq 1\); otherwise \(|X|\geq|G||G^{c}|=|G|^{2}>|G|\cdot m=|X|\), a contradiction. (1) Suppose that \(G=Q\). Take a subgroup \(T\) of \(S\) of prime order \(p\). Since \(o(a^{j}b)=4\) for any \(j\), we know \(T\leq\langle a\rangle\). Since \(G\), and hence \(G^{c}\), has a unique subgroup of order \(p\), and \(T\leq S\leq G^{c}\), we get \(T^{c}=T\), giving \(T\lhd X\) and so \(\langle a\rangle_{X}\neq 1\), as desired. (2) Suppose that \(G=D\). Let \(M=\langle a\rangle\langle c\rangle\), where \({\rm o}(a)=n\) and \({\rm o}(c)=m\). If \(n\geq m\), then by Proposition 2.10, \(\langle a\rangle_{M}\neq 1\) and then \(\langle a\rangle_{X}\neq 1\). Suppose that \(n+1\leq m\). Since \(\langle c\rangle_{M}\neq 1\), we take \(z:=c^{\frac{m}{p}}\in\langle c\rangle_{M}\) for a suitable prime \(p\). Since \(\langle c\rangle_{X}=1\), we know that \(\langle z^{b}\rangle\neq\langle z\rangle\) so that \(N:=\langle z\rangle\times\langle z^{b}\rangle\lhd X\). Set \(a_{1}=a^{\frac{n}{p}}\). Then \(a_{1}\in N\). If \(p=2\), then \(z\in Z(M)\). Suppose now that \(p\) is odd. Let \(N\leq P\in{\rm Syl}_{p}(M)\). By Proposition 2.10, \(P\) is a metacyclic group and so we know that \(N=\langle a_{1}\rangle\times\langle z\rangle\). Since \(\langle z\rangle\,{\rm char}\,\langle c\rangle_{M}\lhd M\), we may set \(z^{a}=z^{i}\) and \(z^{b}=a_{1}^{j}z^{l}\), where \(j\neq 0\). Then \((z^{a})^{b}=(z^{b})^{a^{-1}}=(a_{1}^{j}z^{l})^{a^{-1}}=a_{1}^{j}z^{li^{-1}}\) and \((z^{i})^{b}=a_{1}^{ji}z^{li}\), which implies \(a_{1}^{j(i-1)}=1\), that is \(i=1\), and so \(z\in Z(M)\) again. This implies \(z^{b}\in Z(M)\) and so \(N\leq Z(M)\). Thus \(a_{1}\in Z(M)\) and then \(\langle a_{1}\rangle\lhd X\). \(\Box\) **Lemma 3.3**: _Theorem 1.1 holds, provided \(G=Q\) and \(X=X(Q)\)._ **Proof** Now \(M=\langle a^{i}\rangle\langle c\rangle\) for some \(i\). We shall prove the lemma by induction on \(|X|\). With the same argument as in Lemma 3.1, we get that the conclusion holds for \(|X|\leq 24\). Assume that the result is true for all groups of order less than \(|X|\), and that \(|X|>24\). Then we shall carry out the proof by the following two cases.
Suppose first that \(\langle c\rangle_{X}\neq 1\), and set \(\overline{X}=X/\langle c\rangle_{X}\). Let \(M_{1}/\langle c\rangle_{X}\) be the biggest subgroup of \(\overline{X}\) containing \(\langle\overline{c}\rangle\) and contained in \(\langle\overline{a}\rangle\langle\overline{c}\rangle\). By the induction hypothesis, we get \(M_{1}/\langle c\rangle_{X}=\langle\overline{a}^{i}\rangle\langle\overline{c}\rangle\), where \(i\in\{1,2,3,4\}\). This gives that \(M=\langle a^{k}\rangle\langle c\rangle\) where \(k\in\{1,2,3,4\}\), as desired. Suppose now that \(\langle c\rangle_{X}=1\). Then by Lemma 3.2, \(\langle a\rangle_{X}\neq 1\). Set \(\overline{X}:=X/\langle a\rangle_{X}=\overline{G}\langle\overline{c}\rangle\) and let \(M_{2}/\langle a\rangle_{X}\) be the biggest subgroup of \(\overline{X}\) containing \(\langle\overline{c}\rangle\) and contained in \(\langle\overline{a}\rangle\langle\overline{c}\rangle\). If \(a^{n}\not\in\langle a\rangle_{X}\), then \(\overline{G}\) is a generalized quaternion group; if \(a^{n}\in\langle a\rangle_{X}\), then \(\overline{G}\) is a dihedral group. In the first case by the induction hypothesis on \(\overline{X}\), and in the second case by Lemma 3.1, we get \(M_{2}/\langle a\rangle_{X}=\langle\overline{a}^{i}\rangle\langle\overline{c}\rangle\), where \(i\in\{1,2,3,4\}\). Then \(\langle a^{i}\rangle\langle c\rangle\langle a\rangle_{X}=\langle c\rangle\langle a^{i}\rangle\langle a\rangle_{X}\). This gives \((\langle a^{i}\rangle\langle a\rangle_{X})\langle c\rangle\leq X\), which implies \(M=\langle a^{k}\rangle\langle c\rangle\) where \(k\in\{1,2,3,4\}\), as desired. \(\Box\)
## 4 Proof of Theorem 1.2
The proof of Theorem 1.2 consists of the following four lemmas. **Lemma 4.1**: _Suppose that \(G\in\{D,Q\}\), \(X=X(G)\), \(M=\langle a\rangle\langle c\rangle\) and \(\langle c\rangle_{X}=1\). If \(G\) is a \(2\)-group, then \(\langle a^{2}\rangle\lhd X\)._ **Proof** Suppose that \(X\) is a minimal counter-example. Let \(a_{0}\) be the unique involution of \(\langle a\rangle\), for both \(G\in\{Q,D\}\). Since \(\langle c\rangle_{X}=1\), by Lemma 3.2, we get \(\langle a\rangle_{X}\neq 1\), which implies \(\langle a_{0}\rangle\lhd X\), as \(G\) is a \(2\)-group. Consider \(\overline{X}=X/\langle a_{0}\rangle=\overline{G}\langle\overline{c}\rangle\). Set \(\langle\overline{c}\rangle_{\overline{X}}=(\langle a_{0}\rangle\times\langle c_{0}\rangle)/\langle a_{0}\rangle\). Then \(\langle c_{0}^{2}\rangle\lhd X\), which implies \(c_{0}^{2}=1\). If \(c_{0}=1\), then by using the induction on \(\overline{X}\), we get \(\langle\overline{a}^{2}\rangle\lhd\overline{X}\), and then \(\langle a^{2}\rangle\lhd X\), a contradiction, since \(X\) is a minimal counter-example. Therefore, \(o(c_{0})=2\). By the induction on \(X/\langle a_{0},c_{0}\rangle\), we get \((\langle a^{2}\rangle(\langle a_{0}\rangle\langle c_{0}\rangle))/\langle a_{0},c_{0}\rangle\lhd X/\langle a_{0},c_{0}\rangle\), that is \(H:=\langle a^{2}\rangle\rtimes\langle c_{0}\rangle\lhd X\). Then we continue the proof by the following two steps. _Step 1:_ Firstly, we shall show that \(X\) is a \(2\)-group in this case. In fact, noting that \(\langle a^{4}\rangle=\mathcal{O}_{1}(H)\operatorname{char}H\lhd X\), relabel \(\overline{X}=X/\langle a^{4}\rangle\), where we write \(\langle\overline{c}\rangle_{\overline{X}}=\langle\overline{c}^{i}\rangle\). Then \(\langle a^{4}\rangle\rtimes\langle c^{i}\rangle\lhd X\). Let \(Q\) be the \(2^{\prime}\)-Hall subgroup of \(\langle c^{i}\rangle\).
Since \(\operatorname{Aut}\left(\langle a^{4}\rangle\right)\) is a \(2-\)group, we know that \([Q,a^{4}]=1\) and so \(Q\lhd X\), contradicting with \(\langle c\rangle_{X}=1\). Therefore, \(\langle c^{i}\rangle\) is also a \(2-\)group. Reset \(\overline{X}=X/\langle a^{4}\rangle\langle c^{i}\rangle=\overline{G}\langle \overline{c}\rangle\). Now, \(|\overline{G}|=8\) and so \(\overline{G}\cong D_{8}\) (clearly, \(\overline{G}\) cannot be \(Q_{8}\)). Since \(\langle\overline{c}\rangle_{\overline{X}}=1\) we have \(\mathrm{o}(\overline{c})\mid 4\). Therefore, \(\overline{X}\) is a \(2-\)group and so is \(X\). _Step 2:_ Set \(K:=\langle a_{0}\rangle\times\langle c_{0}\rangle\cong\mathbb{Z}_{2}^{2}\). Consider the conjugacy of \(G\) on \(K\). Since \(\langle c_{0}\rangle\nleq X\), we get \(C_{G}(K)\nleq G\). Since \(G\) may be generated by some elements of the from \(a^{j}b\), there exists an element \(a^{i}b\in G\setminus C_{G}(K)\), exchanging \(c_{1}\) and \(c_{1}a_{0}\) (as \((a^{i}b)^{2}=a_{0}\)). Since \(X=GC=(\langle a\rangle\langle c\rangle).\langle b\rangle\), firstly we write \(c^{b}=a^{s}c^{t}\), where \(t\neq 0\). Then \[c=c^{c_{0}}=c^{b^{2}}=(a^{s}c^{t})^{b}=a^{-s}(a^{s}c^{t})^{t}=c^{t}(a^{s}c^{t})^ {t-1},\] that is \((a^{s}c^{t})^{t-1}=c^{1-t}\). Then we have \[(c^{t-1})^{b}=(c^{b})^{t-1}=(a^{s}c^{t})^{t-1}=c^{1-t}.\] If \(t\neq 1\), then \(c_{1}^{b}\in\langle c^{t-1}\rangle^{b}=\langle c^{t-1}\rangle\), contradicting with \(c_{1}^{b}=a_{1}c_{1}\). So \(t=1\), that is \(c^{b}=a^{s}c\). Secondly, we write \(c^{b}=c^{t_{1}}a^{s_{1}}\). With the same arguments, we may get \(t_{1}=1\) and \(c^{b}=ca^{s_{1}}\). Therefore, we have \(a^{s}c=c^{b}=ca^{s_{1}}\), that is \((a^{s})^{c}=a^{s_{1}}\). Clearly \(\langle a^{s}\rangle=\langle a^{s_{1}}\rangle\), that is \(c\) normalises \(\langle a^{s},b\rangle\). Then \[\langle a^{s},b\rangle\leq\cap_{c^{i}\in\langle c\rangle}G^{c^{i}}=\cap_{x\in X }G^{x}=G_{X},\] which implies \(b^{a}=ba^{-2}\in G_{X}\) so that \(a^{2}\in G_{X}\) and then \(\langle a^{2}\rangle\lhd X\). This contradicts the minimal of \(X\). \(\Box\) **Lemma 4.2**: _Suppose that \(G=D\), \(X=X(D)\), \(M=\langle a\rangle\langle c\rangle\) and \(\langle c\rangle_{X}=1\). Then \(\langle a^{2}\rangle\lhd X\)._ **Proof** By Lemma 4.1, the lemma is true when \(G\) is a 2-group. So assume that \(G\) is not a 2-group. Take a minimal counter-example \(X\). In the following Step 1, we show that the possible groups for \(G\) are \(D_{2^{c}p^{k}}\), where \(p\) is a prime and \(e\in\{1,2\}\); and in Step 2, we show that \(G\) cannot be these groups. _Step 1: Show that the possible groups for \(G\) are \(D_{2^{c}p^{k}}\), where \(p\) is an odd prime and \(e\in\{1,2\}\)._ By Lemma 3.2, let \(p\) be the maximal prime divisor of \(|\langle a\rangle_{X}|\) and set \(a_{0}=a^{\frac{n}{p}}\). Set \(\overline{X}=X/\langle a_{0}\rangle=\overline{G}\langle\overline{c}\rangle\) and \(\langle\overline{c}\rangle_{\overline{X}}=\langle\overline{c}_{0}\rangle\). (i) Suppose that that \(\langle\overline{c}\rangle_{\overline{X}}=1\). Then by the minimality of \(X\) we get \(\langle\overline{a}^{2}\rangle\lhd\overline{X}\), which implies \(\langle a^{2}\rangle\langle a_{0}\rangle\lhd X\). Since \(\langle a^{2}\rangle\langle a_{0}\rangle=\langle a^{2}\rangle\) or \(\langle a\rangle\), we get \(\langle a^{2}\rangle\lhd X\), a contradiction. (ii)Suppose that that \(\langle\overline{c}\rangle\lhd\overline{X}\). 
Then \(\overline{X}/C_{\overline{X}}(\langle\overline{c}\rangle)\leq\mathop{\rm Aut} \nolimits\left(\langle\overline{c}\rangle\right)\), which is abelian and so \(\overline{X}^{\prime}\leq C_{\overline{X}}(\langle\overline{c}\rangle)\). Then \(\overline{a}^{2}\in\overline{G}^{\prime}\leq\overline{X}^{\prime}\leq C_{ \overline{X}}(\langle\overline{c}\rangle)\), that is \([a^{2},c]\in\langle a_{0}\rangle\), which implies \(\langle a^{2},a_{0}\rangle\lhd X\), and again we have \(\langle a^{2}\rangle\lhd X\) is a contradiction. By (i) and (ii), we have \(1\neq\langle\overline{c}\rangle_{\overline{X}}=\langle\overline{c}_{0}\rangle \lneq\langle\overline{c}\rangle\). Reset \[K=\langle a_{0}\rangle\rtimes\langle c_{0}\rangle,\,\overline{X}=X/K= \overline{G}\langle\overline{c}\rangle,\,H=\langle a^{2}\rangle\rtimes \langle c_{0}\rangle.\] If \(\mathop{\rm o}\nolimits(a_{0})<\mathop{\rm o}\nolimits(c_{0})\), then \(\{1\}\lneqq\langle c_{0}^{j}\rangle=Z(K)\lhd X\) is, for some \(j\), a contradiction. Therefore, \(1<\mathop{\rm o}\nolimits(c_{0})\leq\mathop{\rm o}\nolimits(a_{0})\). Then we have the following two cases: _Case 1: \(K=\langle a_{0}\rangle\rtimes\langle c_{0}\rangle\cong\mathbb{Z}_{p}\rtimes \mathbb{Z}_{r}\), a Frobenius group, where \(r\geq 2\)._ In this case, \(p\) is odd. Set \(\overline{X}=X/K\). By the minimality of \(X\), we have \(H/K=\langle\overline{a}^{2}\rangle\lhd\overline{X}\), that is \(H:=\langle a^{2}\rangle\rtimes\langle c_{0}\rangle\lhd X\). Since \(K\lhd X\), we know that \(\langle a^{2}\rangle/\langle a_{0}\rangle\) and \(\langle c_{0}\rangle\langle a_{0}\rangle/\langle a_{0}\rangle\) are normal in \(H/\langle a_{0}\rangle\). Then \([a^{2},c_{0}]\leq\langle a_{0}\rangle\). So one can write \[H=\langle a^{2},c_{0}|a^{n}=c_{0}^{r}=1,\,(a^{2})^{c_{0}}=a^{2}a_{0}^{j}\rangle.\] Let \(P\in\mathop{\rm Syl}\nolimits_{p}(H)\). Then \(P\mathop{\rm char}\nolimits H\lhd X\) so that \(P\leq\langle a\rangle_{X}\). Clearly, one can check \(Z(H)=\langle a^{2p}\rangle\). Then \(\langle a^{2p}\rangle\leq\langle a\rangle_{X}\). Note \(\langle a^{2p},P\rangle=\langle a^{2p},a^{n/p^{k}}\rangle=\langle a^{2}\rangle\), where \(p^{k}\mid\mid n\), so that \(a^{2}\in\langle a\rangle_{X}\) is a contradiction again. _Case 2: \(K=\langle a_{0}\rangle\times\langle c_{0}\rangle\cong\mathbb{Z}_{p}^{2}.\)_ Set \(H=\langle a^{2},c_{0}\rangle\) again. With the same reason as that in Case 1, we have \(H\lhd X\). Suppose that \(a_{0}\notin\langle a^{2}\rangle\). Then \(p=2\) and \(\frac{n}{2}\) is odd. Noting \(\langle a^{2}\rangle\) is \(2^{\prime}-\)Hall subgroup of \(H\), we have \(\langle a^{2}\rangle\operatorname{char}H\lhd X\), which implies \(a^{2}\in\langle a\rangle_{X}\), a contradiction. Suppose that \(a_{0}\in\langle a^{2}\rangle\). Let \(H_{1}\) be the \(p^{\prime}-\)Hall subgroup of \(H\). Then \(H_{1}\) is also the \(p^{\prime}-\)Hall subgroup of \(\langle a^{2}\rangle\). Then \(H_{1}\lhd X\), which implies \(H_{1}\leq\langle a\rangle_{X}\). Suppose that \(H_{1}\neq 1\). Let \(a_{2}\) be an element of order \(q\) in \(H_{1}\), where \(q<p\) is a prime as the maximality of \(p\). Consider \(\overline{X}:=X/\langle a_{2}\rangle=\overline{G}\langle c\rangle\). Similarly, we have \(1\neq\langle\overline{c}\rangle_{\overline{X}}:=\langle\overline{c}_{2} \rangle\lneq\overline{C}\) and \(H_{0}:=\langle a^{2}\rangle\rtimes\langle c_{2}\rangle\lhd X\). Let \(P\in\operatorname{Syl}_{p}(H_{0})\). Then \(P\operatorname{char}H\) and so \(P\lhd X\), which implies \(P\leq\langle a\rangle_{X}\). 
Noting \(\langle H_{1},P\rangle=\langle a^{2}\rangle\), we therefore get \(a^{2}\in\langle a\rangle_{X}\), a contradiction. So \(H_{1}=1\), which means that \(G\) is \(D_{2^{e}p^{k}}\) where \(p\) is an odd prime and \(e=1,2\), as \(G\) is assumed to be not a 2-group. _Step 2: Show that \(G\) cannot be \(D_{2^{e}p^{k}}\), where \(e\in\{1,2\}\) and \(p\) is an odd prime._ Relabel \(a^{2}\) by \(a_{2}\). Following Proposition 2.12, we write \[H=\langle a_{2},c_{0}\rangle=\langle a_{2}^{p^{k}}={c_{0}}^{p}=1,a_{2}^{c_{0}} =a_{2}^{1+k^{\prime}p^{k-1}}\rangle,\] where \(k\geq 2\) and \(k^{\prime}\) may be 0. We shall get a contradiction by considering two groups. (1) The cyclic extension \(H\langle c\rangle=\langle a_{2}\rangle\langle c\rangle\) of \(H\) by \(c\), where \(c^{l}=c_{0}\) for some \(l\). Let \(\pi\) be the automorphism of \(H\) by mapping \(a_{2}\) to \(a_{2}^{i}{c_{0}}^{j}\) and \(c_{0}\) to \(c_{0}\), where \(i,j\not\equiv 0(\operatorname{mod}\,p).\) Then we have (i) \(\pi\) preserves the relation of \(H\): that is \[(a_{2}^{i}{c_{0}}^{j})^{p^{k}}=a_{2}^{ip^{k}}=1,\quad(a_{2}^{i}{c_{0}}^{j})^{p ^{k-1}}=a_{2}^{ip^{k-1}}\neq 1,\] \[\pi(a_{2})^{1+k^{\prime}p^{k-1}}=(a_{2}^{i}{c_{0}}^{j})^{1+k^{\prime}p^{k-1}} =a_{2}^{i}{c_{0}}^{j}(a^{ik^{\prime}p^{k-1}}{c_{0}}^{p^{k-1}})=a_{2}^{i(1+k^{ \prime}p^{k-1})}{c_{1}}^{j}=(\pi(a_{2}))^{\pi(c_{0})}.\] (ii) \(\pi^{l}=\operatorname{Inn}(c_{1}):\) \[\pi^{l}(a)=(a^{i}{c_{0}}^{j})^{l}=a^{i^{l}}{c_{0}}^{j\sum_{w=0}^{l-1}i^{w}}a_{ 1}^{x}=\operatorname{Inn}(c_{0})(a)=a^{1+k^{\prime}p^{k-1}},\] that is \[i^{l}+xp^{k-1}\equiv 1+k^{\prime}p^{k-1}(\operatorname{mod}\,p^{k}),\quad\sum_{w =0}^{l-1}i^{w}\equiv 0(\operatorname{mod}\,p).\] So if \(i=1(\operatorname{mod}\,p)\) (\(i-1\) is of order a \(p\)-power), then \(p\mid l\); if \(i\neq 1(\operatorname{mod}\,p)\) then \(l=up^{v}\), where \(u\mid(p-1).\) Now we have \(\operatorname{o}(c)=pl\) and \(\operatorname{o}(i)=pl\) or \(l\). (2) The group \(X=\langle H\langle c\rangle,b\rangle\). Set \[{c_{0}}^{a}=a^{m_{2}p^{k-1}}{c_{0}}^{s^{\prime}},\quad{c_{0}}^{b}=a^{m_{1}p^{k-1}}{ c_{0}}^{s},\quad c^{b}=a^{r}c^{t},\] where \(m_{2}\neq 0\). Then we are showing \(t=1\). In fact, since \(b\) preserves \(a_{2}^{c}=a_{2}^{i}{c_{0}}^{j}\), we have that in \(\overline{X}=X/\langle a_{1}\rangle\), \(\overline{a}_{2}^{-i^{t}}=\overline{a}_{2}^{-i}\), that is \(i^{t-1}\equiv 1\pmod{p^{k-1}}\), which implies \(i^{p(t-1)}\equiv 1(\bmod p^{k})\) and so either \(l\mid(t-1)\) or \(l\mid p(t-1)\). If \(l\mid(t-1)\), then \(c^{t-1}\leq\langle c^{l}\rangle=\langle c_{0}\rangle\). Since \(m_{1}\neq 0\), we have \(t=1\). Suppose that \(l\mid p(t-1)\) and \(l\nmid(t-1)\). Then \(c_{0}\in\langle c^{t-1}\rangle\). Since \[c=c^{b^{2}}=(a^{r}c^{t})^{b}=a^{-r}(a^{r}c^{t})^{t}=c^{t}(a^{r}c^{t})^{t-1},\] we have \((a^{r}c^{t})^{t-1}=c^{1-t}\), which implies \((c^{t-1})^{b}=(a^{r}c^{t})^{t-1}=c^{1-t}.\) Then \(b\) normalises \(\langle c^{1-t}\rangle\), which implies that \(b\) normalises \(\langle c_{0}\rangle\), forcing \(t=1\), that is \(c^{b}=a^{r}c\). Similarly, we may set \(c^{b}=c^{t_{1}}a^{r_{1}}\) and get \(t_{1}=1\) and \(c^{b}=ca^{r_{1}}\). Therefore, we have \(ca^{r_{1}}=c^{b}=a^{r}c\), that is \((a^{r})^{c}=a^{r_{1}}.\) Then \[\langle a^{r},b\rangle\leq\cap_{c^{i}\in\langle c\rangle}G^{c^{i}}=\cap_{x\in X }G^{x}=G_{X}.\] Since any normal subgroup of \(G\cong D_{2n}\) containing \(b\) is either \(\langle a^{2},b\rangle\) or \(G\), we get \(\langle a^{2}\rangle\lhd X\), a contradiction. 
\(\square\) **Lemma 4.3**: _Suppose that \(G\in\{Q,D\}\), \(X=X(G)\), \(\langle c\rangle_{X}=1\) and \(M=\langle a\rangle\langle c\rangle\). If \(\langle a\rangle\lhd X\), then \(G\lhd X\)._ **Proof**\(X=(\langle a\rangle\rtimes\langle c\rangle).\langle b\rangle\), and so we may write \(a^{c}=a^{i}\) and \(c^{b}=a^{k}c^{j}\). If \(j=1\), then \(G\lhd X\). So assume \(j\neq 1\). Since \(b^{2}=a^{n}\) (\(o(a^{n})=1\) or \(2\), if \(G=D\) or \(G=Q\), resp.) and \(\langle a^{n}\rangle\lhd X\), we get \(a^{n}\in Z(X)\), which implies \(c=c^{b^{2}}\). Then \[c=(c^{b})^{b}=(a^{k}c^{j})^{b}=a^{-k}(a^{k}c^{j})^{j}=(c^{j}a^{k})^{j-1}c^{j},\] that is \(c^{1-j}=(c^{j}a^{k})^{j-1}=(c^{j-1})^{b}\), so that \(b\) normalizes \(\langle c^{1-j}\rangle\). Since \(\overline{X}=X/C_{X}(\langle a\rangle)\leq\operatorname{Aut}\left(\langle a\rangle\right)\) which is abelian, we get \(\overline{c}=\overline{c}^{\overline{b}}=\overline{c}^{j}\), that is \(c^{1-j}\leq C_{X}(\langle a\rangle)\) so that \([c^{1-j},a]=1\). Thus we get \(\langle c^{1-j}\rangle\lhd X\). It follows from \(\langle c\rangle_{X}=1\) that \(j=1\), a contradiction. \(\square\) **Lemma 4.4**: _Suppose that \(G=Q\), \(X=X(Q)\), \(M=\langle a\rangle\langle c\rangle\) and \(\langle c\rangle_{X}=1\). Then \(\langle a^{2}\rangle\lhd X\)._ **Proof** By Lemma 4.1, we just consider the case when \(G\) is not a \(2\)-group. Take a minimal counter-example \(X\) and set \(a_{1}:=a^{n}.\) Similarly, we carry out the proof by the following two steps: _Step 1: Show that the possible groups for \(G\) are \(Q_{4p^{k}}\), where \(p\) is an odd prime._ By Lemma 3.2, let \(p\) be the maximal prime divisor of \(|\langle a\rangle_{X}|\) and \(a_{0}=a^{\frac{n}{p}}\). Then \(G/\langle a_{0}\rangle\) is either a generalized quaternion group or a dihedral group. Let \(K=\langle a_{0}\rangle\rtimes\langle c_{0}\rangle\) such that \(K/\langle a_{0}\rangle\) is the core of \(\langle a_{0},c\rangle/\langle a_{0}\rangle\) in \(X/\langle a_{0}\rangle\) and set \(H=\langle a^{2}\rangle\rtimes\langle c_{0}\rangle\). Using completely same arguments as that in Lemma 4.2, one may get \(G\cong Q_{4p^{k}}\), where \(p\) is an odd prime. Remind that Lemma 4.2 is used when we make an induction for \(G/\langle a_{0}\rangle\) being a dihedral group. _Step 2: Case \(G\cong Q_{4p^{k}}\), where \(p\) is an odd prime._ Suppose that \(a_{1}\in\langle a\rangle_{X}\). Let \(\langle a_{1}\rangle\rtimes\langle c_{1}\rangle/\langle a_{1}\rangle\) be the core of \(\langle a_{1},c\rangle/\langle a_{1}\rangle\) in \(X/\langle a_{1}\rangle\). Since \(\langle c_{1}^{2}\rangle\lhd X\), we get \(c_{1}^{2}=1\). Consider \(\overline{X}=X/\langle a_{1}\rangle\langle c_{1}\rangle\). Since \(\overline{G}\cong D_{2p^{k}}\) is a dihedral group, by Lemma 4.2, we get \(\langle\overline{a}^{2}\rangle=\langle\overline{a}\rangle\lhd\overline{X}\), which implies \(\langle a\rangle\rtimes\langle c_{1}\rangle\lhd X\). Then \(\langle a^{2}\rangle\lhd X\), a contradiction. So in what follows, we assume \(a_{1}\notin\langle a\rangle_{X}\), that is \(\langle a\rangle_{X}\) is \(p\)-group. Then we continue the proof by two substeps. _Substep 2.1: Show that the possible values of \(m\) are \(pq^{e}\), for a prime \(q\) (may be equal to \(p\)) and an integer \(e\)._ For the contrary, we assume that \(m=pq^{e}m_{1}\) where \(m_{1}\neq 1\), \(p\nmid m_{1}\) and \(q\nmid m_{1}\). Since \(H=\langle a^{2},c_{0}\rangle\) and \(b^{2}=a_{1}\), we get \(\overline{X}=X/H=\langle\overline{b}\rangle\langle\overline{c}\rangle\). 
By considering the permutation representation of \(\overline{X}\) on the cosets \([\overline{X}:\langle\overline{c}\rangle]\) of size \(4\), we know that \(\langle\overline{c}^{2}\rangle\lhd X\). So \(\langle b,c^{2},H\rangle\leq X\), that is \(X_{1}:=\langle a,b\rangle\langle c^{2}\rangle=G\langle c^{2}\rangle\leq X\). Firstly, suppose that \(m\) (=\(o(c)\)) is even. Then \([X:X_{1}]=2\). Let \(\langle c_{2}\rangle\) be the Sylow \(2\)-subgroup of \(\langle c\rangle\). By the induction on \(X_{1}\), and in particular, \(c^{2}\) normalizes \(\langle a^{2}\rangle\). Since \(m=pq^{e}m_{1}\) has other prime divisors distinct with \(2\) and \(p\), we get \(X_{2}:=H(\langle b\rangle\langle c_{2}\rangle)\nleq X.\) By the induction on \(X_{2}\) again, \(\langle a^{2}\rangle\lhd X_{2}\), which implies \(\langle c_{2}\rangle\) normalises \(\langle a^{2}\rangle\). In summary, \(\langle c_{2},c^{2}\rangle\) normalizes \(\langle a^{2}\rangle\). Since \(\langle b\rangle\langle c_{2}\rangle\) is a Sylow \(2-\)subgroup of \(X\), we have \(\langle c_{2},c^{2}\rangle=\langle c\rangle\), that is \(\langle a^{2}\rangle\lhd X\), a contradiction. Secondly, suppose that \(m\) is odd. Then both \(q\) and \(m_{1}\) are odd, so that \(X=X_{1}=(\langle a^{2}\rangle\rtimes\langle c\rangle)\rtimes\langle b\rangle\). By the induction on \(X_{3}:=\langle H,c^{\frac{m}{m_{1}}}\rangle=\langle a,b\rangle\langle c^{ \frac{m}{m_{1}}}\rangle\nleq X\) and \(X_{4}:=\langle H,c^{\frac{m}{pq^{e}}}\rangle=\langle a,b\rangle\langle c^{ \frac{m}{pq^{e}}}\rangle\nleq X\), respectively, we get both \(\langle c^{\frac{m}{m_{1}}}\rangle\) and \(\langle c^{\frac{m}{pq^{e}}}\rangle\) normalise \(\langle a^{2}\rangle\). Noting \(\langle c^{\frac{m}{m_{1}}},c^{\frac{m}{pq^{e}}}\rangle=\langle c\rangle\), we get \(\langle a^{2}\rangle\lhd X\), a contradiction again. _Substep 2.2: Exclude the case \(m=pq^{e}\), for a prime \(q\) and an integer \(e\geq 1\)._ Recall \(a_{1}=a^{n}\), the unique involution in \(G\), \(\langle a_{0}\rangle\) is a normal subgroup of order \(p\) in \(X\), \(\langle a_{0}\rangle\langle c_{0}\rangle/\langle a_{0}\rangle\) is the core of \(\langle a_{0}\rangle\langle c\rangle/\langle a_{0}\rangle\) in \(X/\langle a_{0}\rangle\), \(H=\langle a^{2}\rangle\rtimes\langle c_{0}\rangle\lhd X\) (by the induction hypothesis) and \(X=((H.\langle c\rangle).\langle a_{1}\rangle).\langle b\rangle\). Noting \(\langle a_{0}\rangle\langle c_{0}\rangle=(\langle a_{0}\rangle\langle c \rangle)_{X}\) and \(\langle c\rangle_{X}=1\), we get that \(\langle a_{0}\rangle\langle c_{0}\rangle\) is either \(\mathbb{Z}_{p}^{2}\) or \(\mathbb{Z}_{p}\rtimes\mathbb{Z}_{q^{e_{1}}}\) where \(q^{e_{1}}\mid p-1\) and \(e_{1}\leq e\). If \(\langle a_{0}\rangle\langle c_{0}\rangle\cong\mathbb{Z}_{p}\rtimes\mathbb{Z}_{q ^{e_{1}}}\) for some \(q<p\) and \(e_{1}\leq e\), then we get \(\langle a^{2}\rangle\operatorname{char}H\lhd X\), a contradiction. Therefore, we get \(\langle a_{0},c_{0}\rangle\cong\mathbb{Z}_{p}^{2}\), which implies \(\langle c_{0}\rangle=\langle c^{q^{e}}\rangle\leq\langle c\rangle\). Note that \(\langle a^{2p}\rangle=\mathfrak{U}_{1}(H)\operatorname{char}H\lhd X\). Thus \(\langle a^{2p}\rangle\lhd X\). Set \(X_{5}=(H.\langle c^{q}\rangle)\rtimes\langle b\rangle=\langle a,b\rangle \langle c^{q}\rangle<X\). By the induction on \(X_{5}\), we get \(\langle a^{2}\rangle\lhd X_{5}\), that is \(X_{5}=(\langle a^{2}\rangle\rtimes\langle c^{q}\rangle)\rtimes\langle b\rangle\). 
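For the reader's convenience, here is a routine check (an illustrative verification, using only the defining relation \(a^{b}=a^{-1}\) from \(R\)) of the first inclusion invoked in the next sentence: \[[a,b]=a^{-1}a^{b}=a^{-2},\] so that \(\langle a^{2}\rangle\leq G^{\prime}\).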
Clearly, \(\langle a^{2}\rangle\leq G^{\prime}\leq X_{5}^{\prime}\leq\langle a^{2},c^{q}\rangle\). So set \(X_{5}^{\prime}=\langle a^{2},c_{3}\rangle\) for \(c_{3}\in\langle c^{q}\rangle\). By Proposition 2.7, both \(X/C_{X}(\langle a^{2p}\rangle)\) and \(X_{5}/C_{X_{5}}(\langle a^{2}\rangle)\) are abelian, which implies that \(X^{\prime}\leq C_{X}(\langle a^{2p}\rangle)\) and \(X_{5}^{\prime}\leq C_{X_{5}}(\langle a^{2}\rangle)\). Note that \(\langle a^{2}\rangle\leq X_{5}^{\prime}\). Thus \(X_{5}^{\prime}\) is abelian. The \(p^{\prime}\)-Hall subgroup of \(X^{\prime}_{5}\) is normal in \(X_{5}\); since \(\langle c^{q}\rangle_{X_{5}}=1\), it must be trivial, and so \(X^{\prime}_{5}\) is an abelian \(p\)-group. Set \(L:=H\rtimes\langle a_{1}\rangle\leq X_{5}\). Suppose that \(L\lhd X\). If \(H\) is abelian, then we get that either \(\langle a^{2}\rangle=Z(L)\operatorname{char}L\lhd X\), a contradiction; or \(L\) is abelian, forcing \(\langle a_{1}\rangle\operatorname{char}L\lhd X\), a contradiction again. Therefore, \(H\) is non-abelian. Note that \(X^{\prime}_{5}=\langle a^{2},c_{3}\rangle\) for \(c_{3}\in\langle c^{q}\rangle\). If \(c_{3}\neq 1\), then \(c_{0}\in\langle c_{3}\rangle\leq X^{\prime}_{5}\) as \(\operatorname{o}(c_{0})=p\), which implies that \(H=\langle a^{2},c_{0}\rangle\) is abelian, a contradiction. Therefore, \(X^{\prime}_{5}=\langle a^{2}\rangle\), which implies \(L=\langle a\rangle\rtimes\langle c_{0}\rangle\). Note that \(\langle a_{1}\rangle\operatorname{char}L\lhd X\). Thus \(\langle a_{1}\rangle\lhd X\), a contradiction. Suppose that \(L\ntrianglelefteq X\). Then in \(\overline{X}=X/H=(\langle\overline{c}\rangle\rtimes\langle\overline{a_{1}}\rangle).\langle\overline{b}\rangle\), we get that either \(\overline{c^{\overline{a_{1}}}}=\overline{c}^{-1}\) if \(q\) is odd; or \(\overline{c^{\overline{a_{1}}}}\) is either \(\overline{c}^{-1}\) or \(\overline{c}^{\pm 1+2^{e-1}}\) if \(q=2\). We divide the argument into the following two cases: _Case 1: \(q\) is an odd prime._ In this case, since \(\overline{c^{\overline{a_{1}}}}=\overline{c}^{-1}\) in \(\overline{X}=X/H\), we get \(\langle a^{2},c^{qp}\rangle\leq X^{\prime}_{5}\leq\langle a^{2}\rangle\langle c^{q}\rangle\). Note that \(X^{\prime}_{5}\) is an abelian \(p\)-group. Thus either \(q\neq p\) and \(e=1\); or \(q=p\). Suppose that \(q\neq p\) and \(e=1\), that is \(\operatorname{o}(c)=pq\). Consider \(M=\langle a\rangle\langle c\rangle\lhd X\). Then by Proposition 2.9, \(M^{\prime}\) is abelian. Note that \(\langle c\rangle_{X}=1\) and \(G_{X}\) is a \(p\)-group. Thus, by the same argument as for \(X^{\prime}_{5}\), \(M^{\prime}\) is an abelian \(p\)-group. Noting that \(\langle a_{1}\rangle\langle c^{p}\rangle\) is the \(p^{\prime}\)-Hall subgroup of \(M\), we get \([a_{1},c^{p}]\in\langle a_{1}\rangle\langle c^{p}\rangle\cap M^{\prime}=1\), which implies \(\overline{c^{\overline{a_{1}}}}=\overline{c}\) in \(\overline{X}=X/H\), a contradiction. So in what follows, we assume \(q=p\), that is \(\operatorname{o}(c)=p^{e+1}\). Note that \(\overline{c^{\overline{a_{1}}}}=\overline{c}^{-1}\) in \(\overline{X}=X/H\) and \(\langle a^{2}\rangle\leq X^{\prime}_{5}\). Thus \(X^{\prime}_{5}\) is either \(\langle a^{2}\rangle\langle c^{p}\rangle\) or \(\langle a^{2}\rangle\), noting \(X^{\prime}_{5}=\langle a^{2}\rangle\) only happens when \(e=1\). Suppose that \(X^{\prime}_{5}=\langle a^{2}\rangle\langle c^{p}\rangle\). Note that \(H\leq X^{\prime}_{5}\). Thus \(H=\langle a^{2}\rangle\rtimes\langle c_{0}\rangle\) is abelian.
Note that both \(\langle a^{2}\rangle\) and \(\langle c\rangle\) are \(p\)-groups and \(X=(H.\langle c\rangle)\rtimes\langle b\rangle\). Set \((a^{2})^{c}=a^{2s}c^{t}_{0}\) and \(c^{b}=a^{2u}c^{v}\) where \(s\equiv 1(\operatorname{mod}\,p)\) and \(p\nmid v\). Then for an integer \(w\), we get \((a^{2})^{c^{w}}=a^{2x_{1}}c^{wt}_{0}\) and \(c^{b}_{0}=a^{x_{2}}c^{v}_{0}\) for some integers \(x_{1}\) and \(x_{2}\). Since \(((a^{2})^{c})^{b}=(a^{2s}c^{t}_{0})^{b}\), there exist some integers \(x\) and \(y\) such that \[((a^{2})^{c})^{b}=(a^{-2})^{c^{v}}=a^{x}c^{-vt}_{0}\quad\text{and}\quad((a^{2})^{s}c^{t}_{0})^{b}=a^{y}c^{vt}_{0},\] which gives \(t\equiv 0(\operatorname{mod}\,p)\). Then \(\langle a^{2}\rangle\lhd X\), a contradiction. Suppose that \(X^{\prime}_{5}=\langle a^{2}\rangle\). Then \(\operatorname{o}(c)=p^{2}\) and \(c_{0}=c^{p}\). Note that \(X_{5}=G\rtimes\langle c^{q}\rangle=H\rtimes\langle b\rangle\). Thus \([c_{0},a_{1}]=1\). Set \(c^{a_{1}}=a^{x}c^{-1+yp}\) as \(\overline{c^{\overline{a_{1}}}}=\overline{c}^{-1}\) in \(\overline{X}=X/H\). Then \[c=c^{a_{1}^{2}}=a^{x}(a^{x}c^{-1+yp})^{-1+yq}=a^{x}c^{1-yp}a^{-x}c^{yp},\] which implies \((a^{x})^{c^{1-yp}}=a^{x}\). Then \([a^{x},c]=1\). Note that \(c_{0}=c^{p}\) and \(c_{0}=c^{a_{1}}_{0}=(c^{a_{1}})^{p}=a^{x}c^{-p}\) for some \(x\). Thus we get \(c_{0}^{2}=1\), contradicting \(\operatorname{o}(c_{0})=p\). _Case 2: \(q=2\)._ In this case, we know that \(X_{5}=(\langle a^{2}\rangle\langle c^{2}\rangle)\rtimes\langle b\rangle\lhd X\) and \(\overline{c^{\overline{a_{1}}}}\) is either \(\overline{c}^{-1}\) or \(\overline{c}^{\pm 1+2^{e-1}}\) in \(X/H\), noting that \(\overline{c^{\overline{a_{1}}}}=\overline{c}^{\pm 1+2^{e-1}}\) only happens when \(e\geq 2\). By Proposition 2.11, we get that both \(\langle b\rangle\langle c^{p}\rangle\) and \(\langle c^{2p}\rangle\langle b\rangle\) are Sylow \(2\)-subgroups of \(X\) and \(X_{5}\), respectively. Note that \(a^{2}\in X_{5}^{\prime}\), \(X_{5}^{\prime}\operatorname{char}X_{5}\lhd X\) and \(X_{5}^{\prime}\) is an abelian \(p\)-group. Thus \(X_{5}^{\prime}=H\), which implies that \(H\) is abelian. Note that \(\langle c^{2p}\rangle\langle b\rangle\) is the Sylow \(2\)-subgroup of \(X_{5}\). Thus \([c^{2p},b]\in X_{5}^{\prime}\cap\langle c^{2p}\rangle\langle b\rangle=1\), which implies that \([c^{2p},a_{1}]=1\) and \(\langle c^{2p}\rangle\langle b\rangle\) is abelian. Then \(\overline{c^{\overline{a_{1}}}}=\overline{c}^{1+2^{e-1}}\) in \(X/H\). Note that \(\langle b\rangle\langle c^{p}\rangle=(\langle c^{p}\rangle\rtimes\langle a_{1}\rangle).\langle b\rangle\). Then \((c^{p})^{a_{1}}=c^{p+2^{e-1}p}\), which implies \(c^{2^{e-1}p}\in M^{\prime}\). Suppose that \(e=2\). Since \(|\langle b\rangle|\geq|\langle c^{p}\rangle|\), by Proposition 2.10, we get \(\langle a_{1}\rangle\lhd\langle b\rangle\langle c^{p}\rangle\). Note that \(b^{2}=a_{1}\) is an involution. Thus \([a_{1},c^{p}]=1\), a contradiction. So in what follows, we assume \(e>2\). Noting \(\langle c^{p}\rangle\rtimes\langle a_{1}\rangle=\langle a_{1},c^{p}\mid a_{1}^{2}=c^{2^{e}p}=1,(c^{p})^{a_{1}}=c^{(1+2^{e-1})p}\rangle\), there are only three involutions in \(\langle c^{p}\rangle\rtimes\langle a_{1}\rangle\): \(a_{1},c^{2^{e-1}p}\) and \(a_{1}c^{2^{e-1}p}\). By considering the permutation representation of \(\langle b\rangle\langle c^{q}\rangle\) on the cosets \([\langle b\rangle\langle c^{p}\rangle:\langle c^{p}\rangle]\) of size \(4\), we know that \(\langle c^{2p}\rangle\lhd X\). By Proposition 2.9, \(M^{\prime}\) is abelian.
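As a quick aside (an illustrative verification of the involution count used above, not needed elsewhere): for \(g=a_{1}c^{kp}\) one computes \[g^{2}=a_{1}c^{kp}a_{1}c^{kp}=c^{kp(1+2^{e-1})}c^{kp}=c^{kp(2+2^{e-1})},\] so \(g^{2}=1\) forces \(k(1+2^{e-2})\equiv 0({\rm mod}\ 2^{e-1})\); since \(1+2^{e-2}\) is odd for \(e>2\), this gives \(k\in\{0,2^{e-1}\}\), yielding exactly the three involutions \(a_{1}\), \(c^{2^{e-1}p}\) and \(a_{1}c^{2^{e-1}p}\) listed above.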
Let \(M_{2}\) be the Sylow \(2\)-subgroup of \(M^{\prime}\). Note that \(M_{2}\operatorname{char}M^{\prime}\operatorname{char}M\lhd X\), \(c^{2^{e-1}p}\in M^{\prime}\), \(\langle c\rangle_{X}=1\) and \(a_{1}\) is an involution. Thus we get \(M_{2}\cong\mathbb{Z}_{2}^{2}\), which implies \(M_{2}=\langle c^{2^{e-1}p},a_{1}\rangle\). Consider \(HM_{2}\leq X\). Since \(M_{2}\lhd X\), \(H\lhd X\), \(H\cap M_{2}=1\) and \(p\) is an odd prime, we get \(HM_{2}=H\times M_{2}\lhd X\), which implies \(a\) normalises \(\langle c^{2^{e-1}p}\rangle\). Since \(X=\langle a,b,c\rangle\) and \([b,c^{2p}]=1\), we get \(\langle c^{2^{e-1}p}\rangle\lhd X\), a contradiction. \(\square\) **Remark 4.5**: _If the reader is familiar with the theory of regular maps, then two known results in [40, Theorem 1.3] and [13, Lemma 5] on skew-morphisms can be used to give a short proof of Theorem 1.2, see below._ **Proof:** (1) Suppose that \(\langle c\rangle_{X}=1\). For the five cases of Theorem 1.1, we know that \(M_{X}\) is \(M\), \(\langle a^{2}\rangle\langle c^{2}\rangle\), \(\langle a^{2}\rangle\langle c^{3}\rangle\), \(\langle a^{3}\rangle\langle c^{4}\rangle\) or \(\langle a^{4}\rangle\langle c^{3}\rangle\), respectively. Suppose \(M_{X}=\langle a^{i}\rangle\langle c^{j}\rangle\), which is one of the above four cases. Set \(X_{1}=GM_{X}=\langle a,b\rangle\langle c^{j}\rangle\). Then \(\langle c^{j}\rangle_{X_{1}}=1\). By [40, Theorem 1.3] and [13, Lemma 5], we get \(\langle a^{2}\rangle\lhd X_{1}\), that is \(c^{j}\) normalizes \(\langle a^{2}\rangle\), and so \(M_{X}\cap\langle a^{2}\rangle\lhd M_{X}\). (2) Suppose \(\langle c_{1}\rangle:=\langle c\rangle_{X}\neq 1\). Then by Proposition 2.7, we get \(\langle c\rangle\leq C_{X}(\langle c_{1}\rangle)\lhd X\) and \(\overline{X}=X/C_{X}(\langle c_{1}\rangle)=\langle\overline{a},\overline{b}\rangle\) is abelian. This implies \(\langle\overline{a},\overline{b}\rangle\lessneq D_{4}\). Therefore, \(\langle a^{2},c\rangle\leq C_{X}(\langle c_{1}\rangle)\), and hence \(|X:C_{X}(\langle c_{1}\rangle)|\leq 4\). \(\square\)

## 5 Proof of Theorem 1.3

To prove Theorem 1.3, set \(R:=\{a^{2n}=c^{m}=1,\,b^{2}=a^{n},\,a^{b}=a^{-1}\}.\) Then we shall deal with the five cases in Theorem 1.1 in the following five subsections, respectively. Let \(A=G.\langle t\rangle\) where \(G\lhd A\) and \(t^{l}=g\in G\). Then \(t\) induces an automorphism \(\tau\) of \(G\) by conjugation. Recall that by the cyclic extension theory of groups, this extension is valid if and only if \[\tau^{l}=\operatorname{Inn}(g)\quad\text{and}\quad\tau(g)=g.\]

### \(M=\langle a\rangle\langle c\rangle\)

**Lemma 5.1**: _Suppose that \(X=X(Q)\), \(M=\langle a\rangle\langle c\rangle\) and \(\langle c\rangle_{X}=1\).
Then_ \[X=\langle a,b,c|R,(a^{2})^{c}=a^{2r},c^{a}=a^{2s}c^{t},c^{b}=a^{u}c^{v}\rangle, \tag{2}\] _where_ \(r^{t-1}-1\equiv r^{v-1}-1\equiv 0({\rm mod}\ n),t^{2}\equiv 1({\rm mod}\ m),\) \(2s\sum_{l=1}^{t}r^{l}+2sr\equiv 2sr+2s\sum_{l=1}^{v}r^{l}-u\sum_{l=1}^{t}r^{l}+ ur\equiv 2(1-r)({\rm mod}\ 2n),\) if \(2\mid n\), then \(u(\sum_{l=0}^{v-1}r^{l}-1)\equiv 0({\rm mod}\ 2n)\,{\rm and}\,v^{2}\equiv 1({\rm mod} \ m),\) if \(2\nmid n\), then \(u\sum_{l=1}^{v}r^{l}-ur\equiv 2sr+(n-1)(1-r)({\rm mod}\ 2n)\), and \(v^{2}\equiv t({\rm mod}\ m),\) if \(t\neq 1\), then \(u\equiv 0({\rm mod}\ 2)\), \(2s\sum_{l=1}^{w}r^{l}\equiv u\sum_{l=1}^{w}(1-s(\sum_{l=1}^{t}r^{l}+r))^{l} \equiv 0({\rm mod}\ 2n)\Leftrightarrow w\equiv 0({\rm mod}\ m).\) **Proof** Noting \(\langle a^{2}\rangle\lhd X\) and \(M=\langle a\rangle\langle c\rangle\leq X\), we have \(X\) may be obtained by three cyclic extension of groups in order: \[\langle a^{2}\rangle\rtimes\langle c\rangle,\quad(\langle a^{2}\rangle\rtimes \langle c\rangle).\langle a\rangle\quad{\rm and}\quad((\langle a^{2}\rangle \rtimes\langle c\rangle).\langle a\rangle)\rtimes\langle b\rangle.\] So \(X\) has the presentation as in Eq(2). What we should to determine the parameters \(r,s,t,u\) and \(v\) by analysing three extensions. (1) \(\langle a^{2}\rangle\rtimes\langle c\rangle\), where \((a^{2})^{c}=a^{2r}\). Set \(\pi_{1}\in{\rm Aut}\,(\langle a^{2}\rangle)\) such that \(\pi_{1}(a^{2})=a^{2r}\). As mentioned before, this extension is valid if and only if \({\rm o}(\pi_{1}(a^{2}))={\rm o}(a^{2})\) and \(\pi_{1}^{m}=1\), that is \[r^{m}-1\equiv 0({\rm mod}\ n). \tag{3}\] (2) \((\langle a^{2}\rangle\rtimes\langle c\rangle).\langle a\rangle\), where \(c^{a}=a^{2s}c^{t}\). Set \(\pi_{2}\in{\rm Aut}\,((\langle a^{2}\rangle\rtimes\langle c\rangle)\): \(a^{2}\to a^{2}\) and \(c\to a^{2s}c^{t}\). This extension is valid if and only if the following three equalities hold: (i) \(\pi_{2}\) preserves \((a^{2})^{c}=a^{2r}\): \[r^{t-1}-1\equiv 0({\rm mod}\ n). \tag{4}\] (ii) \({\rm o}(\pi_{2}(c))=m\): \[(a^{2s}c^{t})^{m}=c^{tm}(a^{2s})^{c^{tm}}\cdots(a^{2s})^{c^{t}}=c^{tm}a^{2s \sum_{l=1}^{m}r^{tl}}=c^{tm}a^{2s\sum_{l=1}^{m}r^{l}}=1,\] that is \[s\sum_{l=1}^{m}r^{l}\equiv 0({\rm mod}\ n). \tag{5}\] (iii) \(\pi_{2}^{2}={\rm Inn}(a^{2}):\) \[ca^{2-2r}={\rm Inn}(a^{2})(c)=\pi_{2}^{2}(c)=(a^{2s}c^{t})^{a}=a^{2s}(a^{2s}c^{ t})^{t}=c^{t^{2}}a^{2sr+2s\sum_{l=1}^{t}r^{l}},\] that is \[t^{2}-1\equiv 0({\rm mod}\ m)\quad{\rm and}\quad s\sum_{l=1}^{t}r^{l}+rs+r-1 \equiv 0({\rm mod}\ n). \tag{6}\] (3) \(((\langle a^{2}\rangle\rtimes\langle c\rangle).\langle a\rangle)\rtimes \langle b\rangle\), where \(c^{b}=a^{u}c^{v}\). Set \(\pi_{3}\in{\rm Aut}\,((\langle a^{2}\rangle\rtimes\langle c\rangle).\langle a \rangle):a\to a^{-1}\) and \(c\to a^{u}c^{v}\). We divide the proof into two cases according to the parity of \(u\), separately. _Case 1: \(u\) is even._ (i) \(\pi_{3}\) preserves \((a^{2})^{c}=a^{2r}\): \[r^{v-1}-1\equiv 0({\rm mod}\ n). \tag{7}\] (ii) \({\rm o}(\pi_{3}(c))=m\): \[1=(a^{u}c^{v})^{m}=c^{vm}a^{u\sum_{l=1}^{m}r^{l}},\] that is \[u\sum_{l=1}^{m}r^{l}\equiv 0({\rm mod}\ 2n). \tag{8}\] (iii) \(\pi_{3}\) preserves \(c^{a}=a^{2s}c^{t}\): that is \[a^{u}c^{v}=(a^{-2s}(a^{u}c^{v})^{t})^{a}=a^{-2s}(a^{u}(a^{2s}c^{t})^{v})^{t}=a ^{-2s}c^{t^{2}v}a^{(ru+2s\sum_{l=1}^{v}r^{l})\sum_{l=0}^{t-1}r^{l}},\] which implies \[r(u+2s)\equiv(ru+2s\sum_{l=1}^{v}r^{l})\sum_{l=0}^{t-1}r^{l}({\rm mod}\ 2n). 
\tag{9}\] By Eq(6), Eq(9) is if and only if \[u\sum_{l=1}^{t}r^{l}-ur-2s\sum_{l=1}^{v}r^{l}-2sr+2(1-r)\equiv 0({\rm mod}\ 2n). \tag{10}\] (iv) \(\pi_{3}^{2}={\rm Inn}(a^{n})\): If \(n\) is even, then \(a^{n}\in\langle a^{2}\rangle\), which implies \((a^{n})^{c}=a^{n}\). If \(n\) is odd, then \(c^{a^{n}}=(c^{a^{n-1}})^{a}=c^{t}a^{(n-1)(1-r)+2sr}\), where \(t^{2}\equiv 1({\rm mod}\ m)\), \(((n-1)(1-r)+2sr)(1+\sum_{l=0}^{t-1}r^{l})\equiv 0({\rm mod}\ 2n)\). Then there are two subcases: _Subcase 1.1: \(n\) is even_. In this case, \(\pi_{3}^{2}={\rm Inn}(a^{n})=1\). Suppose that \(v=1\). Then \(c=c^{b^{2}}=a^{-u}a^{u}c=c\), as desired. Suppose that \(v\neq 1\). Then \[c=c^{b^{2}}=a^{-u}(a^{u}c^{v})^{v}=c^{v}(a^{u}c^{v})^{v-1}=c^{v^{2}}a^{u\sum_{ l=1}^{v-1}r^{l}},\] that is \[u\sum_{l=1}^{v-1}r^{l}\equiv 0(\mbox{mod }2n)\quad\mbox{and}\quad v^{2}-1 \equiv 0(\mbox{mod }m). \tag{11}\] Then \(\pi_{3}^{2}=1\) holds if and only if \[u\sum_{l=1}^{v}r^{l}-ur\equiv 0(\mbox{mod }2n)\quad\mbox{and}\quad v^{2}-1 \equiv 0(\mbox{mod }m). \tag{12}\] _Subcase 1.2: \(n\) is odd._ In this case, \(c^{a^{n}}=(c^{a^{n-1}})^{a}=c^{t}a^{(n-1)(1-r)+2sr}\). Suppose that \(v=1\). Then \(c^{t}a^{(n-1)(1-r)+2sr}=c^{b^{2}}=a^{-u}a^{u}c=c\), which implies \(t=1\). So \(X=G\rtimes C\). Suppose that \(v\neq 1\). Then \[c^{t}a^{(n-1)(1-r)+2sr}=c^{b^{2}}=a^{-u}(a^{u}c^{v})^{v}=c^{v}(a^{u}c^{v})^{v-1 }=c^{v^{2}}a^{u\sum_{l=1}^{v-1}r^{l}},\] that is \[u\sum_{l=1}^{v-1}r^{l}\equiv(n-1)(1-r)+2sr(\mbox{mod }2n)\quad\mbox{and}\quad v^{2} \equiv t(\mbox{mod }m). \tag{13}\] Then \(\pi_{3}^{2}=\mbox{Inn}(a^{n})\) holds if and only if \[u\sum_{l=1}^{v}r^{l}-ur\equiv(n-1)(1-r)+2sr(\mbox{mod }2n)\quad\mbox{and}\quad v^{2}-t\equiv 0(\mbox{mod }m). \tag{14}\] _Case 2: \(u\) is odd._ If \(t=1\), then \(c\) normalises \(\langle a\rangle\), which implies \(\langle a\rangle\lhd X\). By Lemma 4.3, we get \(G\lhd X\). Then \(v=1\). So assume \(t\neq 1\) and we shall get a contradiction. Let \(S=\langle a^{2},c\rangle\). Since \(u\) is odd again, we know that \(\langle a^{2}\rangle\leq S_{X}<S\). Since \(|X:S|=4\), we have \(\overline{X}=X/S_{X}=\langle\overline{c},\overline{a}\rangle_{\cdot}\langle \overline{b}\rangle\lessneq S_{4}\). The only possibility is \(\mbox{o}(\overline{c})=2\) and \(\overline{x}\cong D_{8}\) so that \(m\) is even and \(v\) is odd. Then \(t\) is odd, as \(t^{2}\equiv 1\ (\mbox{mod }m)\). Moreover, we have \(\langle a^{2},c^{2}\rangle=S_{X}\lhd X\). Consider \(\overline{X}=X/\langle a^{2}\rangle=\langle\overline{a},\overline{c}\rangle \rtimes\langle\overline{b}\rangle\), where \(\overline{a}^{\overline{b}}=\overline{a},\overline{b}^{2}=1,\overline{c}^{ \overline{a}}=\overline{c}^{t}\) and \(\overline{c}^{\overline{b}}=\overline{ac}^{v}\). Let \(\pi_{3}\) be defined as above. Since the induced action of \(\pi_{3}\) preserves \(\overline{c}^{\overline{a}}=\overline{c}^{t}\), we have \((\overline{ac}^{v})^{\overline{a}}=(\overline{ac}^{v})^{t}\), that is \[\overline{ac}^{tv}=\overline{ac}^{v}((\overline{ac}^{v})^{2})^{\frac{t-1}{2}} =\overline{ac}^{v}(\overline{c}^{tv+v})^{\frac{t-1}{2}}=\overline{ac}^{v+ \frac{v(t+1)(t-1)}{2}},\] which implies \[tv\equiv v+\frac{v(t+1)(t-1)}{2}(\mbox{mod }m).\] Noting \(t^{2}\equiv 1\pmod{m}\), \(t\neq 1\) and \((v,m)=1\) is odd, we get \[t\equiv 1+\frac{m}{2}(\mbox{mod }m). \tag{15}\] Let \(X_{1}=GS_{X}=\langle a,b\rangle\langle c^{2}\rangle\). By Eq(15), we have \((c^{2})^{a}=(a^{2s}c^{t})^{2}=a^{2s(1+r^{-1})}c^{2}\), which implies \(c^{2}\) that normalises \(\langle a\rangle\). 
By Lemma 4.3, we get \(G\lhd X_{1}\). If \(n\) is odd, then \(X_{1}=\langle a,b\rangle\rtimes\langle c^{2}\rangle\lhd X\), which implies \(\langle a^{n}\rangle\lhd X_{1}\). Since \(a^{n}\) is an involution and \(\langle c^{2}\rangle_{X_{1}}=1\), we get \(Z(X_{1})=\langle a^{n}\rangle\). Then \(\langle a^{n}\rangle\,\mbox{char}\,X_{1}\lhd X\), which implies \(a^{n}\in G_{X}\), that is \(\langle a\rangle\lhd X\). By Lemma 4.3 again, we get \(G\lhd X\), which implies \(t=v=1\), a contradiction. So in what follows, we assume that \(n\) is even. By \(G\lhd X_{1}\), we get \(b^{c^{2}}\in G\). Since \(b^{c^{2}}=c^{-2}a^{n}(b^{-1}c^{2}b)b=c^{-2}a^{n}(a^{u}c^{v})^{2}b=c^{v(t+1)-2}a^{x}b\), for some \(x\), we get \(v(t+1)-2\equiv 0(\mbox{mod }m)\). Combining this with Eq(15), we get \[v\equiv 1\pm\frac{m}{4}(\mbox{mod }\frac{m}{2}),\quad 4\mid m. \tag{16}\] Since \[\overline{c}=\overline{c}^{\overline{b}^{2}}=\overline{a}(\overline{ac}^{v})^{v}=\overline{c}^{v}(\overline{ac}^{v}\overline{ac}^{v})^{\frac{v-1}{2}}=\overline{c}^{v+(tv+v)\frac{v-1}{2}},\] that is \[(v-1)(\frac{v(t+1)}{2}+1)\equiv 0(\mbox{mod }m). \tag{17}\] Then Eq(16) and Eq(17) together give \(\frac{m}{2}\equiv 0(\mbox{mod }m)\), a contradiction. (4) Ensure \(\langle c\rangle_{X}=1\): If \(t=1\), then \(v=1\) and \(1-2sr\equiv r(\mbox{mod }n)\) by Eq(6). For any integer \(w\), \[(c^{w})^{a}=(a^{2s}c)^{w}=c^{w}a^{2s\sum_{l=1}^{w}r^{l}}\quad\mbox{and}\quad(c^{w})^{b}=(a^{u}c)^{w}=c^{w}a^{u\sum_{l=1}^{w}(1-2sr)^{l}}.\] Since \(\langle c\rangle_{X}=1\), we know that \(2s\sum_{l=1}^{w}r^{l}\equiv 0\equiv u\sum_{l=1}^{w}(1-2sr)^{l}(\mbox{mod }2n)\Leftrightarrow w\equiv 0(\mbox{mod }m)\). If \(t\neq 1\), then \(u\) is even, and for any integer \(w\), \[(c^{w})^{a}=(a^{2s}c^{t})^{w}=c^{tw}a^{2s\sum_{l=1}^{w}r^{l}}\quad\mbox{and}\quad(c^{w})^{b}=(a^{u}c^{v})^{w}=c^{vw}a^{u\sum_{l=1}^{w}r^{l}}.\] Since \(\langle c\rangle_{X}=1\), we know that \(2s\sum_{l=1}^{w}r^{l}\equiv 0\equiv u\sum_{l=1}^{w}r^{l}(\mbox{mod }2n)\Leftrightarrow w\equiv 0(\mbox{mod }m)\). Summarizing Eq(3)-Eq(14), we get the parameters \((m,n,r,s,t,u,v)\) as shown in the lemma. Moreover, since each of the above group extensions is carried out under necessary and sufficient conditions, for any given parameters satisfying the equations, there exists \(X=X(Q)\).

### \(M=\langle a^{2}\rangle\langle c\rangle\) and \(X/M_{X}\cong D_{8}\)

**Lemma 5.2**: _Suppose that \(X=X(Q)\), \(M=\langle a^{2}\rangle\langle c\rangle\), \(X/M_{X}\cong D_{8}\) and \(\langle c\rangle_{X}=1\). Then_ \[X=\langle a,b,c|R,(a^{2})^{c^{2}}=a^{2r},(c^{2})^{a}=a^{2s}c^{2t},(c^{2})^{b}=a^{2u}c^{2},a^{c}=bc^{2w}\rangle,\] _where either \(w=0\) and \(r=s=t=u=1\); or_ \[\begin{array}{l}w\neq 0,\,s=u^{2}\sum_{l=0}^{w-1}r^{l}+\frac{un}{2},\,t=2wu+1,\\ r^{2w}-1\equiv(u\sum_{l=1}^{w}r^{l}+\frac{n}{2})^{2}-r\equiv 0({\rm mod}\ n),\\ s\sum_{l=1}^{t}r^{l}+sr\equiv 2sr-u\sum_{l=1}^{t}r^{l}+ur\equiv 1-r({\rm mod}\ n),\\ 2w(1+uw)\equiv nw\equiv 2w(r-1)\equiv 0({\rm mod}\ \frac{m}{2}),\\ 2^{\frac{1+(-1)^{u}}{2}}\sum_{l=1}^{i}r^{l}\equiv 0({\rm mod}\ n)\Leftrightarrow i\equiv 0({\rm mod}\ \frac{m}{2}).\end{array}\] **Proof** Under the hypothesis, \(M_{X}=\langle a^{2}\rangle\rtimes\langle c^{2}\rangle\). Set \(2n={\rm o}(a)\) and \(m={\rm o}(c)\). If \(n\) is odd, then \(\langle\overline{a},\overline{b}\rangle\cong\mathbb{Z}_{4}\), a contradiction. So both \(n\) and \(m\) are even.
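To spell out the step just used (a short verification relying only on \(b^{2}=a^{n}\) from \(R\) and \(a^{2}\in M_{X}\)): if \(n\) were odd, then modulo \(M_{X}\) \[\overline{b}^{\,2}=\overline{a^{n}}=\overline{a}\cdot\overline{(a^{2})^{\frac{n-1}{2}}}=\overline{a},\] so \(\langle\overline{a},\overline{b}\rangle=\langle\overline{b}\rangle\) would be cyclic, which is the isomorphism \(\langle\overline{a},\overline{b}\rangle\cong\mathbb{Z}_{4}\) used above to exclude this case.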
Since \(X/M_{X}=\langle\overline{a},\overline{b}\rangle\langle\overline{c}\rangle \cong D_{8}\), we can choose \(\overline{b}\) such that the form of \(X/M_{X}\) is the following: \(\overline{a}^{\overline{c}}=\overline{b}\) and \(\overline{b}^{\overline{c}}=\overline{a}.\) Set \(c_{1}:=c^{2}\) and \(X_{1}=GM_{X}=\langle a,b\rangle\langle c_{1}\rangle\). Noting \(\langle a\rangle\langle c_{1}\rangle\leq X_{1}\) and \(\langle c_{1}\rangle_{X_{1}}=1\), by Lemma 5.1, we get \[X_{1}=\langle a,b,c_{1}|R,(a^{2})^{c_{1}}=a^{2r},c_{1}^{a}=a^{2s}c_{1}^{t},c_{ 1}^{b}=a^{2u}c_{1}^{v}\rangle\] whose \[\begin{array}{l}r^{t-1}\equiv r^{v-1}\equiv 1({\rm mod}\ n),\,t^{2}\equiv v^{2 }\equiv 1({\rm mod}\ \frac{m}{2}),\\ 2s\sum_{l=1}^{t}r^{l}+2sr\equiv 2sr+2s\sum_{l=1}^{v}r^{l}-u\sum_{l=1}^{t}r^{l} +ur\equiv 2(1-r)({\rm mod}\ 2n),\\ u(\sum_{l=0}^{v-1}r^{l}-1)\equiv 0({\rm mod}\ 2n),\\ s\sum_{l=1}^{i}r^{l}\equiv u\sum_{l=1}^{i}r^{l}\equiv 0({\rm mod}\ n) \Leftrightarrow i\equiv 0({\rm mod}\ \frac{m}{2}).\end{array} \tag{18}\] Moreover, since \(n\) is even and \(\langle c_{1}\rangle_{X_{1}}=1\), we get that \(b^{2}=a^{n}\) is the unique involution of \(Z(X_{1})\). Then \(a^{n}\in Z(X)\), that is \([b^{2},c]=[a^{n},c]=1\). Now \(X=X_{1}.\langle c\rangle\). Set \(a^{c}=bc_{1}^{w}\). Then \(X\) may be defined by \(R\) and \[(a^{2})^{c_{1}}=a^{2r},\,c_{1}^{a}=a^{2s}c^{2t},\,c_{1}^{b}=a^{2u}c^{2v},\,a^ {c}=bc_{1}^{w}. \tag{19}\] If \(w\equiv 0({\rm mod}\ \frac{m}{2})\), then \({\rm o}(a)={\rm o}(a^{c})={\rm o}(b)=4\), which implies \(G\cong Q_{8}\), and one can check \(X\) is isomorphic to the following form: \[X=\langle a,b,c|a^{4}=c^{4}=1,b^{2}=a^{2},a^{b}=a^{-1},a^{c}=b,b^{c}=a^{-1}\rangle,\] that is the former part of Lemma 5.2. So in that follows, we assume \(w\not\equiv 0({\rm mod}\ \frac{m}{2})\). Firstly, we get \(b^{c}=a^{c^{2}}c^{-2w}=c_{1}^{w+t-1}a^{2sr-1+n}.\) Set \(\pi\in{\rm Aut}\,(X_{1}):a\to bc_{1}^{w}\), \(b\to a^{1-2sr}c_{1}^{1-t-w}\) and \(c_{1}\to c_{1}\). We need to carry out the following seven steps: (i) \({\rm o}(\pi(b))=4:\) Since \(b^{2}\in Z(X)\), we only show \((b^{c})^{2}=a^{n}\): \[(c^{2(w+t-1)}a^{2sr-1+n})^{2}=c^{2w(t+1)}a^{2sr^{w+1}+2s\sum_{l=1}^{w+t-1}r^{ l}+2sr-2}=a^{n},\] that is \[w(t+1)\equiv 0({\rm mod}\ \frac{m}{2})\quad{\rm and}\quad sr^{w+1}+s\sum_{l=1}^{ w+t-1}r^{l}+sr-1\equiv\frac{n}{2}({\rm mod}\ n), \tag{20}\] which implies \[r^{2w}\equiv r^{w(t+1)}\equiv 1({\rm mod}\ n). \tag{21}\] (ii) \({\rm o}(\pi(a))=2n\): \[(bc^{2w})^{2n}=(c^{2w(v+1)}a^{2u\sum_{l=w+1}^{2w}r^{l}+n})^{n}=c^{2nw(v+1)}a^{2 nu\sum_{l=w+1}^{2w}r^{l}}=1,\] that is \[nw(v+1)\equiv 0({\rm mod}\ \frac{m}{2}). \tag{22}\] (iii) \(\pi\) preserves \((a^{2})^{c_{1}}=a^{2r}\): \[((a^{2})^{c^{2}})^{c}=c^{2w(v+1)}a^{2ur\sum_{l=w+1}^{2w}r^{l}+n} \quad{\rm and}\quad(a^{2r})^{c}=c^{2wr(v+1)}a^{2ur\sum_{l=w+1}^{2w}r^{l}+n},\] that is \[w(v+1)(r-1)\equiv 0({\rm mod}\ \frac{m}{2}). \tag{23}\] (iv) \(\pi\) preserves \(c_{1}^{a}=a^{2s}c_{1}^{t}\): \[((c^{2})^{a})^{c} = (c^{2})^{bc^{2w}}=(a^{2u}c^{2v})^{c^{2w}}=a^{2ur^{w}}c^{2v}=c^{2v }a^{2ur^{w+1}},\] \[(a^{2s}c^{2t})^{c} = (c^{2w(v+1)}a^{2u\sum_{l=w+1}^{2w}r^{l}+n})^{s}c^{2t}=c^{2ws(v+1) +2t}a^{2sru\sum_{l=w+1}^{2w}r^{l}+ns},\] that is \[v\equiv ws(v+1)+t({\rm mod}\ \frac{m}{2})\quad{\rm and}\quad u\equiv su\sum_{l =1}^{w}r^{l}+\frac{ns}{2}({\rm mod}\ n). 
\tag{24}\] (v) \(\pi\) preserves \(c_{1}^{b}=a^{2u}c_{1}^{v}\): \[((c^{2})^{b})^{c} = (c^{2})^{a^{1-2sr}c^{2-2t-2w}}=c^{2t}a^{2sr^{2-w}}\] \[(a^{2u}c^{2v})^{c} = (c^{2w(v+1)}a^{2u\sum_{l=w+1}^{2w}r^{l}+n})^{u}c^{2v}=c^{2wu(v+1) +2v}a^{2u^{2}\sum_{l=w+2}^{2w+1}r^{l}+un},\] that is, \[t\equiv wu(v+1)+v({\rm mod}\ \frac{m}{2})\quad{\rm and}\quad s\equiv u^{2} \sum_{l=0}^{w-1}r^{l}+\frac{un}{2}({\rm mod}\ n). \tag{25}\] (vi) \(\pi^{2}={\rm Inn}(c_{1})\): recall \({\rm Inn}(c_{1})(a)=a^{1-2sr}c_{1}^{1-t}\), \({\rm Inn}(c_{1})(a^{2})=a^{2r}\) and \({\rm Inn}(c_{1})(b)=c_{1}^{v-1}a^{2ur}b\). \[a^{1-2sr}c^{2-2t}={\rm Inn}(c_{1})(a)=\pi^{2}(a)=b^{c}c^{2w}=a^{1-2sr}c^{2-2t-2 w+2w},\] as desired; \[a^{2r}={\rm Inn}(c_{1})(a^{2})=\pi^{2}(a^{2})=(c^{2w(v+1)}a^{2u\sum_{l=w+1}^{2 w}r^{l}+n})^{c}=c^{2w(v+1)(1+uw+\frac{n}{2})}a^{2(u\sum_{l=1}^{w}r^{l}+\frac{n}{2} )^{2}},\] that is \[w(v+1)(1+uw+\frac{n}{2})\equiv 0({\rm mod}\ \frac{m}{2})\quad{\rm and}\quad r \equiv(u\sum_{l=1}^{w}r^{l}+\frac{n}{2})^{2}({\rm mod}\ n), \tag{26}\] and noting Eq (23) and (24), we get \(w(v+1)(r-1)\equiv ws(v+1)+t-v\equiv 0({\rm mod}\ \frac{m}{2})\) and \(u\equiv su\sum_{l=1}^{w}r^{l}+\frac{ns}{2}({\rm mod}\ n)\). Then \[c^{2(v-1)}a^{2ur}b={\rm Inn}(c_{1})(b)=\pi^{2}(b)=c^{2(t-1+wsr(v+1))}a^{2usr \sum_{l=1}^{w}r^{l}+(s+1)n}b,\] as desired. (vii) Insure \(\langle c\rangle_{X}=1\): Since \(\langle c\rangle_{X}\leq M\), we get \(\langle c\rangle_{X}\leq M_{X}=\langle a^{2}\rangle\langle c^{2}\rangle\). Then \(\langle c\rangle_{X}=\cap_{x\in X}C^{x}=\cap_{x\in G}C^{x}=\cap_{x\in G}\langle c ^{2}\rangle^{x}=\langle c^{2}\rangle_{X_{1}}=1\). Recall \(2s\sum_{l=1}^{i}r^{l}\equiv u\sum_{l=1}^{i}(1-s(\sum_{l=1}^{l}r^{l}+r))^{l} \equiv 0({\rm mod}\ 2n)\Leftrightarrow i\equiv 0({\rm mod}\ \frac{m}{2})\). Now we are ready to determine the parameters by summarizing Eq(18)-Eq(26). Firstly, we shall show \(v=1\). Suppose that \(u\) is odd. Then by Eq(26), we get \((u,n)=(\sum_{l=1}^{w}r^{l},\frac{n}{2})=1\) as \((r,n)=1\). Moreover, if \(\frac{n}{2}\) is odd, then \(\sum_{l=1}^{w}r^{l}\) is even as \(r\) is odd. Then by Eq(25), we get \(s\equiv u^{2}\sum_{l=0}^{w-1}r^{l}+\frac{n}{2}({\rm mod}\ n)\), which implies \((s,n)=1\). Then we have \(\sum_{l=1}^{i}r^{l}\equiv 0({\rm mod}\ n)\Leftrightarrow i\equiv 0({\rm mod}\ \frac{m}{2})\) by (vii) and \(\sum_{l=0}^{v-1}r^{l}\equiv 1({\rm mod}\ n)\) from Eq(18). Then \(v\equiv 1({\rm mod}\ \frac{m}{2})\). Suppose that \(u\) is even. Then by Eq(26), we get \((u,\frac{n}{2})=(\sum_{l=1}^{w}r^{l},\frac{n}{2})=1\). Then by Eq(25), we get \(s\equiv u^{2}\sum_{l=0}^{w-1}r^{l}({\rm mod}\ n)\), which implies \(s\) is even. Then by Eq(24), we get \(u\equiv su\sum_{l=1}^{w}r^{l}({\rm mod}\ n)\). Then we have \(\sum_{l=1}^{i}r^{l}\equiv 0({\rm mod}\ \frac{n}{2})\Leftrightarrow i\equiv 0({\rm mod }\ \frac{m}{2})\) by (vii) and \(\sum_{l=0}^{v-1}r^{l}\equiv 1({\rm mod}\ \frac{n}{2})\) from Eq(18). Then \(v\equiv 1({\rm mod}\ \frac{m}{2})\). Inserting \(v=1\) in Eq(18)-Eq(26), we get \(s=u^{2}\sum_{l=0}^{w-1}r^{l}+\frac{n}{2}\) and \(t=2wu+1\) in Eq(25); \(nw\equiv 0({\rm mod}\ \frac{m}{2})\) in Eq(20), (22) and (26); and \(2w(r-1)\equiv 2w(1+uw)\equiv 0({\rm mod}\ \frac{m}{2})\) in Eq(23) and (26). All these are summarized in the lemma. \(\Box\) ### \(M=\langle a^{2}\rangle\langle c\rangle\) and \(X/M_{X}\cong A_{4}\) **Lemma 5.3**: _Suppose that \(X=X(Q)\), \(M=\langle a^{2}\rangle\langle c\rangle\), \(X/M_{X}\cong A_{4}\) and \(\langle c\rangle_{X}=1\). 
Then_ \[X=\langle a,b,c|R,(a^{2})^{c}=a^{2r},(c^{3})^{a}=a^{2s}c^{3},(c^{3})^{b}=a^{2u}c^{3},a^{c}=bc^{\frac{im}{2}},b^{c}=a^{x}b\rangle,\] _where \(n\equiv 2(\mbox{\rm mod }4)\) and either \(i=s=u=0\) and \(r=x=1\); or \(i=1\), \(6\mid m\), \(r^{\frac{m}{2}}\equiv-1(\mbox{\rm mod }n)\) with \(\mbox{\rm o}(r)=m\), \(s\equiv\frac{r^{-3}-1}{2}(\mbox{\rm mod }\frac{n}{2})\), \(u\equiv\frac{r^{3}-1}{2r^{2}}(\mbox{\rm mod }\frac{n}{2})\) and \(x\equiv-r+r^{2}+\frac{n}{2}(\mbox{\rm mod }n)\)._ **Proof** Under the hypothesis, \(M_{X}=\langle a^{2}\rangle\rtimes\langle c^{3}\rangle\). Set \(2n=\mbox{\rm o}(a)\) and \(m=\mbox{\rm o}(c)\). If \(n\) is odd, then \(\langle\overline{a},\overline{b}\rangle\cong\mathbb{Z}_{4}\), a contradiction. So \(n\) is even and \(3\mid m\). Since \(X/M_{X}=\langle\overline{a},\overline{b}\rangle\langle\overline{c}\rangle\cong A_{4}\), we can choose \(\overline{b}\) such that the form of \(X/M_{X}\) is the following: \(\overline{a}^{\overline{c}}=\overline{b}\) and \(\overline{b}^{\overline{c}}=\overline{a}\overline{b}.\) Set \(c_{1}:=c^{3}\) and \(X_{1}=GM_{X}=\langle a,b\rangle\langle c_{1}\rangle\). By Lemma 5.1, we get \[X_{1}=\langle a,b,c^{3}|R,(a^{2})^{c_{1}}=a^{2r},(c_{1})^{a}=a^{2s}c_{1}^{t},(c_{1})^{b}=a^{2u}c_{1}^{v}\rangle\] where \[\begin{array}{l}r^{t-1}-1\equiv r^{v-1}-1\equiv u(\sum_{l=0}^{v-1}r^{l}-1)\equiv 0(\mbox{\rm mod }n),\\ s\sum_{l=1}^{t}r^{l}+sr\equiv sr+s\sum_{l=1}^{v}r^{l}-u\sum_{l=1}^{t}r^{l}+ur\equiv 1-r(\mbox{\rm mod }n),\\ t^{2}-1\equiv v^{2}-1\equiv 0(\mbox{\rm mod }\frac{m}{3}),\\ s\sum_{l=1}^{i}r^{l}\equiv u\sum_{l=1}^{i}r^{l}\equiv 0(\mbox{\rm mod }n)\Leftrightarrow i\equiv 0(\mbox{\rm mod }\frac{m}{3}).\end{array} \tag{27}\] Moreover, since \(n\) is even and \(\langle c_{1}\rangle_{X_{1}}=1\), we get that \(b^{2}=a^{n}\) is the unique involution of \(Z(X_{1})\), that is \([b^{2},c]=[a^{n},c]=1\). Now \(X=X_{1}.\langle c\rangle\). Set \(a^{c}=bc_{1}^{w}\). Then \(X\) may be defined by \(R\) and \[(a^{2})^{c_{1}}=a^{2r},(c_{1})^{a}=a^{2s}c_{1}^{t},(c_{1})^{b}=a^{2u}c_{1}^{v},\,a^{c}=bc_{1}^{w},b^{c}=a^{1+2x}bc_{1}^{y}. \tag{28}\] If \(w\equiv 0(\mbox{\rm mod }\frac{m}{3})\), then \(\mbox{\rm o}(a)=\mbox{\rm o}(a^{c})=\mbox{\rm o}(b)=4\), which implies \(G\cong Q_{8}\), and one can check that \(X\) is isomorphic to the following form: \[X=\langle a,b,c|a^{4}=c^{3}=1,b^{2}=a^{2},a^{b}=a^{-1},a^{c}=b,b^{c}=ab\rangle,\] which gives the former part of Lemma 5.3. So in what follows, we assume \(w\not\equiv 0(\mbox{\rm mod }\frac{m}{3})\). What remains is to determine the parameters \(r,s,t,u,v,w,x\) and \(y\) by analysing the last extension \(X_{1}.\langle c\rangle\), where \(a^{c}=bc_{1}^{w}\) and \(b^{c}=a^{1+2x}bc_{1}^{y}\). Set \(\pi\in\mbox{\rm Aut}\,(X_{1}):a\to bc_{1}^{w}\), \(b\to a^{1+2x}bc_{1}^{y},\quad c_{1}\to c_{1}.\) We need to carry out the following seven steps: (i) \(\mbox{\rm o}(\pi(b))=4:\) Since \(b^{2}\in Z(X)\), we only need to show \((b^{c})^{2}=a^{n}\): \[(a^{1+2x}bc^{3y})^{2}=c^{3ty+3y}a^{2ur^{3}\sum_{l=1}^{ty}r^{l}+2xr^{y}(r^{y}-1)-2sr^{y}\sum_{l=1}^{y}r^{l}+n}=a^{n},\] that is \[y(tv+1)\equiv 0(\mbox{\rm mod }\frac{m}{3})\quad\mbox{\rm and}\quad u\sum_{l=1}^{ty}r^{l}+xr^{y}-x-s\sum_{l=1}^{y}r^{l}\equiv 0(\mbox{\rm mod }n), \tag{29}\] which implies \[r^{2y}\equiv r^{y(tv+1)}\equiv 1(\mbox{\rm mod }n).
\tag{30}\] (ii) o\((\pi(ab))=4\): Since \((ab)^{2}=a^{n}\in Z(X)\), we only show \(((ab)^{c})^{2}=a^{n}\): \[a^{n} = ((ab)^{c})^{2}=(c^{3ww+3yt}a^{2ur^{y}\sum_{l=1}^{w}r^{l}-2r^{y}x+2s \sum_{l=1}^{y}r^{l}-1+n})^{2}\] \[= c^{3(vw+yt+tvw+y)}a^{2((r^{w+y}+1)(ur^{y}\sum_{l=1}^{w}r^{l}-r^{ y}x+s\sum_{l=1}^{y}r^{l})+s\sum_{l=1}^{vw+yt}r^{l}-1)},\] that is \[\begin{array}{l}vw+yt+tvw+y\equiv 0({\rm mod}\ \frac{m}{3});\\ (r^{w+y}+1)(ur^{y}\sum_{l=1}^{w}r^{l}-r^{y}x+s\sum_{l=1}^{y}r^{l})+s\sum_{l=1}^ {vw+yt}r^{l}-1\equiv\frac{n}{2}({\rm mod}\ n),\end{array} \tag{31}\] which implies \(r^{2w}\equiv 1({\rm mod}\ \frac{n}{2})\). (iii) o\((\pi(a))=2n\): \[(bc^{3w})^{n}=(bc^{3w}bc^{3w})^{n}=(c^{3w(v+1)}a^{2u\sum_{l=w+1}^{2w}r^{l}+n}) ^{n}=c^{3nw(v+1)}a^{2nur^{w}\sum_{l=1}^{w}r^{l}}=1,\] that is \[\begin{array}{l}nw(v+1)\equiv 0({\rm mod}\ \frac{m}{3}).\end{array} \tag{32}\] (iv) \(\pi\) preserves \((a^{2})^{c^{3}}=a^{2r}\): \[\begin{array}{l}((a^{2})^{c^{3}})^{c}=(c^{3w(v+1)}a^{2u\sum_{l=w+1}^{2w}r^{l }+n})^{c^{3}}=c^{3w(v+1)}a^{2ur\sum_{l=w+1}^{2w}r^{l}+n},\\ (a^{2r})^{c}=(c^{3w(v+1)}a^{2u\sum_{l=w+1}^{2w}r^{l}+n})^{r}=c^{3wr(v+1)}a^{2ur \sum_{l=w+1}^{2w}r^{l}+n},\end{array}\] that is \[\begin{array}{l}w(v+1)(r-1)\equiv 0({\rm mod}\ \frac{m}{3}).\end{array} \tag{33}\] (v) \(\pi\) preserves \((c^{3})^{a}=a^{2s}c^{3t}\): \[\begin{array}{l}((c^{3})^{a})^{c}=(c^{3})^{bc^{3w}}=(a^{2u}c^{3v})^{c^{3w}}= c^{3v}a^{2ur^{w+1}},\\ (a^{2s}c^{3t})^{c}=(c^{3w(v+1)}a^{2u\sum_{l=w+1}^{2w}r^{l}+n})^{s}c^{3t}=c^{3ws (v+1)+3v}a^{2sur^{w+1}\sum_{l=1}^{w}r^{l}+sn},\end{array}\] that is \[\begin{array}{l}v\equiv ws(v+1)+t({\rm mod}\ \frac{m}{3})\quad{\rm and}\quad u \equiv su\sum_{l=1}^{w}r^{l}+\frac{sn}{2}({\rm mod}\ n).\end{array} \tag{34}\] (vi) \(\pi\) preserves \((c^{3})^{b}=a^{2u}c^{3v}\): \[\begin{array}{l}((c^{3})^{b})^{c}=(c^{3})^{a^{1+2x}b}=(c^{3}a^{2x(1-r)})^{ ab}=c^{3tv}a^{2(u\sum_{l=1}^{t}r^{l}-sr+x(r-1))},\\ (a^{2u}c^{3v})^{c}=(c^{3w(v+1)}a^{2u\sum_{l=w+1}^{2w}r^{l}+n})^{u}c^{3v}=c^{3wu( v+1)+3v}a^{2u^{2}r\sum_{l=w+1}^{2w}r^{l}+un},\end{array}\] that is \[\begin{array}{l}tv\equiv wu(v+1)+v({\rm mod}\ \frac{m}{3}),\\ u\sum_{l=1}^{t}r^{l}-sr+x(r-1)\equiv u^{2}r^{w+1}\sum_{l=1}^{w}r^{l}+\frac{un}{ 2}({\rm mod}\ n).\end{array} \tag{35}\] (vii) \(\pi^{3}={\rm Inn}(c_{1})\): Recall \({\rm Inn}(c_{1})(a)=a^{1-2sr}c^{3-3t}\), \({\rm Inn}(c_{1})(a^{2})=a^{2r}\) and \({\rm Inn}(c_{1})(b)=c^{3(v-1)}a^{2ur}b\). \[a^{1-2sr}c^{3-3t}={\rm Inn}(c_{1})(a)=\pi^{3}(a)=(a^{1+2x}bc^{3( w+y)})^{c}\] \[=a^{n-1}c^{3vtw(1+x(v+1))+3(w+2y)}a^{2ur^{w}\sum_{l=1}^{tw(1+x(v+ 1))}r^{l}-r^{w}(2s\sum_{l=1}^{w(1+x(v+1))}r^{l}+2ur\sum_{l=w+1}^{2w}r^{l}+xn+2 x)},\] that is \[\begin{array}{l}1-t\equiv t(wv+wxv+wx)+w+2y({\rm mod}\ \frac{m}{3}),\\ r^{w}(u\sum_{l=1}^{tw(1+x(v+1))}r^{l}-s\sum_{l=1}^{w(1+x(v+1))}r^{l}-ux\sum_{ l=w+1}^{2w}r^{l}-\frac{(x+1)n}{2}-x)\\ \equiv 1-sr({\rm mod}\ n);\end{array} \tag{36}\] \[\begin{array}{l}a^{2r}&=\ {\rm Inn}(c_{1})(a^{2})=\pi^{3}(a^{2})=(c^{3w(v+ 1)}a^{2ur^{w}\sum_{l=1}^{w}r^{l}+n})^{c^{2}}\\ &=\ c^{3w(v+1)+3uw^{2}(v+1)(1+uw)}a^{2r^{w}(u\sum_{l=1}^{w}r^{l})^{3}}a^{n}, \end{array}\] that is \[w(v+1)+uw^{2}(v+1)(uw+1)\equiv 0({\rm mod}\ \frac{m}{3})\quad{\rm and}\quad r \equiv r^{w}(u\sum_{l=1}^{w}r^{l})^{3}+\frac{n}{2}({\rm mod}\ n), \tag{37}\] which implies \((u,\frac{n}{2})=(\sum_{l=1}^{w}r^{l},\frac{n}{2})=1\) as \((r,n)=1\), and moreover, if \(u\) is even, then \(\frac{n}{2}\) is odd as \(r\) is odd. 
Noting Eq(34), that is \(u\equiv su\sum_{l=1}^{w}r^{l}+\frac{sn}{2}({\rm mod}\ n)\), we get that \((s,\frac{n}{2})=1\) and \[c^{3(v-1)}a^{2ur}b = {\rm Inn}(c_{1})(b)=\pi^{3}(b)=(c^{3(w+t-1)}a^{2sr-1+n})^{c}\] \[= c^{3(t-1+ws(v+1))}a^{2usr\sum_{l=1}^{w}r^{l}+sn}b,\] that is \[v\equiv t+ws(v+1)({\rm mod}\ \frac{m}{3}). \tag{38}\] (viii) Insure \(\langle c\rangle_{X}=1\): Since \(\langle c\rangle_{X}\leq M\), we get \(\langle c\rangle_{X}\leq M_{X}=\langle a^{2}\rangle\langle c^{3}\rangle\). Since \(\langle c\rangle_{X}=\cap_{x\in X}C^{x}=\cap_{x\in G}C^{x}=\cap_{x\in G}\langle c ^{3}\rangle^{x}=\langle c^{3}\rangle_{X_{1}}=1\) and \(s\sum_{l=1}^{i}r^{l}\equiv u\sum_{l=1}^{i}r^{l}\equiv 0({\rm mod}\ n) \Leftrightarrow i\equiv 0({\rm mod}\ \frac{m}{3}).\) Noting that \((s,\frac{n}{2})=(u,\frac{n}{2})=1\) and both \(u\) and \(s\) are even only if \(\frac{n}{2}\) is odd, we have \(2^{\frac{1+(-1)^{u}}{2}}\sum_{l=1}^{i}r^{l}\equiv 0({\rm mod}\ n) \Leftrightarrow i\equiv 0({\rm mod}\ \frac{m}{3}).\) Now we are ready to determine the parameters by summarizing Eq(27)-Eq(38). Then we shall divide it into three steps: _Step 1: \(t=v=1\), \(w=\frac{m}{6}\), \(r^{w}\equiv-1({\rm mod}\ n)\) and \(s\equiv\frac{1-r}{2r}({\rm mod}\ \frac{n}{2})\)._ Since \((r,n)=(u,\frac{n}{2})=1\) (after Eq(37)), we get from Eq(27) that \(2^{\frac{1+(-1)^{u}}{2}}\sum_{l=1}^{v-1}r^{l}\equiv 0(\mbox{mod }n).\) By (viii), \(2^{\frac{1+(-1)^{u}}{2}}\sum_{l=1}^{i}r^{l}\equiv 0(\mbox{mod }n)\Leftrightarrow i \equiv 0(\mbox{mod }\frac{m}{3}),\) which means \(v\equiv 1(\mbox{mod }\frac{m}{3}).\) Inserting \(v=1\) in Eq(27)-Eq(38), we get that \(2w(wu+1)\equiv 0(\mbox{mod }\frac{m}{3})\) and \(t\equiv 1+2wu\equiv 1-2ws(\mbox{mod }\frac{m}{3})\) in Eq(31), (34) and (35). Then \(2w\equiv 0(\mbox{mod }\frac{m}{3})\) by Eq(37), which implies \(w=\frac{m}{6}\) as \(w\not\equiv 0(\mbox{mod }\frac{m}{3})\). Inserting \(w=\frac{m}{6}\) in Eq(27)-Eq(38) again, we get \(t\equiv 1(\mbox{mod }\frac{m}{3})\) in Eq(35), \(s\equiv\frac{1-r}{2r}(\mbox{mod }\frac{n}{2})\) in Eq(27) and \(r^{w}\equiv-1(\mbox{mod }n)\) in Eq(27) and (37). _Step 2: \(y=0\)_ Since \(2y\equiv 0(\mbox{mod }\frac{m}{3})\) in Eq(29), we know that \(y\) is either \(0\) or \(\frac{m}{6}.\) Suppose that \(y=\frac{m}{6}=w\). Then by Eq(31) we get that \(\frac{n}{2}\) is odd, and with Eq(29) and (31), we get \(2x\equiv(u-s)\sum_{l=1}^{w}r^{l}\equiv\frac{n}{2}-1(\mbox{mod }n)\). By Eq(34) and \((u,\frac{n}{2})=1,\) we get \(2s\sum_{l=1}^{w}r^{l}\equiv 1+\frac{sn}{2}(\mbox{mod }n),\) then \(u\sum_{l=1}^{w}r^{l}\equiv 0(\mbox{mod }\frac{n}{2}),\) contradict to \((u,\frac{n}{2})=(\sum_{l=1}^{w}r^{l},\frac{n}{2})=1\). So \(y=0.\) _Step3: Determine \(u\) and \(x\)._ By Eq(31), we get \(s\sum_{l=1}^{w}r^{l}\equiv 1+\frac{n}{2}(\mbox{mod }n).\) Then \(\frac{(s+u)n}{2}\equiv 0(\mbox{mod }n)\) in Eq(34), which implies \(u\equiv s(\mbox{mod }2)\). By Eq(35), we get \(x(r-1)\equiv(s-u)r-u^{2}r\sum_{l=1}^{w}r^{l}+\frac{un}{2}(\mbox{mod }n).\) If \(\frac{n}{2}\) is even, then \(u\sum_{l=1}^{w}r^{l}\) is even. But in Eq(37), we get \(r\equiv-(u\sum_{l=1}^{w}r^{l})^{3}+\frac{n}{2}(\mbox{mod }n)\) which implies that \(u\sum_{l=1}^{w}r^{l}\) is odd as both \(r\) and \(\frac{n}{2}\) are odd, a contradiction. So \(\frac{n}{2}\) is odd. Then we get \(u\sum_{l=1}^{w}r^{l}\) is even in Eq(37) and \(u\) is even in Eq(35). Then \(\sum_{l=1}^{i}r^{l}\equiv 0(\mbox{mod }\frac{n}{2})\Leftrightarrow i \equiv 0(\mbox{mod }\frac{m}{3})\). 
Recall \(s\equiv\frac{r^{-1}-1}{2}(\mbox{mod }\frac{n}{2})\) in Eq(27) and \(s\sum_{l=1}^{w}r^{l}\equiv 1+\frac{n}{2}(\mbox{mod }n)\) in Eq(31). Since \((\sum_{l=1}^{w}r^{l},\frac{n}{2})=1\) and \(x(r-1)\equiv(s-u)r-u^{2}r\sum_{l=1}^{w}r^{l}(\mbox{mod }n),\) we get \(2x\equiv u\sum_{l=1}^{w}r^{l}+(u\sum_{l=1}^{w}r^{l})^{2}-1+\frac{n}{2}(\mbox{ mod }n)\). And by Eq(37), we get \(-r\equiv(u\sum_{l=1}^{w}r^{l})^{3}+\frac{n}{2}(\mbox{mod }n).\) Take \(l=-u\sum_{l=1}^{w}r^{l}+\frac{n}{2},\) then \(l\equiv-\frac{2ru}{1-r}(\mbox{mod }\frac{n}{2}),\)\(r\equiv l^{3}(\mbox{mod }n),\)\(u\equiv\frac{l^{3}-1}{2l^{2}}(\mbox{mod }\frac{n}{2})\) and \(1+2x\equiv-l+l^{2}+\frac{n}{2}(\mbox{mod }n)\). Let us re-write \(l\) as \(r\) and \(1+2x\) as \(x\) for the sake of formatting. Then \(s\equiv\frac{r^{-3}-1}{2}(\mbox{mod }\frac{n}{2})\), \(u\equiv\frac{r^{3}-1}{2r^{2}}(\mbox{mod }\frac{n}{2})\) and \(x\equiv-r+r^{2}+\frac{n}{2}(\mbox{mod }n).\)\(\Box\) In fact, if we add the conditions \(t=1\) and \(w\neq 0\) and delete \(\langle c\rangle_{X}=1\) in the above calculation, then we can get the following: **Lemma 5.4**: _With the notation, suppose that \(t=1\) and \(w\neq 0\). Then_ \[X=\langle a,b,c|R,(a^{2})^{c}=a^{2r},(c^{3})^{a}=a^{2s}c^{3},(c^{3})^{b}=a^{2u} c^{3},a^{c}=bc^{\frac{m}{2}},b^{c}=a^{x}b\rangle,\] _where \(n\equiv 2(\mbox{mod }4)\), \(m\equiv 0(\mbox{mod }6)\), \(r^{\frac{m}{2}}\equiv-1(\mbox{mod }n)\), \(s\equiv\frac{r^{-3}-1}{2}(\mbox{mod }\frac{n}{2})\), \(u\equiv\frac{r^{3}-1}{2r^{2}}(\mbox{mod }\frac{n}{2})\) and \(x\equiv-r+r^{2}+\frac{n}{2}(\mbox{mod }n)\)._ ### \(M=\langle a^{4}\rangle\langle c\rangle\) and \(X/M_{X}\cong S_{4}\) **Lemma 5.5**: _Suppose that \(X=X(Q)\), \(M=\langle a^{4}\rangle\langle c\rangle\), \(X/M_{X}\cong S_{4}\) and \(\langle c\rangle_{X}=1\). Then \(X=\langle a,b,c|R,(a^{4})^{c}=a^{4r},c_{1}^{a^{2}}=a^{4s}c_{1},c_{1}^{b}=a^{4u} c_{1},(a^{2})^{c}=bc^{\frac{im}{2}},b^{c}=a^{2x}b,c^{a}=a^{2(1+2z)}c^{1+\frac{jm}{3}}\rangle,\) where either_ * \(i=0\)_,_ \(r=j=1,x=3,s=u=z=0\)_; or_ * \(i=1\)_,_ \(n\equiv 4(\mathrm{mod}\ 8)\)_,_ \(6\mid m\)_,_ \(r^{\frac{m}{2}}\equiv-1(\mathrm{mod}\ \frac{n}{2})\)_,_ \(\mathrm{o}(r)=m\)_,_ \(s\equiv\frac{r^{-3}-1}{2}(\mathrm{mod}\ \frac{n}{4})\)_,_ \(u\equiv\frac{r^{3}-1}{2r^{2}}(\mathrm{mod}\ \frac{n}{4})\)_,_ \(x\equiv-r+r^{2}+\frac{n}{4}(\mathrm{mod}\ \frac{n}{2})\)_,_ \(1+2z\equiv\frac{1-r}{2r}(\mathrm{mod}\ \frac{n}{2})\)_,_ \(j\in\{1,2\}\)_._ **Proof** Under the hypothesis, \(M_{X}=\langle a^{4}\rangle\rtimes\langle c^{3}\rangle\). Set \(2n=\mathrm{o}(a)\) and \(m=\mathrm{o}(c)\). Then \(n\) is even and \(3\mid m\). If \(\frac{n}{2}\) is odd, then \(\langle\overline{a},\overline{b}\rangle\cong Q_{8}\), a contradiction. So \(\frac{n}{2}\) is even. Since \(X/M_{X}=\langle\overline{a},\overline{b}\rangle\langle\overline{c}\rangle \cong S_{4}\), we can choose \(\overline{b}\) such that the form of \(X/M_{X}\) is the following: \((\overline{a}^{2})^{\overline{c}}=\overline{b}\), \(\overline{b}^{\overline{c}}=\overline{a}^{2}\overline{b}\) and \((\overline{c})^{\overline{u}}=\overline{a}^{2}\overline{c}^{2}.\) Take \(a_{1}=a^{2}\) and \(c_{1}=c^{3}\). Then we set \(a_{1}^{c}=bc_{1}^{w},b^{c}=a_{1}^{x}bc_{1}^{y},c^{a}=a_{1}^{1+2z}c^{2+3d}\), where \(x\) is odd. Suppose \(w\equiv 0(\mathrm{mod}\ \frac{m}{3})\). Note that \(\mathrm{o}(a_{1})=\mathrm{o}(a_{1}^{c})=\mathrm{o}(b)=4\), which implies \(G\cong Q_{16}\). 
Thus one can check that \(X\) can only have the following form: \(X=\langle a,b,c|a^{8}=c^{3}=1,b^{2}=a^{4},a^{b}=a^{-1},b^{c}=a_{1}^{3}b,c^{a}=a_{1}c^{2}\rangle.\) So in what follows, we assume \(w\not\equiv 0(\mathrm{mod}\ \frac{m}{3})\). Then consider \(X_{1}=GM_{X}=\langle a,b\rangle\langle c_{1}\rangle\). Noting \(\langle a\rangle\langle c_{1}\rangle\leq X_{1}\) and \(\langle c_{1}\rangle_{X_{1}}=1\), by Lemma 4.4, we know \(\langle a_{1}\rangle\lhd X_{1}\), which implies that \(c_{1}\) normalises \(\langle a_{1}\rangle\). Take \(X_{2}=\langle a_{1},b\rangle\langle c\rangle\). Then we get \(X_{2}=(\langle a_{1},b\rangle\langle c_{1}\rangle).\langle c\rangle\). Note that \(c_{1}\) normalises \(\langle a_{1}\rangle\) in \(X_{2}\). Thus by Lemma 5.4, we get \[X_{2}=\langle a_{1},b,c|R,(a_{1}^{2})^{c}=a_{1}^{2r},c_{1}^{a_{1}}=a_{1}^{2s}c_{1},c_{1}^{b}=a_{1}^{2u}c_{1},a_{1}^{c}=bc^{\frac{m}{2}},b^{c}=a^{x}b\rangle,\] where \[\begin{array}{l}n\equiv 4(\mathrm{mod}\ 8),m\equiv 0(\mathrm{mod}\ 6)\\ r^{\frac{m}{2}}\equiv-1(\mathrm{mod}\ \frac{n}{2}),s\equiv\frac{r^{-3}-1}{2}(\mathrm{mod}\ \frac{n}{4}),u\equiv\frac{r^{3}-1}{2r^{2}}(\mathrm{mod}\ \frac{n}{4}),x\equiv-r+r^{2}+\frac{n}{4}(\mathrm{mod}\ \frac{n}{2}).\end{array} \tag{39}\] Note \(X=X_{2}.\langle a\rangle\). So \(X\) may be defined by \(R\) and \[(a_{1}^{2})^{c}=a_{1}^{2r},c_{1}^{a_{1}}=a_{1}^{2s}c_{1},c_{1}^{b}=a_{1}^{2u}c_{1},a_{1}^{c}=bc^{\frac{m}{2}},b^{c}=a_{1}^{x}b,c^{a}=a_{1}^{1+2z}c^{2+3d}. \tag{40}\] What remains is to determine the parameters \(r,z\) and \(d\) by analysing the last extension \(X_{2}.\langle a\rangle\), where \(c^{a}=a_{1}^{1+2z}c^{2+3d}\). Set \(\pi\in\mathrm{Aut}\,(X_{2}):a_{1}\to a_{1}\), \(b\to a_{1}^{-1}b\) and \(c\to a_{1}^{1+2z}c^{2+3d}\), where \(d\) is odd. We need to check the following eight equalities: (i) \(\pi\) preserves \((a_{1}^{2})^{c}=a_{1}^{2r}\): \[a_{1}^{2r}=((a_{1}^{2})^{c})^{a}=(a_{1}^{2})^{c^{2+3d}}=a_{1}^{2r^{2+3d}},\] that is \[r^{1+3d}-1\equiv 0({\rm mod}\ \frac{n}{2}). \tag{41}\] Since \(r^{\frac{m}{2}}\equiv-1({\rm mod}\ \frac{n}{2})\), we get \(\sum_{l=1}^{1+3d}r^{3l}\equiv 0({\rm mod}\ \frac{n}{2})\). (ii) \(\pi\) preserves \(c_{1}^{a_{1}}=a_{1}^{2s}c_{1}\), that is \(a_{1}^{c_{1}}=a_{1}^{1-2sr^{3}}\): \[a_{1}^{1-2sr^{3}}=(a_{1}^{c_{1}})^{a}=a_{1}^{c_{1}^{2+3d}}=a_{1}^{(1-2sr^{3})^{2+3d}},\] that is \[(1-2sr^{3})^{1+3d}-1\equiv 0({\rm mod}\ n). \tag{42}\] (iii) \(\pi\) preserves \(c_{1}^{b}=a_{1}^{2u}c_{1}\), that is \(b^{c_{1}}=a_{1}^{2ur^{3}}b\): \[(b^{c_{1}})^{a}=a_{1}^{2ur^{3}-3+6sr^{3}+r^{4}-2r^{2}-3r-4z(r+r^{2}+r^{3})+\frac{n}{2}}b\quad{\rm and}\quad(a_{1}^{2ur^{3}}b)^{a}=a_{1}^{2ur^{3}-1}b,\] that is \[6sr^{3}+r^{4}-2r^{2}-3r-4z(r+r^{2}+r^{3})+\frac{n}{2}\equiv 2({\rm mod}\ n).
\tag{43}\] (iv) \(\pi\) preserves \(a_{1}^{c}=bc^{\frac{m}{2}}\): \[(a_{1}^{c})^{a}=a_{1}^{c^{2+3d}}=(a_{1}^{x}bc^{\frac{m}{2}})^{c_{1}^{d}}=a_{1}^{x(1-2sr^{3})^{d}+2u\sum_{l=1}^{d}r^{3l}}bc^{\frac{m}{2}},\] \[(bc^{\frac{m}{2}})^{a}=a_{1}^{-1}b(c^{\frac{m}{2}})^{a}=a_{1}^{(2z(r+r^{2}+r^{3})+\frac{n}{2}-2sr^{3}+r^{2}-x(1-2sr^{3})^{d+\frac{m}{6}}-2ur^{2}\sum_{l=0}^{\frac{m}{6}-1}r^{3l})\sum_{l=0}^{\frac{m}{6}-1}r^{3l}-1}bc^{\frac{m}{2}},\] that is \[\begin{array}{l}(r^{2}-2sr^{3}-x(1-2sr^{3})^{d+\frac{m}{6}}-2ur^{2}\sum_{l=0}^{\frac{m}{6}-1}r^{3l}+2z(r+r^{2}+r^{3})+\frac{n}{2})\sum_{l=0}^{\frac{m}{6}-1}r^{3l}\equiv\\ x(1-2sr^{3})^{d}+2u\sum_{l=1}^{d}r^{3l}+1({\rm mod}\ n).\end{array} \tag{44}\] (v) \(\pi\) preserves \(b^{c}=a_{1}^{x}b\): \[\begin{array}{l}(b^{c})^{a}=(a_{1}^{-1}b)^{a_{1}^{1+2z}c^{2+3d}}=a_{1}^{x-1-2(1+2z)r+\frac{n}{2}+2u\sum_{l=1}^{d}r^{3l}}b,\\ (a_{1}^{x}b)^{a}=a_{1}^{x-1}b,\end{array}\] that is \[-2(1+2z)r+\frac{n}{2}+2u\sum_{l=1}^{d}r^{3l}\equiv 0({\rm mod}\ n). \tag{45}\] (vi) \({\rm o}(\pi(c))=m\): \[(c^{a})^{m}=c_{1}^{(2+3d)\frac{m}{3}}a_{1}^{(-2sr^{3}+r^{2}-x(1-2sr^{3})^{d+\frac{m}{6}}-2ur^{2}\sum_{l=0}^{\frac{m}{6}-1}r^{3l}+2z(r+r^{2}+r^{3})+\frac{n}{2})\sum_{l=0}^{\frac{m}{3}}r^{3l}}.\] Since \(r^{\frac{m}{6}}\equiv-1(\mbox{mod }\frac{n}{2})\), we get \(\sum_{l=0}^{\frac{m}{3}}r^{3l}\equiv 0(\mbox{mod }\frac{n}{2})\). Note that \(r^{2}-x(1-2sr^{3})^{d+\frac{m}{6}}\) is even. Then \[(-2sr^{3}+r^{2}-x(1-2sr^{3})^{d+\frac{m}{6}}-2ur^{2}\sum_{l=0}^{\frac{m}{6}-1}r^{3l}+2z(r+r^{2}+r^{3})+\frac{n}{2})\sum_{l=0}^{\frac{m}{3}}r^{3l}\equiv 0(\mbox{mod }n),\] as desired. (vii) \(\pi^{2}=\mbox{Inn}(a_{1})\): Recall \(c_{1}^{a_{1}}=a_{1}^{2s}c_{1}\) and \(a_{1}^{c}=bc^{\frac{m}{2}}\), then we know \(\mbox{Inn}(a_{1})(c)=c^{1+\frac{m}{2}}a_{1}^{-1+\frac{n}{2}}b\) and \(\mbox{Inn}(a_{1})(c_{1})=a_{1}^{2s}c_{1}\). \[c^{1+\frac{m}{2}}a_{1}^{-1+\frac{n}{2}}b=\mbox{Inn}(a_{1})(c)=\pi^{2}(c)=a_{1}^{1+2z}(c^{2+3d})^{a}=a_{1}^{1+2z}(a_{1}^{1+2z}c^{2+3d})^{2}(c_{1}^{a})^{d}\] \[=c^{(3d+2)^{2}+\frac{m}{2}}a_{1}^{-3+r^{-1}-2(r+2zr+z)+\frac{n(2+\sum_{l=0}^{d-1}r^{3l}+(1-2sr^{3})^{d})}{4}+(2sr^{3}-r^{2}-2z(r+r^{2}+r^{3})-1-r)\sum_{l=0}^{d-1}r^{3l}}b,\] that is, \[\begin{array}{l}(1+d)(1+3d)\equiv 0(\mbox{mod }\frac{m}{3}),\\ r^{-1}-2(r+2zr+z)+(2sr^{3}-(2zr+1)(1+r+r^{2}))\sum_{l=0}^{d-1}r^{3l}\equiv\\ 2+\frac{n(\sum_{l=0}^{d-1}r^{3l}+(1-2sr^{3})^{d})}{4}(\mbox{mod }n).\end{array} \tag{46}\] And \[a_{1}^{2s}c_{1}=\mbox{Inn}(a_{1})(c_{1})=\pi^{2}(c_{1})=c_{1}a_{1}^{(2z+1)(r+r^{2}+r^{3})(\sum_{l=0}^{1+3d}r^{3l}+1)+\frac{n}{2}},\] that is, \[((2z+1)(r+r^{2}+r^{3})+\frac{n}{4})(\sum_{l=1}^{1+3d}r^{3l}+2)\equiv 2sr^{3}(\mbox{mod }n). \tag{47}\] (viii) Ensure \(\langle c\rangle_{X}=1\): Since \(\langle c\rangle_{X}\leq M\), we get \(\langle c\rangle_{X}\leq M_{X}=\langle a_{1}^{2}\rangle\langle c_{1}\rangle\), which implies \(\langle c\rangle_{X}=\cap_{x\in X}C^{x}=\cap_{x\in G}C^{x}=\cap_{x\in G}\langle c_{1}\rangle^{x}=\langle c_{1}\rangle_{X_{1}}\). Then it suffices to ensure \(\langle c^{3}\rangle_{X_{1}}=1\). Recall \[X_{1}=\langle a,b,c_{1}|R,(a_{1})^{c_{1}}=a_{1}^{r^{3}},c_{1}^{b}=a_{1}^{2u}c_{1},(c_{1})^{a}=c_{1}^{2+3d}a_{1}^{i}\rangle,\] where \(r^{\frac{m}{2}}\equiv-1(\mbox{mod }\frac{n}{2})\), \(2u\equiv\frac{r^{3}-1}{r^{2}}(\mbox{mod }\frac{n}{2})\) and \(i\equiv\frac{1-r^{3}}{2}+\frac{n}{4}(\mbox{mod }\frac{n}{2})\). Since \((u,\frac{n}{4})=1\), we get \((i,\frac{n}{4})=1\).
Noting \(\langle c_{1}\rangle_{X_{1}}=1\), by Lemma 5.1, we get \[\sum_{l=1}^{j}r^{3l}\equiv 0(\mbox{mod }\frac{n}{4})\Leftrightarrow j\equiv 0( \mbox{mod }\frac{m}{3}).\] Note that \(\sum_{i=1}^{1+3d}r^{i}\equiv 0(\mbox{mod }\frac{n}{2})\). Thus \(1+3d\equiv 0(\mbox{mod }\frac{m}{3})\). Since \(1+3d\neq 0\), we get \(1+3d\) is either \(\frac{m}{3}\) or \(\frac{2m}{3}\). By Eq(45), we get \(1+2z\equiv\frac{1-r}{2r}(\mbox{mod }\frac{n}{2})\). \(\Box\) ### \(M=\langle a^{3}\rangle\langle c\rangle\) and \(X/M_{X}\cong S_{4}\) **Lemma 5.6**: _Suppose that \(X=X(Q)\), \(M=\langle a^{3}\rangle\langle c\rangle\), \(X/M_{X}\cong S_{4}\) and \(\langle c\rangle_{X}=1\). Then_ \[X=\langle a,b,c|R,a^{c^{4}}=a^{r},b^{c^{4}}=a^{1-r}b,(a^{3})^{c^{ \frac{m}{4}}}=a^{-3},a^{c^{\frac{m}{4}}}=bc^{\frac{3m}{4}}\rangle, \tag{48}\] _where \(m\equiv 4({\rm mod}\ 8)\) and \(r\) is of order \(\frac{m}{4}\) in \(\mathbb{Z}_{2n}^{*}\)._ In this case, \(M_{X}=\langle a^{3}\rangle\langle c^{4}\rangle\). Set \(a^{3}=a_{1}\) and \(c^{4}=c_{1}\) so that \(M_{X}=\langle a_{1}\rangle\langle c_{1}\rangle\). Set \({\rm o}(a)=2n\) and \({\rm o}(c)=m\), where \(n\equiv 0({\rm mod}\ 3)\) and \(m\equiv 0({\rm mod}\ 4)\). Then in Lemma 5.7, we shall show \(\langle a_{1}\rangle\lhd X\) and in Lemma5.8, we shall get the classification of \(X\). **Lemma 5.7**: \(\langle a_{1}\rangle\lhd X\)_._ **Proof** Let \(X_{1}=M_{X}G\). Since \(\langle a\rangle\langle c_{1}\rangle\leq X_{1}\) and \(\langle c_{1}\rangle_{X_{1}}=1\), the subgroup \(X_{1}\) has been given in Lemma 5.1: \[X_{1}=\langle a,b,c_{1}|R,\,(a^{2})^{c_{1}}=a^{2r},\,(c_{1})^{a}= a_{1}^{2s}c_{1}^{t},\,(c_{1})^{b}=a_{1}^{u}c_{1}^{v}\rangle, \tag{49}\] where \[\begin{array}{l}r^{t-1}\equiv r^{v-1}\equiv 1({\rm mod}\ n),\,t^{2}\equiv 1({ \rm mod}\ \frac{m}{4}),\\ 6s\sum_{l=1}^{t}r^{l}+6sr\equiv 6sr+6s\sum_{l=1}^{v}r^{l}-3u\sum_{l=1}^{t}r^{l}+3 ur\equiv 2(1-r)({\rm mod}\ 2n),\\ 3u(\sum_{l=0}^{v-1}r^{l}-1)\equiv 0({\rm mod}\ 2n),\,{\rm and}\,v^{2}\equiv 1 ({\rm mod}\ \frac{m}{4}),\,{\rm if}\ 2\mid n,\\ 3u\sum_{l=1}^{v}r^{l}-ur\equiv 6sr+r-1({\rm mod}\ 2n),\,{\rm and}\,v^{2} \equiv t({\rm mod}\ \frac{m}{4}),\,{\rm if}\ 2\nmid n,\\ 2\mid u,\,{\rm if}\ t\neq 1,\\ 2s\sum_{l=1}^{w}r^{l}\equiv u\sum_{l=1}^{w}(1-s(\sum_{l=1}^{t}r^{l}+r))^{l} \equiv 0({\rm mod}\ \frac{2n}{3})\Leftrightarrow w\equiv 0({\rm mod}\ \frac{m}{4}).\end{array}\] Now \(X=\langle X_{1},c\rangle\). Since \(X/M_{X}=\langle\overline{a},\overline{b}\rangle\rtimes\langle\overline{c}\rangle\cong S _{4}\), the only possibility under our conditions is: \[\overline{a}^{3}=\overline{c}^{4}=\overline{b}^{2}=1,\overline{a }^{\overline{b}}=\overline{a}^{-1},\,\overline{a}^{\overline{x}}=\overline{a }^{i}\overline{b}\overline{c}^{3}, \tag{50}\] where \(i\in\mathbb{Z}_{3}\). Observing Eq(49) and Eq(50), we may relabel \(a^{i}b\) by \(b\). Then in the perimage \(X\), Eq(50) corresponds to \[a^{3}=a_{1},b^{2}=a^{n},c^{4}=c_{1},a^{c}=bc^{3+4w}. \tag{51}\] Set \[(a_{1})^{c}=a_{1}^{z}c_{1}^{d}, \tag{52}\] necessarily, \(z\) is odd, as \({\rm o}(a_{1})\) is even. Then \(X\) is uniquely determined by Eq(49), Eq(51) and Eq(52). To show \(\langle a_{1}\rangle\lhd X\), for the contrary, we assume \(d\neq 0\). Then we need to deal with two cases according to the parameter \(t\) of \(X_{1}\), separately. _Case 1: \(t=1\)_ In this case, \(v=1\) and \(1-6sr-r\equiv 0(\bmod n)\) by \(X_{1}\). Set \(r_{1}=1-6sr\). Then \(r_{1}\equiv 1(\bmod 6)\), \(a^{c_{1}}=a^{r_{1}}\) and \(b^{c_{1}}=a^{ur_{1}}_{1}b\). 
By Eq(49), Eq(51) and Eq(52), one can check \(b^{c}=a^{2+3x}bc^{4y}\) for some \(x\) and \(y\). Since \(c\) preserves \(a^{c_{1}}_{1}=a^{r_{1}}_{1}\), there exist some \(x\) such that \[((a_{1})^{c_{1}})^{c}=a^{zr_{1}}_{1}c^{d}_{1}=c^{d}_{1}a^{zr_{1}+d}_{1}\quad \mbox{and}\quad(a^{r_{1}}_{1})^{c}=(a^{z}_{1}c^{d}_{1})^{r_{1}}=c^{dr_{1}}_{1}a ^{3z\sum_{l=1}^{r_{1}}r_{1}^{dl}}_{1},\] which gives \[d\equiv dr_{1}(\bmod\frac{m}{4}). \tag{53}\] Since \(c\) preserves \(b^{c_{1}}=a^{ur_{1}}_{1}b\), we get \[(b^{c_{1}})^{c}=a^{2+3xr_{1}+3ur_{1}}bc^{4y}\quad\mbox{and}\quad(a^{ur_{1}}_{1 }b)^{c}=c^{dur_{1}}_{1}a^{z\sum_{l=1}^{ur_{1}}r_{1}^{dl}}_{1}a^{2+3x}bc^{4y},\] which gives \[du\equiv 0(\bmod\frac{m}{4}). \tag{54}\] Since \(a^{c_{1}}_{1}=a^{c^{4}}_{1}=a^{x_{1}}_{1}c^{d(z^{3}+z^{2}+z+1)}_{1}\) for some \(x_{1}\), we get \[d(z^{3}+z^{2}+z+1)\equiv 0(\bmod\frac{m}{4}). \tag{55}\] By Eq(51), we get \(ac=cbc^{3+4w}\). Then \[a^{ac}_{1}=a^{z}_{1}c^{d}_{1}\quad\mbox{and}\quad a^{cbc^{3+4w}}_{1}=a^{x_{1}} _{1}c^{d-dz(z^{2}+z+1)}_{1},\] which gives \[dz(z^{2}+z+1)\equiv 0(\bmod\frac{m}{4}). \tag{56}\] With Eq(55), we know \(d\equiv 0(\bmod\frac{m}{4})\), contradicting with \(d=0\). _Case 2: \(t\neq 1\)_ In this case, we have \(t\neq 1\). Suppose that \(\langle a^{2}_{1}\rangle\lhd X\). Then in what follows we shall show \(\langle a^{2}_{1}\rangle\lhd X\). If so, then by considering \(\overline{X}=X/\langle a^{2}_{1}\rangle\) and \(\langle\overline{c_{1}}\rangle\lhd\overline{X}\), one may get \(t=1\), a contradiction, as \(\overline{C}\leq C_{\overline{X}}(\langle\overline{c_{1}}\rangle)=\overline{X}\). By Eq(49), \(u\) is even and \(r\equiv 1(\bmod 3)\). By using Eq(49), Eq(51) and Eq(52), one may derive \(b^{c}=a^{2+6x}bc^{y}_{1}\) for some \(x\) and \(y\), omitting the details. Since \(c\) preserves \((a_{1}^{2})^{c_{1}}=a_{1}^{2r}\), there exist some \(x\) such that \[((a_{1}^{2})^{c_{1}})^{c}=(a_{1}^{z}c_{1}^{d}a_{1}^{z}c_{1}^{d})^{c_{1}}=a_{1}^{ x}c_{1}^{d(t+1)}\quad\mbox{and}\quad(a_{1}^{2r})^{c}=(a_{1}^{z}c_{1}^{d})^{2r}=a_{1 }^{x}c_{1}^{d(t+1)r},\] which gives \[d(t+1)(r-1)\equiv 0(\mbox{mod }\frac{m}{4}). \tag{57}\] Since \(c\) preserves \((c_{1})^{b}=a_{1}^{u}c_{1}^{v}\), we get \[(c_{1}^{b})^{c}=c_{1}^{a^{2+6x}bc_{1}^{v}}=a_{1}^{x}c_{1}^{v},\,(a_{1}^{u}c_{1} ^{v})^{c}=(a_{1}^{z}c_{1}^{d})^{u}c_{1}^{v}=a_{1}^{x}c_{1}^{\frac{du(t+1)}{2}+v},\] which gives \[\frac{du(t+1)}{2}\equiv 0(\mbox{mod }\frac{m}{4}). \tag{58}\] Since \(c\) preserves \(c_{1}^{a}=a_{1}^{2s}c_{1}^{t}\), we get \(c_{1}^{bc^{2+4w}}=a_{1}^{2s}c_{1}^{t}\). Then \[c_{1}^{bc^{2+4w}}=(a_{1}^{u}c_{1}^{v})^{c^{2+4w}}=((a_{1}^{z}c_{1}^{d})^{z}c_{ 1}^{d})^{ur^{w}}c_{1}^{v}=a_{1}^{x}c_{1}^{v},\] which gives \[v\equiv t(\mbox{mod }\frac{m}{4}). \tag{59}\] By Eq(51) again, we get \(ac^{2}=cbc^{4(w+1)}\). Then \[(a_{1}^{2})^{ac^{2}}=a_{1}^{x}c_{1}^{d(t+1)(z+1)}\,\mbox{and}\,(a_{1}^{2})^{ cbc_{1}^{w+1}}=a_{1}^{x}c_{1}^{d(t+1)},\] which gives \[dz(t+1)\equiv 0(\mbox{mod }\frac{m}{4}). \tag{60}\] Since \(\mbox{o}(a_{1}^{c})=\frac{2n}{3}\), we get \(\frac{dn(t+1)}{3}\equiv 0(\mbox{mod }\frac{m}{4})\). With \((\frac{n}{3},z)=1\) and Eq(60), we get \(d(t+1)\equiv 0(\mbox{mod }\frac{m}{4})\). Then \((a_{1}^{2})^{c}=(a_{1}^{z}c_{1}^{d})^{2}=a_{1}^{x}c_{1}^{d(t+1)}=a_{1}^{x}\) for some \(x\), which implies \(\langle a_{1}^{2}\rangle\lhd X\), as desired. \(\Box\) **Lemma 5.8**: _The group \(X\) is given by Eq(48)._ **Proof** By lemma, \(\langle a_{1}\rangle\lhd X\), that is \((a_{1})^{c}=a_{1}^{z}\) by Eq(52). 
Since \(\langle a^{2}\rangle\lhd X_{1}\), we get \(\langle a\rangle\lhd X_{1}\) and so \(G\lhd X_{1}\), that is \(t=v=1\) in Eq(49). Then by Eq(49), (51) and (52), we can set \[X=\langle a,b,c\mid R,a^{c_{1}}=a^{r_{1}},b^{c_{1}}=a_{1}^{ur_{1}}b,(a_{1})^{ c}=a_{1}^{z},a^{c}=bc^{3+4w}\rangle,\] where \[\begin{array}{l}r_{1}=1-6sr,\,r_{1}^{\frac{m}{4}}-1\equiv 2(r_{1}-r)\equiv 0({\rm mod }\ 2n),\\ 2s\sum_{l=1}^{j}r^{l}\equiv 0\equiv u\sum_{l=1}^{j}r_{1}^{l}({\rm mod}\ \frac{2n}{3}) \Leftrightarrow j\equiv 0({\rm mod}\ \frac{m}{4}).\end{array}\] Note \(2s\sum_{l=1}^{j}r^{l}\equiv 0({\rm mod}\ \frac{2n}{3})\Leftrightarrow r_{1}^{j}-1 \equiv 0({\rm mod}\ 2n)\). In what follows, we shall divide the proof into two steps: _Step 1: Show \(m\equiv 4({\rm mod}\ 8)\)._ Set \(\langle c\rangle=\langle c_{2}\rangle\times\langle c_{3}\rangle\), where \(\langle c_{2}\rangle\) is a \(2-\)group and \(\langle c_{3}\rangle\) is the \(2^{\prime}-\)Hall subgroup of \(\langle c\rangle\). Then \(\langle c_{1}\rangle=\langle c_{2}^{4}\rangle\times\langle c_{3}\rangle\). To show \(m\equiv 4({\rm mod}\ 8)\), we only show \(c_{2}^{4}=1\). Consider \(\overline{X}=X/\langle a_{1}\rangle=\langle\overline{a},\overline{b}\rangle \langle\overline{c}\rangle\). Then one can check \(C_{\overline{X}}(\langle\overline{c}_{1}\rangle)=\overline{X}\), which implies \(\langle\overline{c_{1}}\rangle\leq Z(\overline{X})\) and \(\overline{X}/\langle\overline{c_{1}}\rangle\cong S_{4}\). Note that \(\langle\overline{c}_{3}\rangle\leq\langle\overline{c}_{3}\rangle(\langle \overline{c_{2}^{4}}\rangle\langle\overline{a}\rangle)\leq\overline{X}\) where \((|\overline{X}:\langle\overline{c}_{3}\rangle(\langle\overline{c_{2}^{4}} \rangle\langle\overline{a}\rangle)|,|\langle\overline{c}_{3}\rangle|)=1\). Thus by Proportion 2.5, we get that \(\langle\overline{c}_{3}\rangle\) has a complement in \(\overline{X}\), which implies \(X=(\langle a,b\rangle\langle c_{2}\rangle)\rtimes\langle c_{3}\rangle\). Consider \(X_{2}=\langle a,b\rangle\langle c_{2}\rangle\), where \(\langle c_{2}\rangle_{X_{2}}=1\) and \(\langle a_{1}\rangle\lhd X_{2}\), and \(\overline{X_{2}}=X_{2}/\langle a_{1}\rangle=\langle\overline{a},\overline{b} \rangle\langle\overline{c_{2}}\rangle\). Note that \(\langle\overline{c_{2}}^{4}\rangle\lhd\overline{X_{2}}\). Then one can check \(C_{\overline{X_{2}}}(\langle\overline{c_{2}}^{4}\rangle)=\overline{X}_{2}\), which implies that \(\overline{X}_{2}\) is the central expansion of \(S_{4}\). By Lemma 2.6, we get the Schur multiplier of \(S_{4}\) is \(\mathbb{Z}_{2}\), and then \({\rm o}(c_{2})\) is either \(4\) or \(8\). Suppose that \({\rm o}(c_{2})=8\). Then \(c_{2}^{4}\) normalises \(G\), and we set \(a^{c_{2}^{4}}=a^{i}\), where \(i\equiv 1({\rm mod}\ 3).\) Note that \(\langle a\rangle\leq C_{X_{2}}(\langle a_{1}\rangle)\lhd X_{2}\). Then \(\langle a,bc_{2},c_{2}^{2}\rangle\leq C_{X_{2}}(\langle a_{1}\rangle)\), which implies \(\langle a_{1}\rangle\times\langle c_{2}^{4}\rangle\lhd X_{2}\). Since \(i\equiv 1({\rm mod}\ \frac{2n}{3})\) and \(i^{2}\equiv 1({\rm mod}\ 2n)\), we get \(i=1\), which implies \([a,c_{2}^{4}]=1\). Then \(\langle a,c_{2}^{2}\rangle\leq C_{X_{2}}(\langle a_{1}\rangle\times\langle c_ {2}^{4}\rangle)\lhd X_{2}\), which implies \(bc_{2}\in C_{X_{2}}(\langle a_{1}\rangle\times\langle c_{1}\rangle)\). So \((c_{2}^{4})b=c_{2}^{4}\). Then \(c_{2}^{4}\lhd X_{2}\), a contradiction. So \({\rm o}(c_{2})=4\), which implies \(\langle c_{1}\rangle=\langle c_{3}\rangle\). 
Then \(X=(\langle a,b\rangle\langle c_{2}\rangle)\rtimes\langle c_{1}\rangle\). _Step 2: Determine the parameters \(r_{1},u,w\) and \(z\)._ In \(X_{2}=\langle a,b\rangle\langle c_{2}\rangle\), we know \(a^{c_{2}}=bc_{2}^{3}\) by Eq(51). Consider \(\langle a\rangle\leq C_{X_{2}}(\langle a_{1}\rangle)\lhd X_{2}\), then \(C_{X_{2}}(\langle a_{1}\rangle)\) is either \(\langle a,bc_{2},c_{2}^{2}\rangle\) or \(X_{2}\). Suppose that \(\langle c\rangle_{X}(\langle a_{1}\rangle)=X\). Then we know \(a_{1}=b^{2}\) as \([a_{1},b]=1\), that is \(n=3\) and \(a_{1}=a_{1}^{-1}\). Then one can check \(m=4\) and \(z,r=1\), as desired. Suppose that \(\langle c\rangle_{X}(\langle a_{1}\rangle)=\langle a,bc_{2},c_{2}^{2}\rangle\). Then \(a_{1}=a_{1}^{bc_{2}}=(a_{1}^{-1})^{c_{2}}\), which implies \(z=-1\). In \(X_{1}=\langle a,b\rangle\rtimes\langle c_{1}\rangle\), we know \(X_{1}=\langle a,b,c_{1}\mid R,a^{c_{1}}=a^{r_{1}},b^{c_{1}}=a_{1}^{wr_{1}}b\rangle.\) Since \(c_{2}\) preserves \(a^{c_{1}}=a^{r_{1}}\), we get \[(bc_{2}^{3})^{c_{1}}=a_{1}^{ur_{1}}bc_{2}^{3}\quad{\rm and}\quad(a^{r_{1}})^{c _{2}}=(a_{1}^{\frac{r_{1}-1}{3}})^{c_{2}}a^{c_{2}}=a_{1}^{\frac{1-r_{1}}{3}}bc_ {2}^{3},\] which gives \[ur_{1}\equiv\frac{1-r_{1}}{3}({\rm mod}\ \frac{2n}{3}).\] Recall that \(r_{1}^{j}-1\equiv 0\equiv 3u\sum_{l=1}^{j}r_{1}^{l}({\rm mod}\ 2n)\Leftrightarrow j \equiv 0({\rm mod}\ \frac{m}{4})\). Then we get that \(r_{1}^{j}-1\equiv 0({\rm mod}\ 2n)\Leftrightarrow j\equiv 0({\rm mod}\ \frac{m}{4})\), which implies \({\rm o}(r_{1})=\frac{m}{4}\). For the purpose of formatting uniformity, replacing \(r_{1}\) by \(r\), then we get Eq(48), as desired. Proof of Theorem 1.4 To prove Theorem 1.4, let \(\langle c\rangle_{X}=1\) and set \(R:=\{a^{n}=b^{2}=c^{m}=1,\,a^{b}=a^{-1}\}\). Then we shall deal with the five cases in Theorem 1.1 in the following five subsections, separately. ### \(M=\langle a\rangle\langle c\rangle\) **Lemma 6.1**: _Suppose that \(X=X(D)\), \(M=\langle a\rangle\langle c\rangle\) and \(\langle c\rangle_{X}=1\). Then_ \[X=\langle a,b,c|R,(a^{2})^{c}=a^{2r},c^{a}=a^{2s}c^{t},c^{b}=a^{u}c^{v}\rangle, \tag{61}\] \(2(r^{t-1}-1)\equiv 2(r^{v-1}-1)\equiv u(\sum_{l=0}^{v-1}r^{l}-1)\equiv 0({\rm mod} \ n),\,t^{2}\equiv v^{2}\equiv 1({\rm mod}\ m)\) \(2s\sum_{l=1}^{t}r^{l}+2sr\equiv 2sr+2s\sum_{l=1}^{v}r^{l}-u\sum_{l=1}^{t}r^{l}+ ur\equiv 2(1-r)({\rm mod}\ n),\) if \(t\neq 1\), then \(u\equiv 0({\rm mod}\ 2)\), \(2s\sum_{l=1}^{w}r^{l}\equiv u\sum_{l=1}^{w}(1-s(\sum_{l=1}^{t}r^{l}+r))^{l} \equiv 0({\rm mod}\ n)\Leftrightarrow w\equiv 0({\rm mod}\ m).\) _Moreover, \(G_{X}=\langle a^{2}\rangle\) if \(tv,t,v\neq 1\); \(\langle a^{2},b\rangle\) if \(v=1\) and \(u\) is even but \(t\neq 1\); \(\langle a^{2},ab\rangle\) if \(tv=1\) but \(t,v\neq 1\); and \(\langle a,b\rangle\) if \(t=v=1\), respectively._ **Proof** Noting \(\langle a^{2}\rangle\lhd X\) and \(M=\langle a\rangle\langle c\rangle\leq X\), we have \(X\) may be obtained by three cylic extension of groups in order: \[\langle a^{2}\rangle\rtimes\langle c\rangle,\quad(\langle a^{2}\rangle\rtimes \langle c\rangle).\langle a\rangle\quad{\rm and}\quad((\langle a^{2}\rangle \rtimes\langle c\rangle).\langle a\rangle)\rtimes\langle b\rangle.\] So \(X\) has the presentation as in Eq(61). What we should to determine the parameters \(r,s,t,u\) and \(v\) by analysing three extensions. (1) \(\langle a^{2}\rangle\rtimes\langle c\rangle\), where \((a^{2})^{c}=a^{2r}\). Set \(\pi_{1}\in{\rm Aut}\,(\langle a^{2}\rangle)\) such that \((a^{2})^{\pi_{1}}=a^{2r}\). 
This extension is valid if and only if \({\rm o}((a^{2})^{\pi_{1}})={\rm o}(a^{2})\) and \(\pi_{1}^{m}={\rm Inn}(a^{2})\), that is \[2(r^{m}-1)\equiv 0({\rm mod}\ n). \tag{62}\] (2) \((\langle a^{2}\rangle\rtimes\langle c\rangle).\langle a\rangle\), where \(c^{a}=a^{2s}c^{t}\). Set \(\pi_{2}\in{\rm Aut}\,((\langle a^{2}\rangle\rtimes\langle c\rangle)\): \(a\to a\) and \(c\to a^{2s}c^{t}\). (i) \(\pi_{2}\) preserves \((a^{2})^{c}=a^{2r}\): \[2(r^{t-1}-1)\equiv 0({\rm mod}\ n). \tag{63}\] (ii) \({\rm o}(\pi_{2}(c))=m\): \[(a^{2s}c^{t})^{m}=c^{tm}a^{2s\sum_{l=1}^{m}r^{tl}}=c^{tm}a^{2s\sum_{l=1}^{m}r^ {l}}=1,\] that is \[2s\sum_{l=1}^{m}r^{l}\equiv 0({\rm mod}\ n). \tag{64}\] (iii) \(\pi_{2}^{2}={\rm Inn}(a^{2})\): \[ca^{2-2r}={\rm Inn}(a^{2})(c)=\pi_{2}^{2}(c)=(a^{2s}c^{t})^{a}=c^{t^{2}}a^{2sr+2s \sum_{l=1}^{t}r^{l}},\] that is \[t^{2}-1\equiv 0({\rm mod}\ m)\quad{\rm and}\quad 2(s\sum_{l=1}^{t}r^{l}+rs+r-1) \equiv 0({\rm mod}\ n). \tag{65}\] (3) \(((\langle a^{2}\rangle\rtimes\langle c\rangle).\langle a\rangle)\rtimes \langle b\rangle\), where \(c^{b}=a^{u}c^{v}\). Set \(\pi_{3}\in{\rm Aut}\,((\langle a^{2}\rangle\rtimes\langle c\rangle).\langle a \rangle):a\to a^{-1}\) and \(c\to a^{u}c^{v}\). We divide two cases, separately. _Case 1: \(u\) is even._ (i) \(\pi_{3}\) preserves \((a^{2})^{c}=a^{2r}\): \[2(r^{v-1}-1)\equiv 0({\rm mod}\ n). \tag{66}\] (ii) \({\rm o}(\pi_{3}(c))=m\): \[1=(a^{u}c^{v})^{m}=c^{vm}a^{u\sum_{l=1}^{m}r^{l}},\] that is \[u\sum_{l=1}^{m}r^{l}\equiv 0({\rm mod}\ n). \tag{67}\] (iii) \(\pi_{3}\) preserves \(c^{a}=a^{2s}c^{t}\): that is \[a^{u}c^{v}=(a^{-2s}(a^{u}c^{v})^{t})^{a}=a^{-2s}c^{t^{2}v}a^{(ru+2s\sum_{l=1}^{ v}r^{l})\sum_{l=0}^{t-1}r^{l}},\] which implies \[r(u+2s)\equiv(ru+2s\sum_{l=1}^{v}r^{l})\sum_{l=0}^{t-1}r^{l}({\rm mod}\ n). \tag{68}\] With Eq(65), Eq(68) is if and only if \[u\sum_{l=1}^{t}r^{l}-ur-2s\sum_{l=1}^{v}r^{l}-2sr+2(1-r)\equiv 0\pmod{n}. \tag{69}\] (iv) \(\pi_{3}^{2}=1\): Suppose that \(v=1\). Then \(c=c^{b^{2}}=a^{-u}a^{u}c=c\), as desired. Suppose that \(v\neq 1\). Then \[c=c^{b^{2}}=a^{-u}(a^{u}c^{v})^{v}=c^{v}(a^{u}c^{v})^{v-1}=c^{v^{2}}a^{u\sum_ {l=1}^{v-1}r^{l}},\] that is \[u\sum_{l=1}^{v-1}r^{l}\equiv 0(\mbox{mod }n)\,\mbox{and}\,v^{2}-1\equiv 0(\mbox{ mod }m). \tag{70}\] Then \(\pi_{3}^{2}=1\) is if and only if \[u\sum_{l=1}^{v}r^{l}-ur\equiv 0(\mbox{mod }n)\quad\mbox{and}\quad v^{2}-1\equiv 0 (\mbox{mod }m). \tag{71}\] _Case 2: \(u\) is odd._ If \(t=1\), then by Lemma 4.3, we get \(G\lhd X\), which implies \(v=1\). So assume \(t\neq 1\) and we shall get a contradiction. Let \(S=\langle a^{2},c\rangle\). Since \(u\) is odd again, we know that \(\langle a^{2}\rangle\leq S_{X}\lessneq S\). Since \(|X:S|=4\), we have \(\overline{X}=X/S_{X}=\langle\overline{c},\overline{a}\rangle\rtimes\langle \overline{b}\rangle\lessneq S_{4}\). The only possibility is \(\mbox{o}(\overline{c})=2\) and \(\overline{x}\cong D_{8}\) so that \(m\) is even and \(v\) is odd. Then \(t\) is odd, as \(t^{2}\equiv 1\pmod{m}\). Moreover, we have \(\langle a^{2},c^{2}\rangle=S_{X}\lhd X\). Consider \(\overline{X}=X/\langle a^{2}\rangle=\langle\overline{a},\overline{c}\rangle \rtimes\langle\overline{b}\rangle\), where \(\overline{a}^{\overline{b}}=\overline{a}\), \(\overline{c}^{\overline{a}}=\overline{c}^{t}\) and \(\overline{c}^{\overline{b}}=\overline{a}\overline{c}^{v}\). Let \(\pi_{3}\) be defined as above. 
Since the induced action of \(\pi_{3}\) preserves \(\overline{c}^{\overline{a}}=\overline{c}^{t}\), we have \((\overline{a}\overline{c}^{v})^{\overline{a}}=(\overline{a}\overline{c}^{v}) ^{t}\), that is \[\overline{a}\overline{c}^{tv}=\overline{a}\overline{c}^{v}((\overline{a} \overline{c}^{v})^{2})^{\frac{t-1}{2}}=\overline{a}\overline{c}^{v}( \overline{c}^{tv+v})^{\frac{t-1}{2}}=\overline{a}\overline{c}^{v+\frac{v(t+1 )(t-1)}{2}},\] which implies \[tv\equiv v+\frac{v(t+1)(t-1)}{2}\pmod{m}.\] Noting \(t^{2}\equiv 1\pmod{m}\), \(t\neq 1\) and \((v,m)=1\) is odd, we get \[t\equiv 1+\frac{m}{2}(\mbox{mod }m). \tag{72}\] Let \(X_{1}=GS_{X}=\langle a,b\rangle\langle c^{2}\rangle\). By Eq(72), we have \((c^{2})^{a}=(a^{2s}c^{t})^{2}=a^{2s(1+r^{-1})}c^{2}\), which implies \(c^{2}\) normalises \(\langle a\rangle\). Then we get \(G\lhd X_{1}\) and so \(b^{c^{2}}\leq G\). Since \(b^{c^{2}}=c^{-2}(bc^{2}b)b=c^{-2}(a^{u}c^{v})^{2}b=c^{v(t+1)-2}a^{x}b\), for some \(x\), we get \(v(t+1)-2\equiv 0(\mbox{mod }m)\). By combing Eq(72) we get \[v\equiv 1\pm\frac{m}{4}\,(\mbox{mod }\frac{m}{2}),\quad 4\mid m. \tag{73}\] Since \(\overline{c}=\overline{c}^{\overline{c}^{2}}=\overline{a}(\overline{a} \overline{c}^{v})^{v}=\overline{c}^{v}(\overline{a}\overline{c}^{v}\overline {a}\overline{c}^{v})^{\frac{v-1}{2}}=\overline{c}^{v+(tv+v)\frac{v-1}{2}}\), we get \[(v-1)(\frac{v(t+1)}{2}+1)\equiv 0(\mbox{mod }m). \tag{74}\] Then Eq(73) and Eq(74) may give \(\frac{m}{2}\equiv 0(\mbox{mod }m)\), a contradiction. (4) Insure \(\langle c\rangle_{X}=1\): When \(u\) is even, for any integer \(w\), we get \[(c^{w})^{a}=(a^{2s}c^{t})^{w}=c^{tw}a^{2s\sum_{l=1}^{w}r^{l}}\quad\mbox{and} \quad(c^{w})^{b}=(a^{u}c^{v})^{w}=c^{vw}a^{u\sum_{l=1}^{w}r^{l}}.\] Since \(\langle c\rangle_{X}=1\), we know that \(2s\sum_{l=1}^{w}r^{l}\equiv 0\equiv u\sum_{l=1}^{w}r^{l}(\mbox{mod }n)\) is if and only if \(w\equiv 0(\mbox{mod }m)\). When \(u\) is odd, we know \(t=v=1\). Then \(2(1-2sr)\equiv 2r(\mbox{mod }n)\) by Eq(65). For any integer \(w\), \[(c^{w})^{a}=(a^{2s}c)^{w}=c^{w}a^{2s\sum_{l=1}^{w}r^{l}}\quad\mbox{and}\quad(c ^{w})^{b}=(a^{u}c)^{w}=c^{w}a^{u\sum_{l=1}^{w}(1-2sr)^{l}}.\] Since \(\langle c\rangle_{X}=1\), we know that \(2s\sum_{l=1}^{w}r^{l}\equiv 0\equiv u\sum_{l=1}^{w}(1-2sr)^{l}(\mbox{mod }n)\) is if and only if \(w\equiv 0(\mbox{mod }m)\). Summarizing Eq(62)-Eq(71), we get the parameters \((m,n,r,s,t,u,v)\) as shown in the lemma. \(\Box\) ### \(M=\langle a^{2}\rangle\langle c\rangle\) and \(X/M_{X}\cong D_{8}\) **Lemma 6.2**: _Suppose that \(X=X(D)\), \(M=\langle a^{2}\rangle\langle c\rangle\), \(X/M_{X}\cong D_{8}\) and \(\langle c\rangle_{X}=1\). Then_ \[X=\langle a,b,c|R,(a^{2})^{c^{2}}=a^{2r},(c^{2})^{a}=a^{2s}c^{2t},(c^{2})^{b}= a^{2u}c^{2},a^{c}=bc^{2w}\rangle,\] _where either \(w=s=u=0\) and \(r=t=1\); or_ \[\begin{array}{l}w\neq 0,\,s=u^{2}\sum_{l=0}^{w-1}r^{l},\,t=1+2wu,\\ nw\equiv 2w(r-1)\equiv 2w(1+uw)\equiv 0(\mbox{mod }\frac{m}{2}),\\ r^{2w}-1\equiv(u\sum_{l=1}^{w}r^{l})^{2}-r\equiv(r^{w}+1)(1+s\sum_{l=0}^{w-1 }r^{l})\equiv 0(\mbox{mod }\frac{n}{2}),\\ \sum_{l=1}^{i}r^{l}\equiv 0(\mbox{mod }\frac{n}{2})\Leftrightarrow i\equiv 0(\mbox{mod }\frac{m}{2}).\end{array}\] **Proof** Under the hypothesis, \(M_{X}=\langle a^{2}\rangle\rtimes\langle c^{2}\rangle\). Set \(n=\mbox{o}(a)\) and \(m=\mbox{o}(c)\). Then both \(n\) and \(m\) are even. 
Since \(X/M_{X}=\langle\overline{a},\overline{b}\rangle\langle\overline{c}\rangle \cong D_{8}\), we can choose \(\overline{b}\) such that the form of \(X/M_{X}\) is the following: \(\overline{a}^{\overline{r}}=\overline{b}\) and \(\overline{b}^{\overline{r}}=\overline{a}.\) Set \(c_{1}:=c^{2}\) and \(X_{1}=GM_{X}=\langle a,b\rangle\langle c_{1}\rangle\). Noting \(\langle a\rangle\langle c_{1}\rangle\leq X_{1}\) and \(\langle c_{1}\rangle_{X_{1}}=1\), by Lemma 6.1, we get \[X_{1}=\langle a,b,c_{1}|R,(a^{2})^{c_{1}}=a^{2r},c_{1}^{a}=a^{2s}c_{1}^{t},c_{1 }^{b}=a^{2u}c_{1}^{v}\rangle,\] where \[\begin{array}{l}r^{t-1}-1\equiv r^{v-1}-1\equiv u\sum_{l=0}^{v-1}r^{l}-u \equiv 0(\mbox{mod }\frac{n}{2}),\,t^{2}\equiv v^{2}\equiv 1(\mbox{mod }\frac{m}{2}),\\ s\sum_{l=1}^{t}r^{l}+sr\equiv sr+s\sum_{l=1}^{v}r^{l}-u\sum_{l=1}^{t}r^{l}+ur \equiv 1-r(\mbox{mod }\frac{n}{2}),\\ s\sum_{l=1}^{i}r^{l}\equiv u\sum_{l=1}^{i}r^{l}\equiv 0(\mbox{mod }\frac{n}{2})\Leftrightarrow i\equiv 0(\mbox{mod }\frac{m}{2}).\end{array} \tag{75}\] Now \(X=X_{1}.\langle c\rangle\). Set \(a^{c}=bc_{1}^{w}\). Then \(X\) may be defined by \(R\) and \[(a^{2})^{c_{1}}=a^{2r},\,c_{1}^{a}=a^{2s}c^{2t},\,c_{1}^{b}=a^{2u}c^{2v},\,a^{c}= bc_{1}^{w}. \tag{76}\] If \(w\equiv 0({\rm mod}\ \frac{m}{2})\), then \({\rm o}(a)={\rm o}(a^{c})={\rm o}(b)=2\), which implies \(X\cong D_{8}\). Then \(w=s=u=0\) and \(r=t=1\), as desired. So in that follows, we assume \(w\not\equiv 0({\rm mod}\ \frac{m}{2})\). Firstly, we get \(b^{c}=a^{c^{2}}c_{1}^{-w}=a^{1-2sr}c_{1}^{1-t-w}.\) Set \(\pi\in{\rm Aut}\,(X_{1}):a\to bc_{1}^{2w}\), \(b\to a^{1-2sr}c_{1}^{1-t-w}\) and \(c_{1}\to c_{1}\). We need to carry out the following seven steps: (i) \({\rm o}(\pi(b))=2:\) \[(c^{2(w+t-1)}a^{2sr-1})^{2}=c^{2w(t+1)}a^{2sr^{w+1}+2s\sum_{l=1}^{w+t-1}r^{l}+2 sr-2}=1,\] that is \[w(t+1)\equiv 0({\rm mod}\ \frac{m}{2})\quad{\rm and}\quad sr^{w+1}+s\sum_{l=1} ^{w+t-1}r^{l}+sr-1\equiv 0({\rm mod}\ \frac{n}{2}), \tag{77}\] which implies \[r^{2w}\equiv r^{w(t+1)}\equiv 1({\rm mod}\ \frac{n}{2}). \tag{78}\] (ii) \({\rm o}(\pi(a))=n\): \[(bc^{2x})^{n}=(c^{2w(v+1)}a^{2u\sum_{l=w+1}^{2w}r^{l}})^{\frac{n}{2}}=c^{hw(v+ 1)}a^{un\sum_{l=w+1}^{2w}r^{l}}=1,\] that is \[\frac{n}{2}w(v+1)\equiv 0({\rm mod}\ \frac{m}{2}). \tag{79}\] (iii) \(\pi\) preserves \((a^{2})^{c_{1}}=a^{2r}\): \[((a^{2})^{c^{2}})^{c}=c^{2w(v+1)}a^{2ur\sum_{l=w+1}^{2w}r^{l}} \quad{\rm and}\quad(a^{2r})^{c}=c^{2wr(v+1)}a^{2ur\sum_{l=w+1}^{2w}r^{l}},\] that is \[w(t+1)(r-1)\equiv 0({\rm mod}\ \frac{m}{2}). \tag{80}\] (iv) \(\pi\) preserves \(c_{1}^{a}=a^{2s}c_{1}^{t}\): \[((c^{2})^{a})^{c}=c^{2v}a^{2ur^{w+1}}\quad{\rm and}\quad(a^{2s}c^{2t})^{c}=c^{ 2ws(v+1)+2t}a^{2sru\sum_{l=w+1}^{2w}r^{l}},\] that is \[v\equiv ws(v+1)+t(\mbox{mod }\frac{m}{2})\quad\mbox{and}\quad u\equiv su\sum_{l=1 }^{w}r^{l}(\mbox{mod }\frac{n}{2}). \tag{81}\] (v) \(\pi\) preserves \(c_{1}^{b}=a^{2u}c_{1}^{v}\): \[((c^{2})^{b})^{c} = (c^{2})^{a^{1-2sr}c^{2-2t-2w}}=c^{2t}a^{2sr^{2-w}},\] \[(a^{2u}c^{2v})^{c} = (c^{2w(v+1)}a^{2u\sum_{l=w+1}^{2w}r^{l}})^{u}c^{2v}=c^{2wu(v+1)+2v }a^{2u^{2}\sum_{l=w+2}^{2w+1}r^{l}},\] that is, \[t\equiv wu(v+1)+v(\mbox{mod }\frac{m}{2})\quad\mbox{and}\quad s\equiv u^{2} \sum_{l=0}^{w-1}r^{l}(\mbox{mod }\frac{n}{2}). \tag{82}\] (vi) \(\pi^{2}=\mbox{Inn}(c_{1})\): Recall \(\mbox{Inn}(c_{1})(a)=a^{1-2sr}c_{1}^{1-t}\), \(\mbox{Inn}(c_{1})(a^{2})=a^{2r}\) and \(\mbox{Inn}(c_{1})(b)=c_{1}^{v-1}a^{2ur}b\). 
\[a^{1-2sr}c^{2-2t}=\mbox{Inn}(c_{1})(a)=\pi^{2}(a)=b^{c}c^{2w}=a^{1-2sr}c^{2-2t -2w+2w},\] as desired; \[a^{2r}=\mbox{Inn}(c^{2})(a_{1})=\pi^{2}(a^{2})=(c^{2w(v+1)}a^{2u\sum_{l=w+1}^ {2w}r^{l}})^{c}=c^{2w(v+1)(1+uw)}a^{2(u\sum_{l=1}^{w}r^{l})^{2}},\] that is \[w(v+1)(1+uw)\equiv 0(\mbox{mod }\frac{m}{2})\quad\mbox{and}\quad r\equiv(u\sum_ {l=1}^{w}r^{l})^{2}(\mbox{mod }\frac{n}{2}), \tag{83}\] which implies \((u,\frac{n}{2})=1\) and \((\sum_{l=1}^{w}r^{l},\frac{n}{2})=1\) as \((r,\frac{n}{2})=1\); and \[c^{2(v-1)}a^{2ur}b=\mbox{Inn}(c_{1})(b)=\pi^{2}(b)=c^{2(w+t-1)+2w(v+1)(sr-1)+ 2vw}a^{2sur\sum_{l=1}^{w}r^{l}}b,\] that is, \[v\equiv t+wvsr+wsr(\mbox{mod }\frac{m}{2})\quad\mbox{and}\quad s\sum_{l=1}^{w}r^ {l}\equiv 1(\mbox{mod }\frac{n}{2}), \tag{84}\] which implies \((u,\frac{n}{2})=1\). (vii) Insure \(\langle c\rangle_{X}=1\): Since \(\langle c\rangle_{X}\leq M\), we get \(\langle c\rangle_{X}\leq M_{X}=\langle a^{2}\rangle\langle c^{2}\rangle\). Since \(\langle c\rangle_{X}=\cap_{x\in X}C^{x}=\cap_{x\in G}C^{x}=\cap_{x\in G}\langle c ^{2}\rangle^{x}=\langle c^{2}\rangle_{X_{1}}=1\) and \(u\sum_{l=1}^{i}r^{l}\equiv 0\equiv s\sum_{l=1}^{i}r^{l}(\mbox{mod }\frac{n}{2}) \Leftrightarrow i\equiv 0(\mbox{mod }\frac{m}{2})\), noting \((s,\frac{n}{2})=(u,\frac{n}{2})=1\), we have \(\sum_{l=1}^{i}r^{l}\equiv 0(\mbox{mod }\frac{n}{2})\Leftrightarrow i\equiv 0(\mbox{mod }\frac{m}{2})\). Now we are ready to determine the parameters by summarizing Eq(75)-Eq(84). Since \((r,\frac{n}{2})=(u,\frac{n}{2})=1\) (after Eq(83)), we get from Eq(75) that \(\sum_{l=1}^{v-1}r^{l}\equiv 0(\mbox{mod }\frac{n}{2}).\) By (vii), \(\sum_{l=1}^{i}r^{l}\equiv 0(\mbox{mod }\frac{n}{2})\Leftrightarrow i\equiv 0( \mbox{mod }\frac{m}{2}),\) which means \(v\equiv 1(\mbox{mod }\frac{m}{2})\). Inserting \(v=1\) in Eq(75)-Eq(84), we get that \(s=u^{2}\sum_{l=0}^{w-1}r^{l}\) and \(t=2wu+1\) in Eq(82); \((r^{w}+1)(1+s\sum_{l=0}^{w-1}r^{l})\equiv 0(\mbox{mod }\frac{n}{2})\) in Eq (77) \(nw\equiv 0(\mbox{mod }\frac{m}{2})\) in Eq(79); and \(2w(r-1)\equiv 2w(1+uw)\equiv 0(\mbox{mod }\frac{m}{2})\) in Eq(80) and (83). All these are summarized in the lemma. \(\Box\) ### \(M=\langle a^{2}\rangle\langle c\rangle\) and \(X/M_{X}\cong A_{4}\) **Lemma 6.3**: _Suppose that \(X=X(D)\), \(M=\langle a^{2}\rangle\langle c\rangle,\)\(X/M_{X}\cong A_{4}\) and \(\langle c\rangle_{X}=1\). Then_ \[X=\langle a,b,c|R,a^{c^{3}}=a^{r},(c^{3})^{b}=a^{2u}c^{3},a^{c}=bc^{\frac{im}{2 }},b^{c}=a^{x}b\rangle,\] _where \(n\equiv 2(\mbox{mod }4)\) and either \(i=u=0\) and \(r=x=1\); or \(i=1\), \(6\mid m\), \(l^{\frac{m}{2}}\equiv-1(\mbox{mod }\frac{n}{2})\) with \(\mbox{o}(l)=m\), \(r=l^{3}\), \(u=\frac{l^{3}-1}{2l^{2}}\) and \(x\equiv-l+l^{2}+\frac{n}{2}(\mbox{mod }n)\)._ **Proof** Under the hypothesis, \(M_{X}=\langle a^{2}\rangle\rtimes\langle c^{3}\rangle\). Set \(n=\mbox{o}(a)\) and \(m=\mbox{o}(c)\). Then \(n\) is even and \(3\mid m\). Since \(X/M_{X}=\langle\overline{a},\overline{b}\rangle\langle\overline{c}\rangle \cong A_{4}\), we can choose \(\overline{b}\) such that the form of \(X/M_{X}\) is the following: \(\overline{a}^{\overline{c}}=\overline{b}\) and \(\overline{b}^{\overline{c}}=\overline{a}\overline{b}.\) Set \(c_{1}:=c^{3}\) and \(X_{1}=GM_{X}=\langle a,b\rangle\langle c_{1}\rangle\). 
By Lemma 6.1, we get \[X_{1}=\langle a,b,c^{3}|R,(a^{2})^{c_{1}}=a^{2r},(c_{1})^{a}=a^{2s}c_{1}^{t},( c_{1})^{b}=a^{2u}c_{1}^{v}\rangle\] whose \[\begin{array}{l}r^{t-1}-1\equiv r^{v-1}-1\equiv u(\sum_{l=0}^{v-1}r^{l}-1) \equiv 0(\mbox{mod }\frac{n}{2}),\,t^{2}\equiv v^{2}\equiv 1(\mbox{mod }\frac{m}{3}),\\ s\sum_{l=1}^{t}r^{l}+sr\equiv sr+s\sum_{l=1}^{v}r^{l}-u\sum_{l=1}^{t}r^{l}+ur \equiv 1-r(\mbox{mod }\frac{n}{2}),\\ s\sum_{l=1}^{i}r^{l}\equiv u\sum_{l=1}^{i}r^{l}\equiv 0(\mbox{mod }n) \Leftrightarrow i\equiv 0(\mbox{mod }\frac{m}{3}).\end{array} \tag{85}\] Now \(X=X.\langle c\rangle\). Set \(a^{c}=bc_{1}^{w}\). Then \(X\) may be defined by \(R\) and \[(a^{2})^{c_{1}}=a^{2r},(c_{1})^{a}=a^{2s}c_{1}^{t},(c_{1})^{b}=a^{2u}c_{1}^{v}, \,a^{c}=bc_{1}^{w},b^{c}=a^{1+2x}bc_{1}^{y}. \tag{86}\] If \(w\equiv 0(\mbox{mod }\frac{m}{3})\), then \(\mbox{o}(a)=\mbox{o}(a^{c})=\mbox{o}(b)=2\), which implies \(X\cong A_{4}\). Then \(n=2\), \(i=u=s=0\) and \(r=x=t=v=1\), as desired. So in that follows, we assume \(w\not\equiv 0(\mbox{mod }\frac{m}{3})\). To determine the parameters \(r,s,t,u,v,w,x\) and \(y\), we only to consider the last extension \(X_{1}.\langle c\rangle\) in Eq(86), where \(a^{c}=bc_{1}^{w}\) and \(b^{c}=a^{1+2x}bc_{1}^{y}\). Set \(\pi\in\mbox{Aut}\,(X_{1}):a\to bc_{1}^{w},\,b\to a^{1+2x}bc_{1}^{y},\quad c_{1} \to c_{1}.\) We need to carry out the following eight steps: (i) \(\mbox{o}(\pi(b))=2\): \[(a^{1+2x}bc^{3y})^{2}=ba^{-(1+2x)}c^{3y}a^{1+2x}bc^{3y}=c^{3y(tv+1)}a^{2r^{y}(u \sum_{l=1}^{ty}r^{l}+xr^{y}-x-s\sum_{l=1}^{y}r^{l})}=1,\] that is \[y(tv+1)\equiv 0({\rm mod}\ \frac{m}{3})\quad{\rm and}\quad u\sum_{l=1}^{ty}r^{l}+ xr^{y}-x-s\sum_{l=1}^{y}r^{l}\equiv 0({\rm mod}\ \frac{n}{2}), \tag{87}\] which implies \(r^{2y}\equiv r^{y(tv+1)}\equiv 1({\rm mod}\ \frac{n}{2})\); (ii) o\((\pi(ab))=2\): \[\begin{array}{rcl}1&=&(c^{3(vw+yt)}a^{2ur^{y}\sum_{l=1}^{w}r^{l}-2r^{y}x+2s \sum_{l=1}^{y}r^{l}-1})^{2}\\ &=&c^{3(vw+yt+tvw+y)}a^{2((r^{w+y}+1)(ur^{y}\sum_{l=1}^{w}r^{l}-r^{y}x+s\sum_{ l=1}^{y}r^{l})+s\sum_{l=1}^{vw+yt}r^{l}-1)},\end{array}\] that is \[\begin{array}{l}vw+yt+tvw+y\equiv 0({\rm mod}\ \frac{m}{3});\\ (r^{w+y}+1)(ur^{y}\sum_{l=1}^{w}r^{l}-r^{y}x+s\sum_{l=1}^{y}r^{l})+s\sum_{l=1} ^{vw+yt}r^{l}\equiv 1({\rm mod}\ \frac{n}{2}),\end{array} \tag{88}\] which implies \(r^{2w}\equiv 1({\rm mod}\ \frac{n}{2})\); (iii) o\((\pi(a))=n\): \[(bc^{3w})^{n}=(bc^{3w}bc^{3w})^{\frac{n}{2}}=(c^{3w(v+1)}a^{2u\sum_{l=w+1}^{2 w}r^{l}})^{\frac{n}{2}}=c^{3\frac{n}{2}w(v+1)}a^{nur^{w}\sum_{l=1}^{w}r^{l}}=1,\] that is \[\frac{n}{2}w(v+1)\equiv 0({\rm mod}\ \frac{m}{3}). 
\tag{89}\] (iv) \(\pi\) preserves \((a^{2})^{c_{1}}=a^{2r}\): \[\begin{array}{l}((a^{2})^{c^{3}})^{c}=(c^{3w(v+1)}a^{2u\sum_{l=w+1}^{2w}r^{ l}})^{c^{3}}=c^{3w(v+1)}a^{2ur\sum_{l=w+1}^{2w}r^{l}},\\ (a^{2r})^{c}=(c^{3w(v+1)}a^{2u\sum_{l=w+1}^{2w}r^{l}})^{r}=c^{3wr(v+1)}a^{2ur \sum_{l=w+1}^{2w}r^{l}},\end{array}\] that is \[\begin{array}{l}w(v+1)(r-1)\equiv 0({\rm mod}\ \frac{m}{3}).\end{array} \tag{90}\] (v) \(\pi\) preserves \(c_{1}^{a}=a^{2s}c_{1}^{t}\): \[\begin{array}{l}((c^{3})^{a})^{c}=(c^{3})^{bc^{3w}}=(a^{2u}c^{3v})^{c^{3w}}= c^{3v}a^{2ur^{w+1}},\\ (a^{2s}c^{3t})^{c}=(c^{3w(v+1)}a^{2u\sum_{l=w+1}^{2w}r^{l}})^{s}c^{3t}=c^{3ws( v+1)+3v}a^{2sur^{w+1}\sum_{l=1}^{w}r^{l}},\end{array}\] that is \[\begin{array}{l}v\equiv ws(v+1)+t({\rm mod}\ \frac{m}{3})\quad{\rm and}\quad u \equiv su\sum_{l=1}^{w}r^{l}({\rm mod}\ \frac{n}{2}).\end{array} \tag{91}\] (vi) \(\pi\) preserves \(c_{1}^{b}=a^{2u}c_{1}^{v}\): \[\begin{array}{l}((c^{3})^{b})^{c}=(c^{3})^{a^{1+2x}b}=(c^{3}a^{2x(1-r)})^{ab}=c^ {3tv}a^{2(u\sum_{l=1}^{t}r^{l}-sr+x(r-1))},\\ (a^{2u}c^{3v})^{c}=(c^{3w(v+1)}a^{2u\sum_{l=w+1}^{2w}r^{l}})^{u}c^{3v}=c^{3wu(v +1)+3v}a^{2u^{2}r\sum_{l=w+1}^{2w}r^{l}},\end{array}\] that is \[\begin{array}{l}t\equiv wu(v+1)+1({\rm mod}\ \frac{m}{3});\\ u\sum_{l=1}^{t}r^{l}-sr+x(r-1)\equiv u^{2}r^{w+1}\sum_{l=1}^{w}r^{l}({\rm mod }\ \frac{n}{2}).\end{array} \tag{92}\] (vii) \(\pi^{3}={\rm Inn}(c^{3})\): Recall \({\rm Inn}(c_{1})(a)=a^{1-2sr}c_{1}^{1-t}\), \({\rm Inn}(c_{1})(a^{2})=a^{2r}\) and \({\rm Inn}(c_{1})(b)=c_{1}^{v-1}a^{2ur}b\). \[\begin{array}{l}a^{1-2sr}c^{3-3t}={\rm Inn}(c_{1})(a)=\pi^{3}(a)=(a^{1+2x} bc^{3(w+y)})^{c}\\ a^{-1}c^{3vt(w+wxv+wx)+3(w+2y)}a^{2rw(u\sum_{l=1}^{t(w+wxv+wx)}r^{l}-s\sum_{l=1 }^{w+wxv+wx}r^{l}-ux\sum_{l=w+1}^{2w}r^{l}-x)},\end{array}\] that is \[\begin{array}{l}1-t\equiv vt(w+wxv+wx)+w+2y({\rm mod}\ \frac{m}{3})\\ 1-sr\equiv r^{w}(u\sum_{l=1}^{t(w+wxv+wx)}r^{l}-s\sum_{l=1}^{w+wxv+wx}r^{l}-ux \sum_{l=w+1}^{2w}r^{l}-x)({\rm mod}\ \frac{n}{2});\end{array} \tag{93}\] \[a^{2r}={\rm Inn}(c_{1})(a^{2})=\pi^{3}(a^{2})=(a^{2})^{c^{3}}=c^{3w(v+1)+3uw^ {2}(v+1)(uw+1)}a^{2rw(u\sum_{l=1}^{w}r^{l})^{3}},\] that is \[w(v+1)+uw^{2}(v+1)(uw+1)\equiv 0({\rm mod}\ \frac{m}{3})\quad{\rm and}\quad r \equiv r^{w}(u\sum_{l=1}^{w}r^{l})^{3}({\rm mod}\ \frac{n}{2}), \tag{94}\] which implies \((u,\frac{n}{2})=(\sum_{l=1}^{w}r^{l},\frac{n}{2})=1\) as \((r,\frac{n}{2})=1\), and \[c^{3(v-1)}a^{2ur}b={\rm Inn}(c_{1})(b)=\pi^{3}(b)=(c^{3(w+t-1)}a^{2sr-1})^{c}= c^{3(t-1)+3wsr(v+1)}a^{2sur\sum_{l=1}^{w}r^{l}}b,\] that is \[v\equiv t+wsr(v+1)({\rm mod}\ \frac{m}{3})\quad{\rm and}\quad 1\equiv s\sum_{l=1 }^{w}r^{l}({\rm mod}\ \frac{n}{2}), \tag{95}\] which implies \((s,\frac{n}{2})=1\). (viii) Insure \(\langle c\rangle_{X}=1\): Since \(\langle c\rangle_{X}\leq M\), we get \(\langle c\rangle_{X}\leq M_{X}=\langle a^{2}\rangle\langle c^{3}\rangle\). Since \(\langle c\rangle_{X}=\cap_{x\in X}C^{x}=\cap_{x\in G}C^{x}=\cap_{x\in G}\langle c ^{3}\rangle^{x}=\langle c^{3}\rangle_{X_{1}}=1\) and \(u\sum_{l=1}^{i}r^{l}\equiv 0\equiv s\sum_{l=1}^{i}r^{l}({\rm mod}\ \frac{n}{2})\Leftrightarrow i \equiv 0({\rm mod}\ \frac{m}{3})\), noting \((s,\frac{n}{2})=(u,\frac{n}{2})=1\), we have \(\sum_{l=1}^{i}r^{l}\equiv 0({\rm mod}\ \frac{n}{2})\Leftrightarrow i \equiv 0({\rm mod}\ \frac{m}{3})\). Now we are ready to determine the parameters by summarizing Eq(85)-Eq(95). Since \((r,\frac{n}{2})=(u,\frac{n}{2})=1\) (after Eq(94)), we get from Eq(85) that \(\sum_{l=1}^{v-1}r^{l}\equiv 0(\mbox{mod }\frac{n}{2})\). 
By (viii), \(\sum_{l=1}^{i}r^{l}\equiv 0(\mbox{mod }\frac{n}{2})\Leftrightarrow i\equiv 0( \mbox{mod }\frac{m}{3})\), which means \(v\equiv 1(\mbox{mod }\frac{m}{3})\). Inserting \(v=1\) in Eq(85)-Eq(95), we get \(2w(wu+1)\equiv 0(\mbox{mod }\frac{m}{3})\) and \(v\equiv 1+2wu(\mbox{mod }\frac{m}{3})\) in Eq(88), (92) and (95), which implies \(2wu\equiv 0(\mbox{mod }\frac{m}{3})\). Then \(2w\equiv 0(\mbox{mod }\frac{m}{3})\) as \(nw\equiv 0(\mbox{mod }\frac{m}{3})\) and \((u,\frac{n}{2})=1\), which implies \(w=\frac{m}{6}\) as \(w\not\equiv 0(\mbox{mod }\frac{m}{3})\). Inserting \(w=\frac{m}{6}\) in Eq(85)-Eq(95) again, we get that \(r^{w}\equiv-1(\mbox{mod }\frac{n}{2})\) in Eq(85) and (95) and \(t\equiv 1(\mbox{mod }\frac{m}{3})\) in Eq(92). Since \(2y\equiv 0(\mbox{mod }\frac{m}{3})\) in Eq(87), we know \(y\) is either \(0\) or \(\frac{m}{6}\). If \(y=\frac{m}{6}=w\), then with Eq(87) and (88), we get \(s\equiv u(\mbox{mod }\frac{n}{2})\). Then \(2sr\equiv 1-r(\mbox{mod }\frac{n}{2})\) in Eq(85), which implies \((r-1,\frac{n}{2})=1\), and then \(r=1\), contradicting with \(r^{w}\equiv-1(\mbox{mod }\frac{n}{2})\). So \(y=0\). By Eq(92), we get \(2x\equiv u\sum_{l=1}^{w}r^{l}+(u\sum_{l=1}^{w}r^{l})^{2}-1(\mbox{mod }\frac{n}{2})\), which implies \(\frac{n}{2}\) is odd, then \(s\equiv\frac{r^{-1}-1}{2}(\mbox{mod }\frac{n}{2})\) in Eq(85). And by Eq(94), we get \(-r\equiv(u\sum_{l=1}^{w}r^{l})^{3}(\mbox{mod }\frac{n}{2})\). Take \(l=-\frac{2ru}{1-r}\), then \(r=l^{3}\), \(u=\frac{l(r-1)}{2r}\) and \(1+2x\equiv-l+l^{2}(\mbox{mod }\frac{n}{2})\). Since \(1+2x\) is odd, we get \(1+2x\equiv-l+l^{2}+\frac{n}{2}(\mbox{mod }n)\). \(\Box\) In fact, if we add the conditions \(t=1\) and \(w\neq 0\) and delete \(\langle c\rangle_{X}=1\) in the above calculation, then we can get the following: **Lemma 6.4**: _With the notation, suppose that \(t=1\) and \(w\neq 0\). Then_ \[X=\langle a,b,c|R,a^{c^{3}}=a^{r},(c^{3})^{b}=a^{2u}c^{3},a^{c}=bc^{\frac{m}{2 }},b^{c}=a^{x}b\rangle,\] _where \(n\equiv 2(\mbox{mod }4)\), \(m\equiv 0(\mbox{mod }6)\), \(l^{\frac{m}{2}}\equiv-1(\mbox{mod }\frac{n}{2})\), \(r=l^{3}\), \(u=\frac{l^{3}-1}{2l^{2}}\) and \(x\equiv-l+l^{2}+\frac{n}{2}(\mbox{mod }n)\)._ ### \(M=\langle a^{4}\rangle\langle c\rangle\), \(X/M_{X}\cong S_{4}\) and \(\langle c\rangle_{X}=1\) **Lemma 6.5**: _Suppose that \(X=X(D)\), \(M=\langle a^{4}\rangle\langle c\rangle,\,X/M_{X}\cong S_{4}\) and \(\langle c\rangle_{X}=1\). Then \(X=\langle a,b,c|R,(a^{2})^{c^{3}}=a^{2r},(c^{3})^{b}=a^{\frac{2(l^{3}-1)}{l^{2 }}}c^{3},(a^{2})^{c}=bc^{\frac{im}{2}},b^{c}=a^{2(-l+l^{2}+\frac{n}{4})}b,c^{a }=a^{2+4z}c^{1+\frac{km}{3}}\rangle,\) where either \(i=z=0\) and \(k=l=1\); or \(i=1\), \(n\equiv 4(\mbox{mod }8),\,m\equiv 0(\mbox{mod }6)\), \(l^{\frac{m}{2}}\equiv-1(\mbox{mod }\frac{n}{4})\) with \(\mbox{o}(l)=m\), \(r=l^{3}\), \(z=\frac{1-3l}{4l}\), \(k\in\{1,2\}\) and \(\sum_{i=1}^{j}r^{i}\equiv 0(\mbox{mod }\frac{n}{2})\Leftrightarrow j\equiv 0(\mbox{mod }\frac{m}{3})\)._ **Proof** Under the hypothesis, \(M_{X}=\langle a^{4}\rangle\rtimes\langle c^{3}\rangle\). Set \(n=\mbox{o}(a)\) and \(m=\mbox{o}(c)\). Then \(4\mid n\) and \(3\mid m\). 
Since \(X/M_{X}=\langle\overline{a},\overline{b}\rangle\langle\overline{c}\rangle \cong S_{4}\), we can choose \(\overline{b}\) such that the form of \(X/M_{X}\) is the following: \((\overline{a}^{2})^{\overline{c}}=\overline{b}\), \(\overline{b}^{\overline{c}}=\overline{a}^{2}\overline{b}\) and \((\overline{c})^{\overline{a}}=\overline{a}^{2}\overline{c}^{2}.\) Take \(a_{1}=a^{2}\) and \(c_{1}=c^{3}\). Then we set \(a_{1}^{c}=bc_{1}^{w},b^{c}=a_{1}^{x}bc_{1}^{y},c^{a}=a_{1}^{1+2z}c^{2+3d}\), where \(x\) is odd. Suppose \(w\equiv 0(\mbox{mod }\frac{m}{3})\). Note \(\mbox{o}(a^{2})=\mbox{o}((a^{2})^{c})=\mbox{o}(b)=2\), then \(X\cong S_{4}\). Then \(l=1,i=z=d=0\) as desired. So in what follows, we assume \(w\not\equiv 0(\mbox{mod }\frac{m}{3})\). Then consider \(X_{1}=GM_{X}=\langle a,b\rangle\langle c_{1}\rangle\). Noting \(\langle a\rangle\langle c_{1}\rangle\leq X_{1}\) and \(\langle c_{1}\rangle_{X_{1}}=1\), by Lemma 4.2, we know \(\langle a_{1}\rangle\lhd X_{1}\), which implies that \(c_{1}\) normalises \(\langle a_{1}\rangle\). Take \(\langle a_{1},b\rangle\langle c\rangle\). Then we get \(X_{2}=(\langle a_{1},b\rangle\langle c_{1}\rangle).\langle c\rangle\). Note that \(c_{1}\) normalises \(\langle a_{1}\rangle\) in \(X_{2}\). Then by Lemma 6.4, we get \[X=\langle a_{1},b,c|R,(a_{1}^{2})^{c}=a_{1}^{2r},c_{1}^{a_{1}}=a_{1}^{2s}c_{1},c_{1}^{b}=a_{1}^{2u}c_{1},a_{1}^{c}=bc^{\frac{m}{2}},b^{c}=a^{x}b\rangle,\] where \[\begin{array}{l}n\equiv 4({\rm mod}\ 8),\,m\equiv 0({\rm mod}\ 6),\,v=1,\,w= \frac{m}{6},\,y=0,\\ r=l^{3},\,s=\frac{1-l^{3}}{2l^{3}},\,u=\frac{l^{3}-1}{2l^{2}},\,1+2x=-l+l^{2} +\frac{n}{4},\,l^{\frac{m}{2}}\equiv-1({\rm mod}\ \frac{n}{4}).\end{array} \tag{96}\] Note \(X=X_{2}.\langle a\rangle\). So \(X\) may be defined by \(R\) and \[(a_{1}^{2})^{c}=a_{1}^{2r},c_{1}^{a_{1}}=a_{1}^{2s}c_{1},c_{1}^{b}=a_{1}^{2u}c _{1},a_{1}^{c}=bc^{\frac{m}{2}},b^{c}=a_{1}^{x}b,c^{a}=a_{1}^{1+2z}c^{2+3d}. \tag{97}\] What we should to determine the parameters \(r,z\) and \(d\) by analyse the last one extension. \(X_{2}.\langle a\rangle\), where \(c^{a}=a^{2+4z}c^{2+3d}\). Set \(\pi\in{\rm Aut}\,(X_{1}):a^{2}\to a^{2}\), \(b\to a^{-2}b\) and \(c\to a^{2+4z}c^{2+3d}\). We need to check the following seven equalities: (i) \(\pi\) preserves \((a^{2})^{c^{3}}=a^{2r}\): \[a^{2r}=((a^{2})^{c^{3}})^{a}=(a^{2})^{c^{3(2+3d)}}=a^{2r^{2+3d}},\] that is \[r^{1+3d}-1\equiv 0({\rm mod}\ \frac{n}{2}). \tag{98}\] (ii) \(\pi\) preserves \((c^{3})^{b}=a^{\frac{2(l^{3}-1)}{l^{2}}}c^{3}\): \[\begin{array}{rcl}((c^{3})^{b})^{a}&=&(c^{3(2+3d)}a^{2r(1+2z)+4r^{1+2d}zl+r^ {d}(2lr^{1+d}+2l^{2}+\frac{n}{2}+4l^{2}z)})ba^{2}\\ &=&c^{3(2+3d)}a^{2l(l^{3}-1)+2r(1+2z)+4r^{1+2d}zl+r^{d}(2lr^{1+d}+2l^{2}+ \frac{n}{2}+4l^{2}z)},\end{array}\] that is \[1-r\equiv 2r(1+2z)+4r^{1+2d}zl+r^{d}(2lr^{1+d}+2l^{2}+4l^{2}z)({\rm mod}\ \frac{n}{2}). 
\tag{99}\] (iii) \(\pi\) preserves \((a^{2})^{c}=bc^{\frac{m}{2}}\): \[\begin{array}{rcl}((a^{2})^{c})^{a}&=&(a^{2})^{c^{2+3d}}=(a^{2(-l+l^{2}+ \frac{n}{4})}bc^{\frac{m}{2}})^{c^{3d}}=a^{2r^{d}(l^{2}+\frac{n}{4})-2l}bc^{ \frac{m}{2}}\\ (bc^{\frac{m}{2}})^{a}&=&ba^{2}(c^{3(2+3d)}a^{2r(1+2z)+4r^{1+2d}zl+r^{d}(2lr^ {1+d}+2l^{2}+\frac{n}{2}+4l^{2}z)})\frac{m}{6}\\ &=&a^{\frac{2}{1-r}(2r(1+2z)+4r^{1+2d}zl+r^{d}(2lr^{1+d}+2l^{2}+\frac{n}{2}+ 4l^{2}z))-2}bc^{\frac{m}{2}},\end{array}\] that is, \[\begin{array}{rcl}\frac{2}{1-r}(2r(1+2z)+4r^{1+2d}zl+r^{d}(2lr^{1+d}+2l^{2} +\frac{n}{2}+4l^{2}z))-2\equiv\\ 2r^{d}(l^{2}+\frac{n}{4})-2l({\rm mod}\ n).\end{array} \tag{100}\] With Eq(99), we get \[r^{d}l\equiv 1({\rm mod}\ \frac{n}{4}).\] (iv) \(\pi\) preserves \(b^{c}=a^{2(-l+l^{2}+\frac{n}{4})}b\): \[(b^{c})^{a}=(ba^{2})^{a^{2+4z}c^{2}}=a^{2l^{2}-8zl-8l+\frac{n}{2}}b\quad{\rm and} \quad(a^{2(-l+l^{2}+\frac{n}{4})}b)^{a}=a^{2(-l+l^{2})-2+\frac{n}{2}}b,\] that is \[z\equiv\frac{1-3l}{4l}({\rm mod}\ \frac{n}{4}). \tag{101}\] (v) \({\rm o}(\pi(c))=m\): \[1=(a^{2+4z}c^{2+3d})^{m}=c^{m(2+3d)}a^{\sum_{i=0}^{\frac{m}{3}-1}r^{i}(2r(1+2z )+4r^{1+2d}zl+r^{d}(2lr^{1+d}+2l^{2}+\frac{n}{2}+4l^{2}z))}.\] Note that \(r^{\frac{m}{6}}\equiv-1({\rm mod}\ \frac{n}{4})\). Then \(\sum_{i=0}^{\frac{m}{3}-1}r^{i}\equiv 0({\rm mod}\ \frac{n}{4})\). Note that \(2r+\frac{n}{2}+2r^{d}l^{2}(l^{2+3d}+1)\equiv 0({\rm mod}\ 4)\) which implies \(4|(2r(1+2z)+4r^{1+2d}zl+r^{d}(2lr^{1+d}+2l^{2}+\frac{n}{2}+4l^{2}z))\). Then \[\sum_{i=0}^{\frac{m}{3}-1}r^{i}(2r(1+2z)+4r^{1+2d}zl+r^{d}(2lr^{1+d}+2l^{2}+ \frac{n}{2}+4l^{2}z))\equiv 0({\rm mod}\ n),\] as desired. (vi) \(\pi^{2}={\rm Inn}(a^{2})\): Recall \((a^{2})^{c^{3}}=a^{2r}\) and \((a^{2})^{c}=bc^{\frac{m}{2}}\), then we know \({\rm Inn}(a^{2})(c)=c^{1+\frac{m}{2}}a^{-2}b\) and \({\rm Inn}(a^{2})(c^{3})=c^{3}a^{2-2r}\). \[c^{1+\frac{m}{2}}ba^{2}={\rm Inn}(a^{2})(c)=\pi^{2}(c)=(a^{2+4z}c^{2+3d})^{a} =c^{1+3(1+d)(1+3d)+\frac{m}{2}}ba^{2+l(1-r^{d}l)+\frac{n}{2}\sum_{i=0}^{d}r^{ i}},\] that is, \[\begin{array}{l}(1+d)(1+3d)\equiv 0({\rm mod}\ \frac{m}{3}),\\ 0\equiv l(1-r^{d}l)+\sum_{i=0}^{d}r^{i}\frac{n}{2}({\rm mod}\ n);\end{array} \tag{102}\] and \[c^{3}a^{2-2r}={\rm Inn}(a^{2})(c^{3})=\pi^{2}(c^{3})=c^{3(2+3d)^{2}}a^{(1+ \sum_{i=0}^{1+3d}r^{i})(1-2l^{2}-l^{3}+2r^{1+d}+\frac{n}{2})},\] that is, \[1\equiv(2+3d)^{2}({\rm mod}\ \frac{m}{3})\quad{\rm and}\quad 0\equiv(\sum_{i=0} ^{1+3d}r^{i}-1)(1-r)({\rm mod}\ n). \tag{103}\] (vii) Insure \(\langle c\rangle_{X}=1\): Since \(\langle c\rangle_{X}\leq M\), we get \(\langle c\rangle_{X}\leq M_{X}=\langle a^{4}\rangle\langle c^{3}\rangle\), which implies \(\langle c\rangle_{X}=\cap_{x\in X}C^{x}=\cap_{x\in G}C^{x}=\cap_{x\in G}\langle c ^{3}\rangle^{x}=\langle c^{3}\rangle_{X_{1}}\). Then it is suffer to insure \(\langle c^{3}\rangle_{X_{1}}=1\), where \(X_{1}=\langle a,b,c|R,(a^{2})^{c^{3}}=a^{2l^{3}},(c^{3})^{b}=a^{\frac{2(l^{3}-1 )}{l^{2}}}c^{3},(c^{3})^{a}=c^{3(2+3d)}a^{1-l^{3}+\frac{n}{2}}\rangle\). Then by Lemma 6.1, we get \(\sum_{i=1}^{j}r^{i}\equiv 0({\rm mod}\ \frac{n}{2})\Leftrightarrow j \equiv 0({\rm mod}\ \frac{m}{3})\). By Eq(98) and \(r^{\frac{m}{6}}\equiv-1({\rm mod}\ \frac{n}{4})\), we get \(\sum_{i=1}^{1+3d}r^{i}\equiv 0({\rm mod}\ \frac{n}{2})\). Then \(1+3d\equiv 0({\rm mod}\ \frac{m}{3})\). Then \(0\equiv l(1-r^{d}l)({\rm mod}\ n)\) with Eq(102). 
\(\Box\) ### \(M=\langle a^{3}\rangle\langle c\rangle\) and \(X/M_{X}\cong S_{4}\) **Lemma 6.6**: _Suppose that \(X=X(D)\), \(M=\langle a^{3}\rangle\langle c\rangle\), \(X/M_{X}\cong S_{4}\) and \(\langle c\rangle_{X}=1\). Then_ \[X=\langle a,b,c|R,a^{c^{4}}=a^{r},b^{c^{4}}=a^{1-r}b,(a^{3})^{c^{\frac{m}{4}}}= a^{-3},a^{c^{\frac{m}{4}}}=bc^{\frac{3m}{4}}\rangle, \tag{104}\] _where \(m\equiv 4(\mathrm{mod}\ 8)\) and \(r\) is of order \(\frac{m}{4}\) in \(\mathbb{Z}_{2n}^{*}\)._ In this case, \(M_{X}=\langle a^{3}\rangle\langle c^{4}\rangle\). Set \(a^{3}=a_{1}\) and \(c^{4}=c_{1}\) so that \(M_{X}=\langle a_{1}\rangle\langle c_{1}\rangle\). Set \(\mathrm{o}(a)=n\) and \(\mathrm{o}(c)=m\), where \(n\equiv 0(\mathrm{mod}\ 3)\) and \(m\equiv 0(\mathrm{mod}\ 4)\). Then in Lemma 6.7, we shall show \(\langle a_{1}\rangle\lhd X\) and in Lemma 6.9, we shall get the classification of \(X\). **Lemma 6.7**: \(\langle a_{1}\rangle\lhd X\)_._ **Proof** Let \(X_{1}=M_{X}G\). Since \(\langle a\rangle\langle c_{1}\rangle\leq X_{1}\) and \(\langle c_{1}\rangle_{X_{1}}=1\), the subgroup \(X_{1}\) has been given in Lemma 6.1: \[X_{1}=\langle a,b,c_{1}|\langle a,b\rangle,c_{1}^{\frac{m}{4}}=1,(a^{2})^{c_{1 }}=a^{2r},(c_{1})^{a}=a^{6s}c_{1}^{t},(c_{1})^{b}=a_{1}^{u}c_{1}^{v}\rangle, \tag{105}\] where * if \(t=1\), then \(v=1\), \(6s\sum_{l=1}^{\frac{m}{4}}r^{l}\equiv 2(1-6sr)-2r\equiv 0(\mathrm{mod}\ n)\) and \(6s\sum_{l=1}^{w}r^{l}\equiv 0\equiv 3u\sum_{l=1}^{w}(1-6sr)^{l}(\mathrm{mod}\ n)\) is if and only if \(w\equiv 0(\mathrm{mod}\ \frac{m}{4})\); * if \(t\neq 1\), then both \(n\) and \(u\) are even, and \(r,s,t,u,v\) are given by \[\begin{array}{l}r^{t-1}-1\equiv r^{v-1}-1\equiv\frac{3u}{2}(\sum_{l=0}^{v- 1}r^{l}-1)\equiv 0(\mathrm{mod}\ \frac{n}{2}),\\ 3s\sum_{l=1}^{t}r^{l}+3sr\equiv 3sr+3s\sum_{l=1}^{v}r^{l}-\frac{3u}{2}\sum_{l=1} ^{t}r^{l}+\frac{3ur}{2}\equiv 1-r(\mathrm{mod}\ \frac{n}{2}),\\ t^{2}-1\equiv v^{2}-1\equiv 0(\mathrm{mod}\ \frac{m}{4}),\\ 3s\sum_{l=1}^{i}r^{l}\equiv\frac{3u}{2}\sum_{l=1}^{i}r^{l}\equiv 0(\mathrm{mod} \ \frac{n}{2})\Leftrightarrow i\equiv 0(\mathrm{mod}\ \frac{m}{4}).\end{array}\] Now \(X=\langle X_{1},c\rangle\) and we need to write the relation of \(c\) with \(X_{1}\). Since \(X/M_{X}=\langle\overline{a},\overline{b}\rangle\rtimes\langle\overline{c}\rangle\cong S _{4}\), checked by Magma, under our condition, the only possibilities are the following: \[\overline{a}^{3}=\overline{c}^{4}=\overline{b}^{2}=1,\overline{a}^{\overline{ b}}=\overline{a}^{-1},\overline{a}^{\overline{c}}=\overline{a}^{\overline{t}} \overline{b}\overline{c}^{3}, \tag{106}\] where \(i\in\mathbb{Z}_{3}\). Observing Eq(105) and Eq(106), we may relabel \(a^{i}b\) by \(b\). Then in \(X\), Eq(106) corresponds to \[a^{3}=a_{1},b^{2}=1,c^{4}=c_{1},a^{c}=bc^{3+4w}. \tag{107}\] Moreover, the conjugacy of \(\overline{c}\) on \(M_{X}\) is needed: \[(a_{1})^{c}=a_{1}^{z}c_{1}^{d}. \tag{108}\] Then the group \(X\) is uniquely determined by Eq(105), Eq(107) and Eq(108). For the contrary, we assume \(d\neq 0\). Then we need to deal with two cases, according to the parameter \(t\) of \(X_{1}\). _Case 1: \(t=1\)._ In this case, \(v=1\) and \(2(1-6sr)-2r\equiv 0(\mbox{mod }n)\) by \(X_{1}\). Set \(r_{1}=1-6sr\). Then \(r_{1}\equiv 1(\mbox{mod }3)\), \(a^{c_{1}}=a^{r_{1}}\) and \(b^{c_{1}}=a_{1}^{ur_{1}}b\). By Eq(105), Eq(107) and Eq(108), one can check \(b^{c}=a_{1}^{x}bc^{2+4y}\) for some \(x\) and \(y\). 
Since \(c\) preserves \(a_{1}^{c_{1}}=a_{1}^{r_{1}}\), we get \[((a_{1})^{c_{1}})^{c}=a_{1}^{zr_{1}}c_{1}^{d}=c_{1}^{d}a_{1}^{zr_{1}^{1+d}} \mbox{and}\,(a_{1}^{r_{1}})^{c}=(a_{1}^{z}c_{1}^{d})^{r_{1}}=c_{1}^{dr_{1}}a_{ 1}^{3z\sum_{l=1}^{r_{1}}r_{1}^{dl}},\] which gives \[d\equiv dr_{1}(\mbox{mod }\frac{m}{4}). \tag{109}\] Since \(c\) preserves \(b^{c_{1}}=a_{1}^{ur_{1}}b\), we get \[(b^{c_{1}})^{c}=a^{2+3xr_{1}+3ur_{1}}bc^{4y}\quad\mbox{and}\quad(a_{1}^{ur_{1} }b)^{c}=c_{1}^{dur_{1}}a_{1}^{z\sum_{l=1}^{ur_{1}}r_{1}^{dl}}a^{2+3x}bc^{4y},\] which gives \[du\equiv 0(\mbox{mod }\frac{m}{4}). \tag{110}\] Since \(a_{1}^{c_{1}}=a_{1}^{c^{4}}=a_{1}^{x_{1}}c_{1}^{d(z^{3}+z^{2}+z+1)}\) for some \(x_{1}\), we get \[d(z^{3}+z^{2}+z+1)\equiv 0(\mbox{mod }\frac{m}{4}); \tag{111}\] By last equation of Eq(107), we get \(ac=cbc^{3+4w}\). Then \[a_{1}^{ac}=a_{1}^{z}c_{1}^{d}\,\mbox{and}\,a_{1}^{cbc^{3+4w}}=a_{1}^{x_{1}}c_{ 1}^{d-dz(z^{2}+z+1)},\] which gives \[dz(z^{2}+z+1)\equiv 0(\mbox{mod }\frac{m}{4}). \tag{112}\] With Eq(111), we know \(d\equiv 0(\mbox{mod }\frac{m}{4})\), a contradiction. _Case 2: \(t\neq 1\)._ Suppose that \(t\neq 1\). Then if \(\langle a_{1}^{2}\rangle\lhd X\), consider \(\overline{X}=X/\langle a_{1}^{2}\rangle\) and \(\langle\overline{c_{1}}\rangle\lhd\overline{X}\). Note \(\overline{C}\leq C_{\overline{X}}(\langle\overline{c_{1}}\rangle)=\overline{X}\), then \(t=1\), contradicting with \(t\neq 1\). So in what follows, we shall show \(\langle a_{1}^{2}\rangle\lhd X\). By Eq(105), both \(u\) and \(n\) are even and \(r\equiv 1(\mbox{mod }3)\). Then \(z\) is odd as \((a,\frac{n}{3})=1\). By Eq(105), Eq(107) and Eq(108), one can check \(b^{c}=a^{2+6x}bc^{4y}\) for some \(x\) and \(y\). Since \(c\) preserves \((a_{1}^{2})^{c_{1}}=a_{1}^{2r}\), we get \[((a_{1}^{2})^{c_{1}})^{c}=(a_{1}^{z}c_{1}^{d}a_{1}^{z}c_{1}^{d})^{c_{1}}=a_{1} ^{x_{1}}c_{1}^{d(t+1)},\,(a_{1}^{2r})^{c}=(a_{1}^{z}c_{1}^{d})^{2r}=a_{1}^{x_{ 2}}c_{1}^{d(t+1)r},\] which gives \[d(t+1)(r-1)\equiv 0(\mbox{mod }\frac{m}{4}). \tag{113}\] Since \(c\) preserves \((c_{1})^{b}=a_{1}^{u}c_{1}^{v}\), we get \[(c_{1}^{b})^{c}=c_{1}^{a^{2+6x}bc^{4y}}=a_{1}^{x_{3}}c_{1}^{v},\,(a_{1}^{u}c_{ 1}^{v})^{c}=(a_{1}^{z}c_{1}^{d})^{u}c_{1}^{v}=a_{1}^{x_{4}}c_{1}^{\frac{du(t+1 )}{2}+v},\] which gives \[\frac{du(t+1)}{2}\equiv 0(\mbox{mod }\frac{m}{4}). \tag{114}\] Since \(c\) preserves \(c_{1}^{a}=a_{1}^{2s}c_{1}^{t}\), which implies \(c_{1}^{bc^{2+4w}}=a_{1}^{2s}c_{1}^{t}\), we get \[c_{1}^{bc^{2+4w}}=(a_{1}^{u}c_{1}^{v})^{c^{2+4w}}=((a_{1}^{z}c_{1}^{d})^{z}c_{ 1}^{d})^{ur^{w}}c_{1}^{v}=a_{1}^{x}c_{1}^{v},\] which gives \[v\equiv t(\mbox{mod }\frac{m}{4}). \tag{115}\] By last equation of Eq(107) again, we get \(ac^{2}=cbc^{4(w+1)}\). Then \[(a_{1}^{2})^{ac^{2}}=a_{1}^{x}c_{1}^{d(t+1)(z+1)}\,\mbox{and}\,(a_{1}^{2})^{ cbc_{1}^{w+1}}=a_{1}^{x}c_{1}^{d(t+1)},\] which gives \[dz(t+1)\equiv 0(\mbox{mod }\frac{m}{4}). \tag{116}\] Since \(\mbox{o}(a_{1}^{c})=\frac{n}{3}\), we get \(\frac{dn(t+1)}{6}\equiv 0(\mbox{mod }\frac{m}{4})\). With \((\frac{n}{6},z)=1\) and Eq(116), we get \(d(t+1)\equiv 0(\mbox{mod }\frac{m}{4})\). Then \((a_{1}^{2})^{c}=(a_{1}^{z}c_{1}^{d})^{2}=a_{1}^{x}c_{1}^{d(t+1)}=a_{1}^{x}\), which implies \(\langle a_{1}^{2}\rangle\lhd X\), as desired. \(\Box\) **Lemma 6.8**: _With the notation, suppose that \(\langle a_{1}\rangle\lhd X\), \(\langle c\rangle_{X}=1\) and \(\langle c\rangle\) is \(2-\)group. 
Then \(\mbox{o}(c)=4\)._ **Proof** Consider \(\overline{X}=X/\langle a_{1}\rangle=\langle\overline{a},\overline{b}\rangle \langle\overline{c}\rangle\). Note \(\overline{M_{X}}=\langle\overline{c}^{4}\rangle\), Then one can check \(C_{\overline{X}}(\langle\overline{c}_{1}\rangle)=\overline{X}\), which implies \(\overline{X}\) is the central expansion of \(S_{4}\). Since the Schur multiplier of \(S_{4}\) is \(\mathbb{Z}_{2}\), we know \(\mathrm{o}(c)\) is either \(4\) or \(8\). For the contrary, we assume \(\mathrm{o}(c)=8\). Then consider \(X_{1}=\langle a,b\rangle\rtimes\langle c_{1}\rangle\), where \(c_{1}=c^{4}\) and \(\mathrm{o}(c_{1})=2\), again. By Lemma 6.1, we have \[X_{1}=\langle a,b,c_{1}\mid R,a^{c_{1}}=a^{r},b^{c_{1}}=a_{1}^{u}b\rangle,\] where \(r\equiv 1(\mathrm{mod}\ 3).\) Note that \(\langle a\rangle\leq\langle c\rangle_{X}(\langle a_{1}\rangle)\lhd X\). Thus \(\langle a,bc,c^{2}\rangle\leq\langle c\rangle_{X}(\langle a_{1}\rangle)\), which implies \(\langle a_{1}\rangle\times\langle c_{1}\rangle\lhd X\). One can check \(r=1(\mathrm{as}\ 3r\equiv 3(\mathrm{mod}\ \frac{n}{3})\) and \(1\equiv r^{2}=2r-1(\mathrm{mod}\ \frac{n}{3}))\). Then \(\langle a,c^{2}\rangle\leq\langle c\rangle_{X}(\langle a_{1}\rangle\times \langle c_{1}\rangle)\lhd X\), which implies \(bc\in\langle c\rangle_{X}(\langle a_{1}\rangle\times\langle c_{1}\rangle)\). So \(c_{1}^{b}=c_{1}\). Then \(c_{1}\in\langle c\rangle_{X}=1\), a contradiction. \(\Box\) **Lemma 6.9**: _The group \(X\) is given by Eq(104)._ **Proof** By lemma, \(\langle a_{1}\rangle\lhd X\), that is \((a_{1})^{c}=a_{1}^{z}\) by Eq(108). Since \(\langle a^{2}\rangle\lhd X_{1}\), we get \(\langle a\rangle\lhd X_{1}\) and so \(G\lhd X_{1}\), that is \(t=v=1\) in Eq(105). Then by Eq(105), (107) and (108), we can set \[X=\langle a,b,c\mid R,a^{c_{1}}=a^{r_{1}},b^{c_{1}}=a_{1}^{ur_{1}}b,(a_{1})^{ c}=a_{1}^{z},a^{c}=bc^{3+4w}\rangle,\] where \(r_{1}=1-6sr\), \(r_{1}^{\frac{m}{4}}-1\equiv 2(r_{1}-r)\equiv 0(\mathrm{mod}\ n)\) and \(r_{1}^{i}-1\equiv 0\equiv 3u\sum_{l=1}^{i}r_{1}^{l}(\mathrm{mod}\ n)\) is if and only if \(i\equiv 0(\mathrm{mod}\ \frac{m}{4})\). Set \(\langle c\rangle=\langle c_{2}\rangle\times\langle c_{3}\rangle\), where \(\langle c_{2}\rangle\) is \(2-\)group and \(\langle c_{3}\rangle\) is \(2^{\prime}-\)Hall subgroup of \(\langle c\rangle\). Then \(\langle c_{1}\rangle=\langle c_{2}^{4}\rangle\times\langle c_{3}\rangle\). And we shall show \(c_{2}^{4}=1\). Consider \(\overline{X}=X/\langle a_{1}\rangle=\langle\overline{a},\overline{b}\rangle \langle\overline{c}\rangle\). Then one can check \(C_{\overline{X}}(\langle\overline{c}_{1}\rangle)=\overline{X}\), which implies \(\langle\overline{c_{1}}\rangle\leq Z(\overline{X})\) and \(\overline{X}/\langle\overline{c_{1}}\rangle\cong S_{4}\). Note \(\langle\overline{c}_{3}\rangle\leq\langle\overline{c}_{3}\rangle(\langle \overline{c_{2}^{4}}\rangle\langle\overline{a}\rangle)\leq\overline{X}\) where \((|\overline{X}:\langle\overline{c}_{3}\rangle(\langle\overline{c}_{2}^{4} \rangle\langle\overline{a}\rangle)|,|\langle\overline{c}_{3}\rangle|)=1\), then by Proportion 2.5, \(\langle\overline{c}_{3}\rangle\) has a complement in \(\overline{X}\), which implies \(X=(\langle a,b\rangle\langle c_{2}\rangle)\rtimes\langle c_{3}\rangle.\) Consider \(X_{2}=\langle a,b\rangle\langle c_{2}\rangle\), where \(\langle c_{2}\rangle_{X_{2}}=1\) and \(\langle a_{1}\rangle\lhd X_{2}\). Then by Lemma 6.8, we get \(|\langle c_{2}\rangle|=4\), which implies \(\langle c_{1}\rangle=\langle c_{3}\rangle\). 
Then \[X=(\langle a,b\rangle\langle c_{2}\rangle)\rtimes\langle c_{1}\rangle.\] In what follows, we shall determine \(X\). In \(X_{2}=\langle a,b\rangle\langle c_{2}\rangle\), we know \(a^{c_{2}}=bc_{2}^{3}\) by Eq(107). Consider \(\langle a\rangle\leq C_{X_{2}}(\langle a_{1}\rangle)\lhd X_{2}\). Then \(C_{X_{2}}(\langle a_{1}\rangle)\) is either \(\langle a,bc_{2},c_{2}^{2}\rangle\) or \(X_{2}\). Suppose that \(\langle c\rangle_{X}(\langle a_{1}\rangle)=X\). Then we know \(a_{1}^{2}=1\) as \([a_{1},b]=1\), that is \(n=3\) or \(6\). Then one can check \[\begin{array}{l}X=X_{2}\cong S_{4}\text{, if }n=3;\\ X=X_{2}=\langle a,b,c|a^{6}=b^{2}=c^{4}=1,a^{b}=a^{-1},a^{c}=bc^{3},b^{c}=a^{2 }b\rangle\text{, if }n=6.\end{array}\] Suppose that \(\langle c\rangle_{X}(\langle a_{1}\rangle)=\langle a,bc_{2},c_{2}^{2}\rangle\). Then \(a_{1}=a_{1}^{bc_{2}}=(a_{1}^{-1})^{c_{2}}\), which implies \(z=-1\). In \(X_{1}=\langle a,b\rangle\rtimes\langle c_{1}\rangle\), we know \[X_{1}=\langle a,b,c_{1}\mid R,a^{c_{1}}=a^{r_{1}},b^{c_{1}}=a_{1}^{ur_{1}}b\rangle.\] Since \(c_{2}\) preserves \(a^{c_{1}}=a^{r_{1}}\), we get \[(bc_{2}^{3})^{c_{1}}=a_{1}^{ur_{1}}bc^{3}\,\text{and}\,(a^{r_{1}})^{c_{2}}=a_{1}^{ \frac{1-r_{1}}{3}}bc^{3},\] which gives \[ur_{1}\equiv\frac{1-r_{1}}{3}(\text{mod}\ \frac{n}{3}).\] Then \[X=\langle a,b,c\mid R,a^{c_{1}}=a^{r_{1}},b^{c_{1}}=a_{1}^{\frac{1-r_{1}}{3}}b,(a_{1})^{c_{2}}=a_{1}^{-1},a^{c_{2}}=bc_{2}^{3}\rangle,\] where \(c_{2}=c^{\frac{m}{4}}\) and \(\text{o}(r_{1})=\frac{m}{4}\).
2305.07524
Joint MR sequence optimization beats pure neural network approaches for spin-echo MRI super-resolution
Current MRI super-resolution (SR) methods only use existing contrasts acquired from typical clinical sequences as input for the neural network (NN). In turbo spin echo sequences (TSE) the sequence parameters can have a strong influence on the actual resolution of the acquired image and have consequently a considerable impact on the performance of the NN. We propose a known-operator learning approach to perform an end-to-end optimization of MR sequence and neural network parameters for SR-TSE. This MR-physics-informed training procedure jointly optimizes the radiofrequency pulse train of a proton density- (PD-) and T2-weighted TSE and a subsequently applied convolutional neural network to predict the corresponding PDw and T2w super-resolution TSE images. The found radiofrequency pulse train designs generate an optimal signal for the NN to perform the SR task. Our method generalizes from the simulation-based optimization to in vivo measurements and the acquired physics-informed SR images show higher correlation with a time-consuming segmented high-resolution TSE sequence compared to a pure network training approach.
Hoai Nam Dang, Vladimir Golkov, Thomas Wimmer, Daniel Cremers, Andreas Maier, Moritz Zaiss
2023-05-12T14:40:25Z
http://arxiv.org/abs/2305.07524v1
Joint MR sequence optimization beats pure neural network approaches for spin-echo MRI super-resolution ###### Abstract Current MRI super-resolution (SR) methods only use existing contrasts acquired from typical clinical sequences as input for the neural network (NN). In turbo spin echo sequences (TSE) the sequence parameters can have a strong influence on the actual resolution of the acquired image and have consequently a considerable impact on the performance of the NN. We propose a known-operator learning approach to perform an end-to-end optimization of MR sequence and neural network parameters for SR-TSE. This MR-physics-informed training procedure jointly optimizes the radiofrequency pulse train of a proton density- (PD-) and T2-weighted TSE and a subsequently applied convolutional neural network to predict the corresponding PDw and T2w super-resolution TSE images. The found radiofrequency pulse train designs generate an optimal signal for the NN to perform the SR task. Our method generalizes from the simulation-based optimization to in vivo measurements and the acquired physics-informed SR images show higher correlation with a time-consuming segmented high-resolution TSE sequence compared to a pure network training approach. Keywords: super-resolution, turbo spin echo, joint optimization ## 1 Introduction Magnetic resonance imaging plays an essential role in clinical diagnosis by acquiring the structural information of biological tissue. Spatial resolution is a crucial aspect in MRI for the precise evaluation of the acquired images. However, there is an inherent trade-off between the spatial resolution of the images and the time required for acquiring them [1]. In order to obtain high-resolution (HR) MR images, patients are required to remain still in the MR scanner for a long time, which leads to patient discomfort and inevitably introduces motion artifacts that again compromise image quality and actual resolution [2]. Since super-resolution (SR) can improve the image quality without changing the MRI hardware, this post-processing tool has been widely used to overcome the challenge of obtaining HR MRI scans [3]. Using model-based methods like interpolation algorithms [4] and iterative deblurring algorithms [5] or learning-based methods such as dictionary learning [6], SR achieved the restoration of fine structures and contours. In recent years, deep learning has become a mainstream approach for super-resolution imaging, and a number of neural network-based SR models were proposed [7]. Among the proposed model- or learning-based methods, convolutional neural networks (CNN) produce superior SR results with better clarity and fewer artifacts [8]. Super-resolution for MRI data has only recently been applied [9-12]. In [10] a CNN is proposed for cardiac MRI to estimate an end-to-end non-linear mapping between the upscaled low-resolution (LR) images and corresponding HR images to rebuild a HR 3D volume. In other work, motion compensation for the fetal brain was achieved by a CNN architecture [11] to solve the 3D reconstruction problems. SR MRI has also been applied to low-field MR brain imaging [12]. However, these existing methods only used single contrast MRI images and did not make full use of multi-contrast information. In the clinical routine, T1, T2 and PD weighted images are often acquired together for diagnosis with complementary information.
Although each weighted image highlights only certain types of tissues, they reflect the same anatomy, and can provide synergy when used in combination [8]. Fast imaging techniques like Turbo-Spin-Echo (TSE) [13] sequences can also be utilized to sample more data in given timeframe, thus allowing a higher resolution. However, due to the long echo-train duration the T2-decay is significant during the signal acquisition. This process acts as a voxel-T2-dependent k-space filter that lowers the actual resolution w.r.t the nominal resolution due to a broadening of the point-spread-function (PSF) [14]. However, by adjusting the refocusing radiofrequency (RF) pulses, the signal decay can be reduced during the TSE echo-train [15]. The RF pulse train strongly influence the signal dynamic in a highly complex fashion, as each RF pulse affects all future signal contributions. Current MRI super-resolution methods use contrasts acquired from typical clinical protocols as input for the neural network and disregard the influence of the MR sequence parameters for optimization. Using so-called known operator learning [16], we propose an approach that utilizes a MR physics model during the optimization to not only train a neural network for super-resolution, but also adapt the refocusing RF pulses to directly influence the PSF. This approach also allows the use of the uncorrupted theoretical contrast as ground truth, which is only available during the simulation. By using two different encoding schemes in our sequences, we gain additional information from the two different contrasts PD and T2 that are used as input for the CNN and both will have different PSFs, thus provide valuable information for the SR task. Both sequences are optimized jointly to allow generation of optimal contrasts for the SR task of the neural network. The main contribution and the novelty of our work is the end-to-end optimization of MR sequence and neural network parameters for super-resolution TSE. For this purpose, we use a fully differentiable Bloch simulation embedded in the forward propagation to jointly optimize the RF pulse train of proton density (PD) and T2 weighted TSE sequences and a subsequent applied convolutional neural network to predict the corresponding PDw and T2w super-resolution TSE images. The ground truth targets are directly generated by the simulation and represent the uncorrupted MR contrast. Our jointly optimized approach is compared to a network trained on a TSE with a 180\({}^{\circ}\) RF pulse train. The optimized sequences and networks are verified at the real scanner system by performing in vivo measurements of a healthy subject and compared to a highly segmented, high-resolution vendor-provided sequence. ## 2 Theoretical Background ### Image Degradation in TSE-Sequences. 
When there is no relaxation decay during the echo-train, the k-space signal obtained from a TSE pulse sequence \(S(k_{x},k_{y})\) yields the true spatial distribution of the theoretical transverse magnetization \(M_{\perp}(x,y)\) via the Fourier transform (FT): \[M_{\perp}(x,y)=\iint_{k_{x},k_{y}}S(k_{x},k_{y})\cdot e^{i(k_{x}x+k_{y}y)}\,dk_{x}\,dk_{y} \tag{1}\] When considering the signal relaxation behavior during acquisition, an additional filtering function in k-space, the Modulation Transfer Function (MTF) for each tissue type (tt), has to be applied: \[\widetilde{M}_{\perp}(x,y)=\sum\nolimits_{tt}\widetilde{M}_{\perp,tt}(x,y)=\sum\nolimits_{tt}\iint_{k_{x},k_{y}}S_{tt}(k_{x},k_{y})\cdot MTF_{tt}(k_{x},k_{y})\cdot e^{i(k_{x}x+k_{y}y)}\,dk_{x}\,dk_{y}=\sum\nolimits_{tt}\left[\iint_{k_{x},k_{y}}S_{tt}(k_{x},k_{y})\,e^{i(k_{x}x+k_{y}y)}\,dk_{x}\,dk_{y}\right]\ast\left[\iint_{k_{x},k_{y}}MTF_{tt}(k_{x},k_{y})\,e^{i(k_{x}x+k_{y}y)}\,dk_{x}\,dk_{y}\right]=\sum\nolimits_{tt}M_{\perp,tt}(x,y)\ast B_{tt}(x,y),\] where \(\ast\) denotes a convolution and \(B_{tt}(x,y)\) is a blur kernel equal to the Fourier-transformed MTF. For a single-shot 180\({}^{\circ}\) TSE sequence with constant RF pulses the filtering function in k-space can be described as: \[MTF_{tt}(k_{x},k_{y})=e^{-\frac{t(k_{x},k_{y})}{T2_{tt}}} \tag{2}\] which is unique for each tissue type with a different T2 value. Using variable RF pulses, the MTF can become more homogeneous across k-space and therefore reduce the width of the PSF. ## 3 Methods ### Sequences A single-shot 2D TSE sequence is used as the default sequence for our optimization. The acquisition time for the single-shot 2D TSE is 0.76 s at 1.56 mm in-plane resolution. Single-slice acquisition was used for all sequences. The refocusing RF pulses of the PDw TSE with TE=12 ms and the T2w TSE with TE=96 ms were optimized jointly. The PDw TSE sequence uses a centric phase-encoding reordering. For T2w imaging the centric phase-encoding reordering is shifted to have the central k-space line encoded at the given echo time TE, which for TE=96 ms is at the 8th echo. Other parameters were as follows: acquisition matrix of 128\(\times\)128, undersampling factor in phase: 2x, reconstructed with GRAPPA [17], FOV=200 mm\(\times\)200 mm, slice thickness of 8 mm and bandwidth of 133 Hz/pixel. For all sequences, the 90\({}^{\circ}\) excitation pulse was kept fixed. ### Simulation & Optimization All simulations and optimizations were performed in a fully differentiable Bloch simulation framework [18]. The framework generates MR sequences and the corresponding reconstruction automatically based on the target contrast of interest. The optimization is carried out in an MR scanner simulation environment mirroring the acquisition of a real MR scanner. The forward simulation consists of a chain of tensor-tensor multiplication operations, representing the Bloch equations, that are differentiable in all parameters and support an analytic derivative-driven nonlinear optimization. The entire process - MRI sequence, reconstruction, and evaluation - is modelled as one computational chain and is part of the forward and backward propagation during the optimization, as depicted in Figure 1. 
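To make the voxel-T2-dependent k-space filtering of Eq. (2) concrete, the following PyTorch sketch blurs two synthetic tissue components with their respective MTFs along the phase-encoding axis and sums them as in the derivation above; the phantom, T2 values, and echo spacing are illustrative placeholders and not the paper's EPG-based simulation.

```python
import torch

def t2_mtf_filter(tissue_img, t2, t_acq):
    """Blur one tissue component with the T2-decay MTF of Eq. (2).

    tissue_img : (Ny, Nx) magnetization of a single tissue type
    t2         : T2 of that tissue in seconds
    t_acq      : (Ny,) acquisition time of each k_y line in seconds
    """
    k = torch.fft.fftshift(torch.fft.fft2(tissue_img))   # image -> centered k-space
    mtf = torch.exp(-t_acq / t2).unsqueeze(1)             # MTF_tt(k_y), broadcast over k_x
    return torch.fft.ifft2(torch.fft.ifftshift(k * mtf)).real

# illustrative phantom: two tissue types with different T2
ny = nx = 128
yy, xx = torch.meshgrid(torch.linspace(-1, 1, ny), torch.linspace(-1, 1, nx), indexing="ij")
core = ((xx**2 + yy**2) < 0.15**2).float()                # long-T2 "CSF"-like core
ring = ((xx**2 + yy**2) < 0.60**2).float() - core         # short-T2 "white matter"-like ring

# centric phase-encoding reordering: the central k_y line is acquired first
esp = 12e-3                                                # echo spacing [s], illustrative
order = torch.argsort(torch.argsort(torch.abs(torch.arange(ny) - ny // 2)))
t_acq = order.float() * esp                                # t(k_y) for each line

blurred = t2_mtf_filter(ring, t2=0.08, t_acq=t_acq) + t2_mtf_filter(core, t2=2.0, t_acq=t_acq)
print(blurred.shape)                                       # torch.Size([128, 128])
```

Because the filter is built from differentiable tensor operations only, a flip-angle-dependent version of it can sit inside exactly the kind of end-to-end computational chain described above.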
The optimization problem is described by: \[\Psi^{*},\Theta^{*}=\operatorname*{argmin}_{\Psi,\Theta}\left(\sum_{i}\left\| M_{\lambda,i}-\operatorname*{NN}_{\Theta}\left(RECO\left(\operatorname{SCAN}_{\Psi}(P_{i}) \right)\right)\right\|_{p}\right), \tag{3}\] where \(\Psi\) are the optimized sequence parameters and \(\Theta\) the neural network parameters. For given tissue maps \(P_{i}\) for each Voxel \(i\) the Bloch simulation \(SCAN\) outputs the MR signal and is reconstructed by the algorithm \(RECO\). Signal simulations were performed by a fully differentiable extension of the Extended Phase Graph (EPG) [19] formalism. The simulation is done with PyTorch [20] complex-valued datatype and outputs a complex-valued signal. The forward simulation outputs the TSE signal which is conventionally reconstructed to magnitude images, and in addition the corresponding contrast as ground truth target as given in equation [1]. For the SR network a CNN, DenseNet [21] was adapted, which receives the magnitude TSE images of PDw and T2w TSE as input. To prevent scaling discrepancy, the TSE images are normalized to have maximum value of 1 before applying the CNN for both cases in simulation and in vivo. The DenseNet consist of 4 Dense blocks (Convolution-\(>\)BatchNorm\(>\)PReLu\(>\)Concat) followed by an UpsampleBlock (bicubic upsampling-\(>\)Convolution-\(>\)BatchNorm\(>\)PReLu) and a final CNN Layer. Each convolution had a 3\(\times\)3 kernel size, except for the first layer with a kernel size of 7\(\times\)7. In total, the model had 174,706 trainable parameters. Of the sequence parameters \(\Psi\), the amplitude of the refocusing RF pulses of the TSE sequences and the NN parameters \(\Theta\) were optimized jointly in an end-to-end training procedure using the Adam optimizer [22] in PyTorch. We follow a known-operator approach [16], where the conventional reconstruction, including parallel imaging by means of GRAPPA, is fixed, but fully differentiable. The simulation is fully differentiable and all parameters except the refocusing RF pulses are fixed. The gradient update propagates back through the whole chain of differentiable operators. The complete RF pulse train and CNN are updated at each iteration step. The RF pulses are initialized with random values around 50\({}^{\circ}\) and standard deviation of 0.5\({}^{\circ}\). The training data consisted of synthetic brain samples based on the BrainWeb [23] database. The fuzzy model segments were filled with in vivo-like tissue parameters: Proton density PD values were taken from [24], T1 and T2 from [25], T2' was calculated from T2 and T2* values [26] and diffusion coefficient D was taken from [27]. B0 and B1 were assumed to be without inhomogeneities. In total, 19 subject volumes each consisting of 70 slices were used as training data and one separate subject volume as test dataset. The simulation uses coil sensitivity maps acquired at the MR system and calculated using ESPIRiT [28]. The optimizations were performed on an Intel Xeon E5-2650L with 256GB RAM. A full optimization on CPU took 4 days with memory consumption of 230GB RAM. The learning rate for the model parameters and sequence parameters were lr_model=0.001, lr_rf=0.01, respectively. Other hyperparameters of the optimization were: batch size = 1, n_epoch=10, damping factors of Adam (0.9, 0.999). ### Data acquisition at a real MR system After the optimization process, all sequences were exported using the Pulseq standard [29] and the populseq tool [30]. 
Pulseq files could then be interpreted on a real MRI scanner including all necessary safety checks, and were executed on a PRISMA 3T scanner (Siemens Healthineers, Erlangen, Germany) using a 20-channel head coil. Raw data were automatically sent back to the terminal and the same reconstruction pipeline used for the simulated data was used for measured images. As a high-resolution reference, a vendor-provided TSE sequence was acquired with the following parameters: 32-shot segmented, GRAPPA2, TE=12/96 ms, TR=12 s, FOV=200 mm\(\times\)200 mm, matrix of 256\(\times\)256, FA=180\({}^{\circ}\). All MRI scans were performed under approval of the local ethics board and after written informed consent was obtained. Measurements were performed on a healthy volunteer. Figure 1: Overview of the proposed processing pipeline: The MR signal of PDw and T2w TSE is simulated for a given RF pulse train; GRAPPA reconstruction and SR CNN are applied subsequently. The output is compared to the actual theoretical uncorrupted HR contrasts at TE\({}_{\text{eff}}\) and a gradient descent is performed to update refocusing FA and NN parameters, simultaneously. ### Reconstruction and Evaluation Signals of the TSE sequences were reordered and reconstructed with GRAPPA. The optimization was based solely on magnitude images. The structural similarity index measure (SSIM) [31] and the peak signal-to-noise ratio (PSNR) were calculated for the evaluation of simulation and in vivo measurements w.r.t. the simulated ground truth and the HR segmented in vivo measurement, respectively. The evaluation was performed in Matlab [32] with the built-in functions for SSIM and PSNR. ## 4 Results ### Qualitative Visual Results The original LR TSE image with the zero-filled image and the reconstructed SR image are compared to our optimized RF pulse train design and a conventional 180\({}^{\circ}\) RF pulse train TSE sequence for each contrast in Figure 2. The optimization process can be seen in Supporting Figure S1 and Supporting Animation S3. Starting from the initialized values, the RF pulses converge to the optimal RF pulse train, while the NN parameters are optimized simultaneously. The converged RF pulse state has been found to be independent of the initialization. The final optimized RF pulse design for the PDw and T2w TSE sequences is shown in Figure 2a. It can be observed that in all cases the SR image leads to an improvement over the LR TSE image by showing more clearly resolved borders between white and gray matter. The optimized RF pulse train further improves the nominal resolution, which can be observed by a clear increase of sharpness of the sulcus between Gyrus cinguli and Gyrus frontalis superior as indicated by the red arrows. The optimized sequence and CNN translate well to in vivo measurements, where similar improvements as seen in the simulated images can be observed (Figure 2d,e). ### Quantitative Metrics Results Table 1 and Table 2 report the quantitative metric scores of PSNR and SSIM for the images shown in Figure 2. The quantitative metrics agree with our visual observations and show that our end-to-end optimization approach performs better than SR based on existing conventional 180\({}^{\circ}\) TSE sequence data only. Compared to the acquisition time of the segmented reference sequence of 192.85 s, our optimized single-shot sequence only requires an acquisition time of 0.76 s. Thus, the SR performance can potentially be further increased by a multi-shot scan sacrificing a little more time. 
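For reference, the SSIM and PSNR evaluation used above can be reproduced in Python with scikit-image (the paper used MATLAB's built-in functions); the arrays below are random stand-ins for the reference and super-resolved magnitude images, normalized to a maximum of 1 as in the paper's preprocessing.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_sr(reference, prediction):
    """SSIM/PSNR of a super-resolved magnitude image against a reference."""
    ref = reference / reference.max()     # normalize magnitude images to max 1
    pred = prediction / prediction.max()
    return {
        "ssim": structural_similarity(ref, pred, data_range=1.0),
        "psnr": peak_signal_noise_ratio(ref, pred, data_range=1.0),
    }

# toy usage with random stand-ins for the HR reference and the network output
rng = np.random.default_rng(0)
hr = rng.random((256, 256))
sr = hr + 0.05 * rng.random((256, 256))
print(evaluate_sr(hr, sr))
```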
To find the best procedure multiple ablation studies were performed (Supporting Information). \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline & PSNR - PDw & SSIM - PDw & PSNR - T2w & SSIM – T2w \\ \hline LR – 180\({}^{\circ}\) & 10.4 & 0.39 & 11.6 & 0.40 \\ \hline ZF – 180\({}^{\circ}\) & 14.7 & 0.73 & 22.2 & 0.79 \\ \hline HR NN – 180\({}^{\circ}\) & 28.7 & 0.93 & 29.4 & 0.93 \\ \hline LR – opt. FA & 10.8 & 0.38 & 11.2 & 0.39 \\ \hline ZF – opt. FA & 17.5 & 0.75 & 23.6 & 0.80 \\ \hline \end{tabular} \end{table} Table 1: PSNR and SSIM for simulation in Figure 2 using the high-resolution GT as reference. Figure 2: (a) Optimized RF pulse train and phase encoding for both contrast (due to centric reordering, the central k-space line k_y=0 is acquired at repetition 0 and 7 for the PDw and T2w TSE, respectively). (b) Simulation of a static 180\({}^{\circ}\) RF pulse train and (c) optimized RF pulse train compared to the uncorrupted ground truth. (d) In vivo measurements of a static 180\({}^{\circ}\) RF pulse train and (e) optimized RF pulse train compared to the vendor’s TSE sequence is shown as a high-resolution reference. In both cases the improvement of the optimized RF pulses over the constant 180\({}^{\circ}\) RF pulse train can be observed by a better resolved border between white and gray matter as indicated by the red arrows. SSIM and PSNR values are shown in Table 1 and 2. ## 5 Discussion We demonstrated a new end-to-end learning process for TSE super-resolution by jointly optimizing refocusing RF pulse trains and neural network parameters. This approach utilizes a differentiable MR physics simulation embedded in the forward and backward propagation. The joint-optimization outperforms a pure neural network training. Although our approach is solely based on simulated data, the optimized sequence and trained CNN translate well to in vivo data. By using simulation-based training data, we are able to use the theoretical uncorrupted contrast as ground truth target. Apart from the expensive acquisition of HR in vivo data, real measured target data also have inherent drawbacks compared to their LR counterpart. Due to the longer scan time motion artifacts become more significant and to acquire the same contrast the bandwidth has to be increased, leading to a decrease of SNR [33]. However, we also admit the limitation of a simulation-based optimization, as the performance is bound to the accuracy of the model behind the simulation. We can observe in our results, that our approach is not able to resolve small vessel structures, as these are not existing in our synthetic brain database. Fortunately, the NN does not hallucinate details, when encountering these structures. Using real measured data to finetune the trained network could be a possible solution for this problem. Another way could be including uncertainty quantification layers [34] in the CNN to handle unknown structures. Our approach is compatible with any network architecture e.g. [35-37] to further improve the SR task. Furthermore, the training objective can be also extended to requirements on the MR sequence by including constraints in the loss function e.g. reduced RF pulse amplitudes for decrease of energy deposition SAR or increase of SNR. To conclude, we propose an end-to-end optimization of MR sequence and neural network parameters for TSE super-resolution. 
This flexible and general end-to-end approach benefits from a MR physics informed training procedure, allowing a simple target-based problem formulation, and outperforms pure neural network training. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline & PSNR - PDw & SSIM - PDw & PSNR - T2w & SSIM – T2w \\ \hline LR \(-180^{\circ}\) & 12.6 & 0.35 & 16.2 & 0.38 \\ \hline ZF \(-180^{\circ}\) & 14.1 & 0.59 & 22.5 & 0.70 \\ \hline HR NN – \(180^{\circ}\) & 23.2 & 0.84 & 26.2 & 0.89 \\ \hline LR – opt. FA & 13.2 & 0.37 & 15.9 & 0.38 \\ \hline ZF – opt. FA & 15.5 & 0.65 & 22.8 & 0.70 \\ \hline HR NN – opt. FA & **26.0** & **0.87** & **27.0** & **0.90** \\ \hline \end{tabular} \end{table} Table 2: PSNR and SSIM for in vivo measurements in Figure 2 using the segmented high-resolution TSE as reference.
2308.08833
CMB: A Comprehensive Medical Benchmark in Chinese
Large Language Models (LLMs) provide a possibility to make a great breakthrough in medicine. The establishment of a standardized medical benchmark becomes a fundamental cornerstone to measure progression. However, medical environments in different regions have their local characteristics, e.g., the ubiquity and significance of traditional Chinese medicine within China. Therefore, merely translating English-based medical evaluation may result in \textit{contextual incongruities} for a local region. To solve the issue, we propose a localized medical benchmark called CMB, a Comprehensive Medical Benchmark in Chinese, designed and rooted entirely within the native Chinese linguistic and cultural framework. While traditional Chinese medicine is integral to this evaluation, it does not constitute its entirety. Using this benchmark, we have evaluated several prominent large-scale LLMs, including ChatGPT, GPT-4, dedicated Chinese LLMs, and LLMs specialized in the medical domain. We hope this benchmark provides first-hand experience with existing LLMs for medicine and also facilitates the widespread adoption and enhancement of medical LLMs within China. Our data and code are publicly available at https://github.com/FreedomIntelligence/CMB.
Xidong Wang, Guiming Hardy Chen, Dingjie Song, Zhiyi Zhang, Zhihong Chen, Qingying Xiao, Feng Jiang, Jianquan Li, Xiang Wan, Benyou Wang, Haizhou Li
2023-08-17T07:51:23Z
http://arxiv.org/abs/2308.08833v2
# CMB: A Comprehensive Medical Benchmark in Chinese ###### Abstract Large Language Models (LLMs) provide a possibility to make a great breakthrough in medicine. The establishment of a standardized medical benchmark becomes a fundamental cornerstone to measure progression. However, medical environments in different regions have their local characteristics, e.g., the ubiquity and significance of traditional Chinese medicine within China. Therefore, merely translating English-based medical evaluation may result in _contextual incongruities_ to a local region. To solve the issue, we propose a localized medical benchmark called CMB, a Comprehensive Medical Benchmark in Chinese, designed and rooted entirely within the native Chinese linguistic and cultural framework. While traditional Chinese medicine is integral to this evaluation, it does not constitute its entirety. Using this benchmark, we have evaluated several prominent large-scale LLMs, including ChatGPT, GPT-4, dedicated Chinese LLMs, and LLMs specialized in the medical domain. It is worth noting that our benchmark is not devised as a leaderboard competition but as an instrument for self-assessment of model advancements. We hope this benchmark could facilitate the widespread adoption and enhancement of medical LLMs within China. Check details in [https://cmedbenchmark.llmzoo.com/](https://cmedbenchmark.llmzoo.com/). ## 1 Introduction Over the past two centuries, medical advancements have substantially increased human life expectancy. Medicine's effectiveness often hinges on experience, with veter physicians typically outperforming novices. In parallel, large language models like ChatGPT are shaped by their vast data experiences. This mutual reliance on experiential learning between physicians and LLMs suggests a promising frontier for the integration of LLMs into the medical domain. **Medical evaluation is highly professional.** Although the future of _LLMs for medicine_ is promising, their evaluation is a challenging topic. Deploying LLMs in hospitals raises significant ethical concerns that real-world feedback becomes difficult. Existing works on LLMs tend to leverage subjective evaluation (Zheng et al., 2023) where none of references is used during the assessment. However, the evaluation in medicine is much more professional than that of the general domain. For instance, assessing _radiology_-related issues poses a challenge for the public, a senior professor in medicine, or even a _general practitioner_. Subjective evaluation would be difficult to be scaled up since professional manual judging is expensive. **Benchmark for medical knowledge.** Another school of evaluation protocol is objective evaluation, where the expected output has a clear reference. Certain protocols emphasize natural language understanding tasks that are not knowledge-intensive, as seen in studies (Zhang et al., 2022; Peng et al., 2019). In the era of Large Language Models (LLM), modern NLP evaluations underscore the significance of knowledge (Huang et al., 2023; Hendrycks et al., 2021). In biomedicine, a typical example to probe knowledge is BioLAMA Sung et al. (2021); however, it is tailored to evaluate masked language models instead of auto-regressive ones. Another benchmark is MultiMedBench Tu et al. (2023), covering question answer, report summarization, visual question answering, report generation, and medical image classification. Note that MultiMedBench is only in English. 
**The necessity to localize medical benchmark.** During economic globalization, a unified medical standard may overlook the unique medical needs and practices of different regions and ethnic groups, indicating the necessity to localize medical benchmarks. For example, in Asia, Traditional Chinese Medicine (TCM) not only offers profound insights and localized medical solutions in the prevention, treatment, and rehabilitation of diseases but also has formed a medical paradigm closely associated with regional, climatic, dietary, and lifestyle characteristics, over its long historical evolution. Simultaneously, it poses significant challenges when applying the Western medical framework to a local environment, which needs cross-cultural communication and understanding. Therefore, we should adopt a _native_ medical benchmark instead of a _translated_ medical benchmark for a local environment. Note that the precise translation of medical terminologies necessitates both medical professions and the cultural context in the target language. **The philosophy to create CMB.** The CMB dataset as a whole includes multiple-choice questions in qualification examination (CMB-Exam) and complex clinical diagnostic questions based on actual case studies (CMB-Clin). Each multiple-choice question offers four to six options, and there is one or more correct answers. Clinical diagnostic questions are set based on actual and complex cases encountered in the teaching process, and the correct answer is determined by the consensus of teaching experts. The sources of existing medical benchmarks could be the internet Li et al. (2023), hospitals, etc. However, these data sources have either privacy or inaccuracy issues. First, we decide to leverage qualification examination as the data source, resulting in **CMB-Exam** subset. The merits of qualification examination are two bold: (I) the ground truth of qualification examination is objective and typically accurate; (II) there is clear anchor (i.e., 60% accuracy) that is aligned with a qualified expert in a specific domain. As shown in Figure 1, the multiple-choice questions cover four clinical medical professions: _physicians_, _nurses_, _medical technicians_, and _pharmacists_. The involved exams cover the whole professional career path, ranging from undergraduate medical basic knowledge exams, graduate selection exams, standardized exams, professional qualification exams, intermediate professional title exams, to advanced professional title exams. Figure 1: Components of the CMB dataset. Left: The structure of CMB-Exam, consisting of multiple-choice and multiple-answer questions. Right: an example of CMB-Clin. Each example consists of a description and a multi-turn conversation. Other than the exams in CMB-Exam that is related to _theoretical_ knowledge, the second subset of CMB (i.e., **CMB-Clin**) is more _practical_. CMB-Clin includes complex clinical diagnostic problems that evaluate the model's ability to synthesize knowledge and reasoning. On the one hand, the knowledge aspect implies the need for the model to draw upon its medical knowledge when answering questions. On the other hand, the reasoning facet necessitates the model's ability to analyze case reports, thus combining its own medical knowledge to respond to inquiries. We believe CMB-Exam and CMB-Clin are complementary in medicine, and both as a whole could be a complete evaluation protocol to not only the career of a medical doctor but also the learning path of a medical LLM. 
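To make the structure of the two subsets concrete, a hypothetical record layout is sketched below; the field names and values are ours for illustration and are not taken from the released files.

```python
# Hypothetical CMB-Exam item: a multiple-choice question with four to six
# options and one or more correct answers (field names are illustrative only).
exam_item = {
    "exam_type": "Physician",
    "exam_subject": "Licensed Physician",
    "question": "...",                                   # question stem
    "options": {"A": "...", "B": "...", "C": "...", "D": "..."},
    "answer": "AC",                                       # one or more correct options
}

# Hypothetical CMB-Clin case: a patient description plus a multi-turn dialogue,
# where each turn pairs a question with its expert-consensus solution.
clin_case = {
    "description": "...",                                 # history, chief complaint, exams
    "qa_pairs": [
        {"question": "...", "solution": "..."},
        {"question": "...", "solution": "..."},
    ],
}
```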
**Take-away messages from CMB.** After benchmarking various LLMs in CMB, we get the following observations that might be insightful. **I) GPT-4** exhibits significant superiority in the medical domain, with indigenous large-scale models also demonstrating commendable performance; **II)** Most specialized **medical models** still lag behind general models in performance, indicating ample room for improvement in the medical modeling field; **III)** Accuracy exhibits significant disparities across professional levels and knowledge areas, notably between **traditional Chinese medicine** and Western medicine; **IV)** The effectiveness of the **CoT and few-shot prompts** varies among models with different accuracy levels, especially presenting potential risks in knowledge-intensive tasks; and **V)** Results of automatic evaluation using GPT-4 highly agree with **expert evaluation** results. ## 2 Related work ### Medical Benchmark Medical benchmarks have evolved to broadly encompass two types of tasks based on the capabilities of the models they seek to probe: Objective tasks and Subjective tasks. The former typically assumes the form of multiple-choice questions (Welbl et al., 2018; Jin et al., 2020; Pal et al., 2022; Hendrycks et al., 2021; Singhal et al., 2022; Li et al., 2021; Abacha and Demner-Fushman, 2019), information retrieval (Abacha et al., 2017; Zhu et al., 2019; Abacha et al., 2019), and cloze-style reading comprehension (Suster and Daelemans, 2018; Pampari et al., 2018; Zhu et al., 2020), which serve to evaluate a model's medical knowledge with unbiased accuracy. Sources for these tasks range from medical textbooks and exams to case reports such as CliCR (Suster and Daelemans, 2018), Wikipedia like MedHop (Welbl et al., 2018), and medical practices exemplified by MMLU (Hendrycks et al., 2021) and MedMCQA (Pal et al., 2022). In contrast, subjective tasks involve open-ended text generation constructed directly from consumer queries and doctor responses, often sourced from online medical forums. The task typically demands models to generate consumer-oriented replies (Singhal et al., 2022; Li et al., 2023) or explanations for multiple-choice questions (Liu et al., 2023). As of now, there are relatively few open-ended text generation question-answering tasks that specifically center around providing consultation based on diagnostic reports. Few existing benchmark datasets encapsulate both task types, with MultiMedQA (Singhal et al., 2022) and CMExam (Liu et al., 2023) sharing the closest resemblance to our work. Differing from prior work, our dataset exceeds in size and includes questions not only from the Chinese National Medical Licensing Examination but also from various authoritative medical textbooks. Moreover, our subjective tasks deviate from the existing works, stemming from textbook examples requiring answers to diagnosis-related questions based on case reports, resembling real-life consultation scenarios. ### Other Benchmarks of Large Language Models The explosive growth in the number and capability of LLMs has led to a multitude of works aiming to discern their true capacity, evaluating both their general and specific abilities. General ability benchmarks include comprehensive test suites, each targeting different aspects of LLM's proficiency, ranging from handling multi-turn dialogues (Zheng et al., 2023) to gauging language comprehension and reasoning abilities (Srivastava et al., 2022; Zhang et al., 2023; Zhong et al., 2023). 
OpenLLM (Beeching et al., 2023) provides a public competition platform to compare and assess the performance of various LLM models across multiple tasks. In terms of specific abilities, several benchmarks, apart from those related to medicine, aim to evaluate different capabilities of models. ARB (Sawada et al., 2023) was introduced to assess LLMs' performance in high-level reasoning tasks across multiple domains. C-Eval Huang et al. (2023) serves as the first comprehensive benchmark to evaluate the advanced knowledge and reasoning abilities of Chinese-based models. M3Exam (Zhang et al., 2023b) provides a unique and comprehensive evaluation framework, combining various languages, modalities, and levels, to test the general abilities of Juris Master in different contexts. Gaokao (Zhang et al., 2023c), MATH (Hendrycks et al., 2021c), and APPS (Hendrycks et al., 2021a) focus on assessing LLM proficiency in complex, context-specific tasks, and code generation, respectively. ## 3 Dataset ### CMB-Exam: Comprehensive Medical Exams #### 3.1.1 Taxonomy To obtain a precise taxonomy of medical evaluation, we aligned it with the disciplinary and examination systems of the medical field. First, we chose four main medical professions: physicians, pharmacists, medical technicians, and nurses, covering various occupational difficulty levels of examinations. Considering the learning trajectories and professional growth paths, we additionally include _discipline examinations_ and _graduate entrance examinations_ for these four professions, ultimately resulting in six categories: Physician, Nurse, Technician, Pharmacist, Undergraduate Disciplines, and Graduate Entrance Exam. One could refer to Table 1 for the detailed taxonomy. Moreover, we carried out a more detailed subject division within each subcategory, resulting in a total of 174 categories, the detailed directory list of which can be found in Appendix A. Through this structured arrangement, our directory structure reflects characteristics closely connected to the actual medical field, providing a solid foundation for further analysis and research. #### 3.1.2 Data Collecting and Processing Data SourcesThe data used is derived from publicly available mock examination questions, course-work exercises, and summaries of commonly misunderstood examination questions. A significant portion of these materials comes from the Chinese Medical Question Database3, from which we obtained explicit permission to share the data. Footnote 3: [https://www.medtiku.com/](https://www.medtiku.com/) Manual VerificationThe data has various formats, with PDF and JSON being the most prevalent. For PDF documents, we first used Optical Character Recognition (OCR) to transform them into plain text. This text was then processed into structured formats and underwent manual verification to ensure both OCR accuracy and proper formatting. 
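The paper does not name its OCR tooling; as one common open-source possibility, a PDF can be rasterized and recognized as sketched below before the manual verification step (this assumes poppler plus a Tesseract installation with the chi_sim language pack; the file name is a placeholder).

```python
# Sketch of PDF -> plain text via OCR; output still needs to be parsed into
# structured question items and manually verified, as described above.
from pdf2image import convert_from_path
import pytesseract

def pdf_to_text(pdf_path: str) -> str:
    pages = convert_from_path(pdf_path, dpi=300)          # rasterize each PDF page
    texts = [pytesseract.image_to_string(p, lang="chi_sim") for p in pages]
    return "\n".join(texts)

raw_text = pdf_to_text("mock_exam.pdf")                   # placeholder file name
```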
Table 1: Statistics of the CMB-Exam Categories, Subcategories, Subjects, and Questions.

| Category | Subcategory | # Subjects | # Questions |
| --- | --- | --- | --- |
| Physician | Resident Physician; Licensed Assistant Physician; Licensed Physician; Associate Professional Physician; Advanced Professional Physician | 81 | 124,926 |
| Nurse | Practicing Nurse; Licensed Practical Nurse; Charge Nurse; Advanced Practice Nurse | 8 | 16,919 |
| Technicians | Medical Technician; Medical Technologist; Supervising Technologist | 21 | 27,004 |
| Pharmacist | Licensed Pharmacist; Licensed TCM Pharmacist; Junior Pharmacist; Junior Pharmacist Assistant; Junior TCM Pharmacist Assistant; Chief Pharmacist; Chief TCM Pharmacist | 8 | 33,354 |
| Undergraduate Disciplines | Fundamental Medicine; Clinical Medicine; Traditional Chinese Medicine (TCM) and Chinese Health Medicine; Preventive Medicine | 53 | 62,271 |
| Graduate Entrance Exam | Integrated Western Medicine; Integrated TCM; Political Science; Nursing | 5 | 16,365 |
| Total | | 176 | 280,839 |

We referenced the National Standard Subject Classification of the People's Republic of China, see [https://xkb.pku.edu.cn/docs/2018-10/222328083310968071.pdf](https://xkb.pku.edu.cn/docs/2018-10/222328083310968071.pdf).

**Data Preprocessing.** All questions underwent a standardized data preprocessing procedure, including de-duplication and cleansing. In instances where we were unable to verify the question quality from the source, we conducted manual validation to ensure the absence of grammatical errors. Additionally, with the aid of the comment system provided by the Chinese Medical Question Database, we enacted a rigorous selection and deletion process for the data, ensuring the accuracy of the knowledge embedded in the questions.

**Data Statistics.** Finally, we obtained a total of 280,839 multiple-choice questions. To assess the model's comprehension of medical knowledge, we randomly selected 400 questions from each subcategory as a test set. Additionally, to facilitate experimentation with few-shot learning strategies, we randomly selected 10 questions with explanations from each subcategory as a dev set. The remaining 269,359 questions were used as the train set. 
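A generic sketch of such a per-subcategory split is shown below; the paper only specifies the split sizes, so the sampling code, the `questions_by_subcategory` variable, and the omission of the explanation filter for the dev questions are our simplifications.

```python
import random

def split_subcategory(items, n_test=400, n_dev=10, seed=0):
    """Per-subcategory split for CMB-Exam: 400 test, 10 dev, remainder train.
    (Filtering dev candidates for questions that carry explanations is omitted.)"""
    rng = random.Random(seed)
    items = items[:]              # do not shuffle the caller's list in place
    rng.shuffle(items)
    test = items[:n_test]
    dev = items[n_test:n_test + n_dev]
    train = items[n_test + n_dev:]
    return test, dev, train

# splits = {sub: split_subcategory(qs) for sub, qs in questions_by_subcategory.items()}
```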
### CMB-Clin: Clinical Diagnostic Questions The QA dataset is based on 74 classical complex and real-world cases originating from textbooks, offering an opportunity to investigate models' proficiency in knowledge application amidst real-life diagnosis and treatment circumstances. A model's competence is gauged not merely by its mastery of medical knowledge but also by its ability to synthesize and apply this knowledge to solve real-world problems. #### 3.2.1 Task Formulation In our dataset, we simulate dialogue interactions between an examiner and a candidate, focusing on assessing the model's diagnostic and therapeutic capacities. The data is with 74 real consultation scenarios (or ailments), each consisting of a case instance with multiple questions, culminating in 208 questions in total. As shown in Figure 1, each case presents a patient description followed by interrelated, sequential questions. It includes three parts: **I) Description**\(D\): patient information, including medical history summaries and chief complaints, physical examinations such as visual and tactile inspection, ancillary examinations like biopsy and CT scans; **II) Questions**\(Q\): questions related to diagnosis and treatment based on descriptions. Some questions might be interrelated; and **III) Solutions**\(S\): corresponding solutions to questions. For instance, in the \(k\)-th conversation round, the input \(x\) is formed by concatenating the patient's description with previous question-answer pairs and the current question, represented as \(x=D_{i}+Q_{i}+S_{i}+\ldots Q_{i+k}\). The expected response is \(S_{i+k}\). ## 4 Experiments on CMB-Exam ### Experimental Setup ModelsWe evaluate the following Chinese medical LLMs to compare their performance on CMB-Exam: HuatuoGPT (Zhang et al., 2023), BianQue (Chen et al., 2023), ChatMed-Consult (Zhu and Wang, 2023), MedicalGPT (Xu, 2023), ChatGLM-Med (Wang et al., 2023), Bentsao (Wang et al., 2023), and DoctorGLM (Xiong et al., 2023). In addition to these specialized models, we also include two proprietary models (i.e., ChatGPT (gpt-3.5-turbo-16k-0613) and GPT-4 (gpt-4-0613) and \begin{table} \begin{tabular}{l c c c} \hline \hline Split & \#subcategory & \#Q per subcategory & \#Q in total \\ \hline Test & 28 & 400 & 11,200 \\ Dev & 28 & 10 1 & 280 \\ Train & 28 & -2 & 269,359 \\ \hline \hline \end{tabular} * It is with explanations in dev set. * Each subcategory has a different number of questions. \end{table} Table 2: Data split in CMB-Exam. two publicly-available general-domain instruction-following models (i.e., ChatGLM-24Du et al., 2022) and Baichuan-13B-Chat5). Please refer to Appendix B for more details. Footnote 4: [https://github.com/THUDM/ChatGLM2-6B](https://github.com/THUDM/ChatGLM2-6B) Footnote 5: [https://github.com/baichuan-inc/Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) Footnote 6: [https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig) Decoding HyperparametersFor all the aforementioned models (except for ChatGPT and GPT-4), we adopt their default hyper-parameters specified in transformers.GenerationConfig6. Besides, to reduce the variance in generation, we adopt greedy decoding for all models with min_new_tokens and max_new_tokens set to 1 and 512, respectively, to avoid empty or lengthy answers. Evaluation DetailsWe evaluate the models in both answer-only and chain-of-thought (CoT) settings. 
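A minimal sketch of this decoding setup with Hugging Face transformers, together with a toy stand-in for answer extraction and exact-match scoring, is given below; the model identifier, prompt, and regular expression are placeholders rather than the paper's exact harness (several of the evaluated models additionally require trust_remote_code and their own chat templates).

```python
import re
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "path-or-hub-id-of-the-evaluated-model"      # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "..."                                             # answer-only or CoT MCQ prompt
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    do_sample=False,        # greedy decoding to reduce variance in generation
    min_new_tokens=1,       # avoid empty answers
    max_new_tokens=512,     # avoid overly lengthy answers
)
completion = tokenizer.decode(
    output_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)

def extract_choice(text: str) -> str:
    """Toy stand-in for the empirically designed extraction pattern:
    collect the option letters mentioned in the model output."""
    letters = re.findall(r"[A-F]", text.upper())
    return "".join(sorted(set(letters)))

correct = extract_choice(completion) == "AC"               # exact match against the solution
```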
We extract answers from model outputs using an empirically designed regular expression. Each extracted answer is compared to the solution and is deemed correct if and only if they are exactly matched. We adopt accuracy as our metric. Footnote 6: [https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig) ### Benchmarking Results We report the zero-shot results in Table 3. There are several observations drawn from different aspects. On general LLMs.Among the generic LLMs, the performance of GPT-4 in medicine significantly surpasses that of other models, with a marginal cliff-like improvement of 20 percent. This impressive performance has contributed to our profound appreciation of the capabilities of this model. Simultaneously, two indigenous general-purpose models, ChatGLM2-6B and Baichuan-13B-chat, are closely trailing GPT-4. Notably, the ChatGLM2 model, with only 6B parameters, even outperforms ChatGPT, a testament to the rapid iterative capabilities of indigenous large-scale models and their excellence in specialized knowledge domains. On medical LLMs.Among the medical LLMs, there are some regrettable observations. In the medical field, the development of specialized models seems to be overshadowed by updates in general large-scale models. Specifically, we observe that the performance of BianQue-2 and DoctorGLM in the medical model domain was underwhelming. These two models, due to their lack of superior directive-following capabilities and input length limitations, struggled to fully understand the intent \begin{table} \begin{tabular}{l|c|c|c c c c c|c} \hline \hline **Model** & **Open** & **Physician** & **Numes** & **Pharmacost** & **Technical** & **Disciplines** & **Graduate** **Extance** & **Avg** \\ \hline \multicolumn{1}{c|}{_General Models_} & \multicolumn{1}{c|}{_} & \multicolumn{1}{c|}{_Grand Models_} & \multicolumn{1}{c|}{_} & \multicolumn{1}{c|}{_} & \multicolumn{1}{c|}{_} & \multicolumn{1}{c|}{_} & \multicolumn{1}{c|}{_} & \multicolumn{1}{c|}{_} \\ \hline GPT4 & ✗ & **5.90 (59.90)** & **69.31 (09.31)** & **52.9 (15.29)** & **51.6 (51.60)** & **59.69 (59.69)** & **54.19 (54.19)** & **59.46 (59.46)** \\ \hline ChatGLM2-6B & ✓ & 4.02 (40.22) & 48.50 (48.50) & 40.34 (40.38) & 38.67 (38.67) & 37.19 (37.25) & 33.37 (33.43) & 39.71 (39.74) \\ + CoT & ✓ & 4.02 (41.13) & 47.56 (48.37) & 36.60 (36.76) & 36.58 (37.17) & 35.56 (36.31) & 35.06 (35.68) & 38.51 (99.23) \\ \hline ChatGLM & ✗ & 4.05 (47.05) & 4.59 (45.69) & 35.69 (45.69) & 40.08 (40.08) & 37.94 (37.94) & 28.81 (28.81) & 38.31 (38.33) \\ + CoT & ✓ & 17.75 (47.15) & 19.94 (19.96) & 16.60 (16.00) & 20.25 (20.25) & 19.29 (19.25) & 16.19 (16.19) & 18.23 (18.23) \\ \hline Baichuan-13B-chat & ✓ & 34.80 (37.16) & 41.25 (42.11) & 35.41 (36.91) & 35.17 (36.20) & 31.81 (36.39) & 27.65 (29.03) & 34.33 (36.30) \\ + CoT & 37.70 (99.92) & 44.75 (46.25) & 41.24 (22.42) & 34.67 (65.2) & 37.94 (98.7) & 32.94 (33.99) & 38.20 (39.79) \\ \hline \multicolumn{1}{c|}{_Medical Models_} & \multicolumn{1}{c|}{_Medical Models_} & \multicolumn{1}{c|}{_} & \multicolumn{1}{c|}{_} & \multicolumn{1}{c|}{_} & \multicolumn{1}{c|}{_} & \multicolumn{1}{c|}{_} & \multicolumn{1}{c|}{_} \\ \hline \hline HuangGPT (\(\%\)E) & ✓ & 29.10 (29.58) & 33.66 (34.26) & 27.41 (28.75) & 30.98 (31.47) & 29.44 (0.13) & 25.06 (25.79) & 29.19 (30.00) \\ + CoT & ✓ & 29.90 (30.32) & 34.00 (34.17) & 29.06 (29.35) & 30.92 (31.08) & 27.38 
(27.64) & 25.69 (26.05) & 29.49 (29.77) \\ \hline \multicolumn{1}{c|}{MedicalAPIT} & ✓ & 24.60 (25.66) & 30.94 (09.04) & 24.72 (24.84) & 27.17 (23.54) & 25.44 (25.62) & 25.01 (26.46) & 26.03 (26.15) \\ + CoT & ✓ & 24.80 (25.61) & 27.19 (27.98) & 29.20 (24.07) & 24.85 (26.00) & 23.75 (24.77) & 21.06 (21.79) & 24.08 (25.04) \\ \hline ChatGLM2-Cnn & ✓ & 20.20 (21.41) & 22.31 (23.48) & 20.59 (21.58) & 22.67 (23.55) & 20.38 (21.36) & 17.44 (18.08) & 20.60 (21.58) \\ + CoT & ✓ & 19.40 (20.52) & 21.69 (23.56) & 20.00 (21.65) & 22.83 (23.59) & 18.88 (20.44) & 18.56 (19.55) & 20.23 (21.62) \\ \hline ChatGLM-M & ✓ & 21.75 (23.59) & 220.66 (23.27) & 21.84 (22.76) & 21.00 (21.28) & 18.44 (19.27) & 17.50 (18.14) & 20.43 (21.56) \\ + CoT & ✓ & 15.55 (208.9) & 16.25 (22.13) & 17.34 (20.6) & 16.33 (20.65) & 12.63 (71.22) & 12.56 (16.88) & 15.11 (19.79) \\ \hline Bestnas (\(\%\)E) & 21.55 (21.67) & 19.94 (19.99) & 20.94 (27.07) & 22.75 (22.88) & 19.56 (19.83) & 16.81 (16.93) & 20.26 (20.39) \\ + CoT & ✓ & 21.00 (21.10) & 20.66 (20.61) & 20.66 (20.66) & 22.76 (22.72) & 19.25 (19.53) & 16.44 (15.50) & 20.01 (20.13) \\ \hline \multicolumn{1}{c|}{BianQue-2 (\(\overline{\text{BB}}\))} & ✓ & 4.90 (10.95) & 4.19 (10.94) & 4.28 (20.36) & 3.58 (18.11) & 3.31 (16.27) & 3.25 (18.63) & 3.92 (18.87) \\ + CoT & ✓ & 7.85 (19.62) & 6.63 (19.31) & 7.34 (20.75) & 8.33 (20.47) & 6.63 (18.11) & 5.94 (15.03) & 7.12 (18.88) \\ \hline DoctorGLM & ✓ & 2.70 (16.51) & 3.31 (26.36) & 3.84 (20.86) & 3.75 (18.07) & 3.19 (22.99) & 2.25 (18.02) & 3.17 (20.47) \\ + CoT & ✓ & 3.15 (205.05) & 3.13 (26.72) & 3.41 (21.21) of the questions, thereby failing to provide accurate answers. This deficiency resulted in their lower scores in the overall evaluation. **In different categories.** LLMs show varied performance across clinical specialties. Specifically, scores for pharmacist-related questions tend to be lower, while those concerning nursing staff are typically higher. This difference might arise from the foundational knowledge nurses require, which is straightforward, compared to the intricate distinctions in drug names and indications pharmacists deal with. Despite these performance variations among specialties, the models exhibit a consistent trend, suggesting no inherent bias towards any particular domain. These findings are pivotal for our ongoing research and optimization efforts. ### Analysis #### 4.3.1 Do few-shot prompting and CoT help? ProtocolTo investigate the effects of the few-shot prompting and CoT strategies, we perform the three-shot and CoT experiments on CMB-Exam, with the results reported in Appendix C.1. ResultsThe study reveals that the efficacy of both the few-shot approach and the **CoT** strategy in evaluated LLMs largely depends on the model capacities. The CoT strategy, contrary to expectations, often doesn't boost accuracy, especially in knowledge-dense tasks (e.g., medical MCQs in CMB-Exam). It might unintentionally confuse models with irrelevant context, hindering their reasoning. For the **few-shot prompting**, its effectiveness is predominantly evident in situations where the model already demonstrates relatively strong accuracy (e.g., accuracy above 25%). In weaker models, the few-shot prompting can unintentionally harm the results. This can be attributed to two primary factors: first, some models might struggle with processing extensive text; and second, others may need additional refinement to better follow in-context examples. 
#### 4.3.2 On the Perceived Difficulty ProtocolThere is a sequential career track for Physician, Nurse, Technicians, Pharmacist in China. For example, the career track of a Physician includes Resident Physician, Licensed Assistant Physician, Licensed Physician, Associate Professional Physician, and Advanced Professional Physicians, professional difficulty of which is from low to high. We aims to examine _whether the difficulty degrees perceived by LLMs and humans are consistent_. Specifically, we denote the average zero-shot accuracy of the top five LLMs as the indicator of _perceived difficulty degree from LLMs_; the lower, the more difficult. ResultsAs depicted in Figure 2, the y-axis showcases rising professional levels with the type of examination. The accuracy rates for physicians and nursing models decrease as professional levels increase, except for the residency qualification examination, suggesting it tests nuanced clinical knowledge distinctions 7. Conversely, medical technicians exhibit the opposite trend, with head technician examination accuracy being the highest. This is likely due to its focus on personnel management and communication, which does not fall in medical profession and could be learned from the Figure 2: Accuracy across various clinical medicine fields at different career stages. The accuracies are the Zero-shot average values for TOP-5 models using direct response strategy. massive amount of general corpora. While pharmacist exam results vary, models targeting traditional Chinese medicine consistently score lower than those on Western pharmacology, highlighting the need for specialized models in the Chinese medical domain. ## 5 Experiments on CMB-Clin ### Experimental Setup Prompt constructionEvery prompt comprises two components: a description that may (or may not) encompass conversation history \(D_{i}\), and the question \(Q_{i}\). To integrate the conversation history into the description, we prepend the appropriate roles to each question and solution when working with chat LLMs (all models except MedicalGPT). For non-chat LLMs, specifically MedicalGPT, we prefix "[i for a flip for GPT-4 and ChatGPT (dashed and solid brown lines are parallel, except for a flip at GPT-4 and ChatGPT). Figure 4 shows the linear correlation between automatic evaluations and expert evaluations averaged over three experts and all aspects. All four evaluated aspects show positively correlated trends between expert and GPT-4 evaluation (See Appendix C.2.3). The overall Pearson correlation (Figure 4) is 0.84. The two correlations indicate that the automatic evaluation is highly aligned with expert evaluation. #### 5.3.2 Consistent results with CMB-Exam We compute the spearman correlation between the obtained rankings of CMB-Exam and CMB-Clin, yielding a correlation of 0.89 with a two-tailed p-value of \(2.3e-4\). This suggests a high consistency between the evaluation results on the two datasets. However, it is worth noting that this observation is not due to an equivalence of the evaluated abilities between CMB-Exam and CMB-Clin. We attribute the consistency of results to the speculation that, currently most models are trained for injecting knowledge without hurting their conversation ability. We hope that after being supervised-finetuned on CMB-Exam training set, which consists of enormous multiple-choice questions, a model can still achieve decent scores on CMB-Clin. 
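This kind of agreement analysis can be reproduced with SciPy; the rankings and scores below are placeholders rather than the paper's data.

```python
from scipy.stats import pearsonr, spearmanr

# Placeholder values, NOT the paper's data: one entry per evaluated model.
cmb_exam_rank = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
cmb_clin_rank = [2, 1, 3, 4, 6, 5, 7, 8, 9, 11, 10]
rho, p = spearmanr(cmb_exam_rank, cmb_clin_rank)      # ranking consistency (reported: 0.89)

gpt4_avg   = [4.5, 4.4, 4.3, 4.0, 3.9, 3.7, 3.5, 3.0, 2.7, 2.6, 2.4]
expert_avg = [4.4, 4.5, 4.2, 4.1, 3.8, 3.6, 3.3, 3.1, 2.8, 2.5, 2.3]
r, _ = pearsonr(gpt4_avg, expert_avg)                 # agreement with experts (reported: 0.84)
print(f"Spearman rho = {rho:.2f}, Pearson r = {r:.2f}")
```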
This objective aligns with our expectation of a doctor: we hope that a doctor is sufficiently informed with medical knowledge and is able to converse with a patient. \begin{table} \begin{tabular}{l|c c c c|c} \hline \hline **Model** & **Fluency** & **Relevance** & **Completeness** & **Proficiency** & **Avg.** \\ \hline GPT-4 & **4.97** & **4.53** & 4.12 & **4.45** & **4.52** \\ ChatGPT & 4.96 & 4.47 & **4.17** & 4.42 & 4.51 \\ Baichuan-13B-chat & 4.96 & 4.19 & 3.97 & 4.23 & 4.34 \\ ChatGLM2-6B & 4.86 & 3.76 & 3.51 & 4.00 & 4.03 \\ HuutuoGPT & 4.89 & 3.75 & 3.38 & 3.86 & 3.97 \\ BianQue-2 & 4.86 & 3.52 & 3.02 & 3.60 & 3.75 \\ ChatMed-Consult & 4.88 & 3.08 & 2.67 & 3.30 & 3.48 \\ MedicalGPT & 4.48 & 2.64 & 2.19 & 2.89 & 3.05 \\ DoctorGLM & 4.74 & 2.00 & 1.65 & 2.30 & 2.67 \\ Bentsao & 3.88 & 2.05 & 1.71 & 2.58 & 2.55 \\ ChatGLM-Med & 3.55 & 1.97 & 1.61 & 2.37 & 2.38 \\ \hline \hline \end{tabular} \end{table} Table 4: Results of _automatic_ evaluation using GPT-4 on CMB-Clin. _Avg._ represents the average scores of each model across all aspects. Models are displayed in descending order of _Avg._ in the original table. Figure 3: Rankings by perspective and model. Dashed lines and solid lines are the resulted rankings from expert and GPT-4 evaluation, respectively. For visual clarity, each line is shifted vertically for a small value. A model is better if it has a smaller ranking (a higher position) on the vertical axis. #### 5.3.3 Effects of Decoding Hyper-parameters Figure 5 demonstrates the result under different decoding temperatures. The overall performance drops when the temperature increases from 0 to 1.5. This might be due to the fact that a higher temperature leads to more randomized (diversified) outputs, which is not desired in medicine where precise and definite contents are preferred. However, we find that pairwise spearman correlations under different temperatures are all above 0.87 (See Appendix C.2.4), meaning that the resulted rankings of models are robust to temperature change. This reveals the importance of aligning different temperatures when comparing performance across models. ## 6 Conclusion In conclusion, while LLMs have potential in the realm of medicine, their accurate evaluation remains pivotal for real-world applications. The introduction of the CMB benchmark, tailored to the local cultural environment in China, gives a more contextualized and comprehensive evaluation benchmark. Although not framed as a competitive leaderboard, it serves as a crucial tool for tracking LLM progress in medical domains, particularly within China. This might pave the way for the broader and more effective utilization of LLMs in China's medical landscape. ## Ethical Statement The permission to release the dataThe data utilized in this study primarily originate from publicly accessible mock examination questions, coursework exercises, and summations of commonly misunderstood examination questions. A portion of these items are sourced from the Chinese Medical Question Database8, from whom we received explicit permission and support to include their questions in our evaluation. Footnote 8: [https://www.medtiku.com/](https://www.medtiku.com/) The privacy issueWe have removed all personal information in our benchmark.
2304.06513
Passive Radio Frequency-based 3D Indoor Positioning System via Ensemble Learning
Passive radio frequency (PRF)-based indoor positioning systems (IPS) have attracted researchers' attention due to their low price, easy and customizable configuration, and non-invasive design. This paper proposes a PRF-based three-dimensional (3D) indoor positioning system (PIPS), which is able to use signals of opportunity (SoOP) for positioning and also capture a scenario signature. PIPS passively monitors SoOPs containing scenario signatures through a single receiver. Moreover, PIPS leverages the Dynamic Data Driven Applications System (DDDAS) framework to devise and customize the sampling frequency, enabling the system to use the most impacted frequency band as the rated frequency band. Various regression methods within three ensemble learning strategies are used to train and predict the receiver position. The PRF spectrum of 60 positions is collected in the experimental scenario, and three criteria are applied to evaluate the performance of PIPS. Experimental results show that the proposed PIPS possesses the advantages of high accuracy, configurability, and robustness.
Liangqi Yuan, Houlin Chen, Robert Ewing, Jia Li
2023-03-25T21:13:00Z
http://arxiv.org/abs/2304.06513v1
# Passive Radio Frequency-based 3D Indoor Positioning System via Ensemble Learning+ ###### Abstract Passive radio frequency (PRF)-based indoor positioning systems (IPS) have attracted researchers' attention due to their low price, easy and customizable configuration, and non-invasive design. This paper proposes a PRF-based three-dimensional (3D) indoor positioning system (PIPS), which is able to use signals of opportunity (SoOP) for positioning and also capture a scenario signature. PIPS passively monitors SoOPs containing scenario signatures through a single receiver. Moreover, PIPS leverages the Dynamic Data Driven Applications System (DDDAS) framework to devise and customize the sampling frequency, enabling the system to use the most impacted frequency band as the rated frequency band. Various regression methods within three ensemble learning strategies are used to train and predict the receiver position. The PRF spectrum of 60 positions is collected in the experimental scenario, and three criteria are applied to evaluate the performance of PIPS. Experimental results show that the proposed PIPS possesses the advantages of high accuracy, configurability, and robustness. Keywords:Indoor positioning system Passive radio frequency Signal of opportunity Ensemble learning Machine learning. ## 1 Introduction Signals of opportunity (SoOP) for implementing indoor positioning system (IPS) has shown progress in recent years [1, 2]. SoOP refers to some non-task signals that are used to achieve specified tasks, such as Wi-Fi, cellular network, broadcasting, and other communication signals for positioning tasks. These communication signals have different frequencies according to different functions. For example, the frequency of broadcast signals is tens to hundreds of MHz, and the frequency of Wi-Fi can reach 5GHz. Each SoOP has different performances for different tasks, which will be affected by the local base stations, experiment scenarios, and task settings. SoOP aim to facilitate high-precision positioning in GPS-shielded environments while avoiding the need for additional signal sources. However, how to use a single receiver for positioning in an environment where the signal source is unknown is still an open problem. Therefore, a passive radio frequency (PRF) system is proposed to integrate these communication signals due to the design of a customizable frequency band. Finding the frequency band most impacted for positioning is the most significant prior. In addition, PRF can capture scenario signatures, including liquids, metal objects, house structures, etc., which has been proven to further improve the performance of a positioning system. Dynamic Data Driven Applications System (DDDAS) frameworks have already shown their application prospects, such as in the fields of environmental science, biosensing, autonomous driving, etc. The application of the DDDAS framework to these domains varies, depending on the input variables and output decisions of the system. Table 1 shows some examples of instantaneous and long-term DDDAS. Currently, most of DDDAS are emphasized to instantaneous DDDAS, which require us to react immediately to dynamic data input. For example, hurricane forecasting is an instantaneous DDDAS, and if it doesn't react in time, there will be some serious consequences. But long-term DDDAS also has its benefits, and there are no serious consequences for not responding immediately, such as an energy analysis DDDAS is used to save consumption. 
The advantage of long-term DDDAS is that dynamic data input can effectively reduce consumption, improve accuracy, and enhance robustness. Due to the uncertainty of SoOPs and scenario signatures, IPSs need to conform to the DDDAS paradigm [14, 15]. For the PRF positioning system, the selection of the frequency band is a dynamic issue, determined by the scenario signature. Therefore, the computational feedback in DDDAS is required to reconfigure the sensor for frequency band selection. Selecting a subset of frequency bands from the full band can effectively save sampling time and computing resources and increase robustness [16]. Moreover, the customizable frequency band can be used in a variety of tasks, such as human monitoring, navigation, and house structure detection [17, 18, 19]. Therefore, PRF-based systems under the DDDAS framework need to dynamically optimize the frequency parameters according to their usage scenarios and task settings to obtain higher adaptability, accuracy, and robustness.

\begin{table} \begin{tabular}{|c|c|} \hline Instantaneous & Long-Term \\ \hline Weather forecasting [3] & Energy analysis [9] \\ Atmospheric contaminants [4] & Materials Analysis [10] \\ Wildfires detection [5] & Identification of biomarkers in DNA methylation [11] \\ Autonomous driving [6] & Multimedia content analysis [12] \\ Fly-by-feel aerospace vehicle [7] & Image processing [13] \\ Biohealth outbreak [8] & Our proposed positioning system \\ \hline \end{tabular} \end{table} Table 1: Instantaneous DDDAS vs. Long-Term DDDAS.

Ensemble learning is used as the strategy for the positioning regression task due to its ability to integrate the strengths of multiple algorithms [20, 21]. Ensemble learning includes three strategies, namely boosting, bagging, and stacking [22], depending on whether the base estimators are combined serially or in parallel [23]. The boosting strategy is a serial strategy in which each posterior estimator learns from the samples that the prior estimator got wrong, which reduces the bias of the model. However, this strategy overemphasizes the wrong samples and thus may lead to larger variance and weaker generalization ability. Both bagging and stacking are parallel structures, which reduce the variance and enhance the generalization ability. Whereas the bagging strategy uses averaging as the final estimator, stacking uses a regressor as the final estimator; compared to linear or weighted averaging, the stacked model can further reduce bias by learning from the decisions of the base estimators. This paper proposes a PRF-based 3D IPS, named PIPS, for the positioning regression task. Within the DDDAS framework, the performance of the PIPS system is enhanced by adaptive frequency band selection, which reuses the most impacted frequency band found in previous work [16]. PRF spectrum data were collected at 60 gridded positions in the scenario, and the resulting positioning data set is trained with three ensemble learning strategies. Root mean square error (RMSE) is used to evaluate the accuracy of PIPS, the coefficient of determination \(R^{2}\) is used to evaluate the reliability, and the 95% confidence error (CE) is used to evaluate the optimality. Experiments demonstrate that the proposed PIPS exhibits its potential for accurate object locating tasks. This paper is organized as follows. Section II illustrates the details and sensor settings of the proposed PIPS. The experimental setup and results are shown in Section III.
Section IV provides a discussion of the advantages of PIPS under the DDDAS framework, prior to the conclusion and future work presented in Section V.

## 2 Frequency-adaptive PIPS

PIPS achieves sensing by passively receiving the PRF spectrum in the scenario. Software-defined radio (SDR) is used to control the PRF sensor for data collection, including the frequency band \(\mathbb{B}\), step size \(\Delta\), and sampling rate \(R_{s}\). A reasonable selection of the PRF sensor parameters in PIPS is crucial. The diagram of frequency band selection by PIPS under the DDDAS framework is shown in Fig. 1. The DDDAS framework is used to reconfigure the parameters of the PRF sensor, which is achieved through the SDR. The parameters of the PRF sensor, especially the center frequency, are dynamically reconfigured to adapt to the signatures of different scenarios. Given the initial parameters \(\mathbb{B}\), \(\Delta\), and \(R_{s}\), the PRF sensor collects a data set \(D\in\mathbb{R}^{n\times m}\) with a corresponding position label set \(C\in\mathbb{R}^{n\times 3}\). \(D\) is the PRF spectrum, that is, the average powers collected over the frequency band. Although it is feasible to use the average power over the full frequency band as the feature vector for positioning, doing so greatly increases the sampling time. Therefore, it is necessary to optimize the initial parameters \(\mathbb{B}\), \(\Delta\), and \(R_{s}\) under the DDDAS framework. The proposed PIPS system can be defined by the following function \(f:C\to D\), \[d=f(c;\mathbb{B},\Delta,R_{s}), \tag{1}\] where \(d\) and \(c\) are a pair of samples in \(D\) and \(C\), representing a corresponding pair of PRF spectrum and coordinate. Eq. 1 describes the collection of PRF data at the corresponding coordinates given the parameters. By training the ensemble learning model on the collected data and making predictions, the estimated coordinates can be obtained: \[\hat{c}=f^{-1}(d;\mathbb{B},\Delta,R_{s}). \tag{2}\] The PRF spectral data collected by the PRF sensor in the experimental scenario contain the SoOPs and the signature of the experimental scenario. The PRF sensor in PIPS is reconfigured after the adaptive band selection algorithm is used to find the most impacted band for the positioning task. After the \(k\)-th optimization, the data set collected under the optimized parameters \(\mathbb{B}_{k}\), \(\Delta_{k}\), and \(R_{s_{k}}\) is defined as \(D_{k}\in\mathbb{R}^{n\times m_{k}}\). Dynamic reconfiguration may be performed once or multiple times, depending on the properties of the SoOPs in the scenario, including the received signal strength (RSS), center frequency, and integration of multiple signal sources. The need for dynamic configuration mainly arises when changing between different scenarios, tasks, and SoOPs. When the SoOP remains unchanged, the dynamic configuration is only needed once to find the optimized parameters, which reduces the waste of computing resources while keeping the system applicable. The band \(\mathbb{B}_{1}(\text{MHz})\in\{91.2,93.6,96.0,98.4,100.8\}\) found in previous work is used in the experiments for a preliminary validation of the proposed PIPS. The PRF sensor used in our experiment is the RTL-SDR RTL2832U, because it is cheap and easy to configure, as shown in Fig. 2.

## 3 Experiment and Results

This section is organized as follows.
In the experimental scenario, spectrum data are collected at 60 positions for the frequency band \(\mathbb{B}_{1}\) that has the most impact on the positioning. Using single regressors as a baseline, the three ensemble learning strategies of boosting, bagging, and stacking are compared, and three criteria are used as evaluation methods. This section focuses on the setup of the experimental scenario and the comparison and evaluation of strategies and models.

Figure 1: DDDAS framework reconfigures the parameters \(\mathbb{B}\), \(\Delta\), and \(R_{s}\) of the PRF sensor in PIPS.

### Experimental Setup

Data collection is done in an indoor home scenario, as shown in Fig. 3. In order to avoid the impact of sampling distance on performance, and to allow a fair comparison with other state-of-the-art technologies, one meter is selected as the sampling distance along the length, width, and height directions. Based on past experience, some sources that may affect the PRF spectrum are marked in Fig. 3, such as a host computer, operator, TV, Wi-Fi router, and printer. The experimental scenario, with a length of 6.15 m, a width of 4.30 m, and a height of 2.42 m, is used as a preliminary verification of PRF positioning. We collected 100 samples at each position, and the resulting 6000 samples were divided into training and test data sets in a ratio of 0.7 to 0.3. The model was built with scikit-learn and TensorFlow and trained on an Nvidia GeForce RTX 3080 GPU.

Figure 2: RTL-SDR RTL2832U is used as the PRF sensor to collect the PRF spectrum.

### Results and Evaluation

To better demonstrate the effectiveness of the collected PRF spectrum data for positioning, principal component analysis (PCA) is used to reduce the dimensionality of the PRF spectrum data for visualization. The raw PRF spectrum is 5D, since the most impacted frequency band used in data collection consists of five frequencies, and PCA reduces it to 3D for visualization only; the raw data set is used to train the ensemble learning models for the positioning task. Fig. 4 shows the PRF spectrum data after dimensionality reduction by PCA. Data collected at different positions and heights form clusters in the PCA space, which shows that the data differ between positions and is the fundamental reason positioning is possible. The proposed model is compared with the baseline in terms of performance and complexity. As baselines for the ensemble learning models, several single regressors are used, including Support Vector Regression (SVR), K Nearest Neighbors Regression (KNR), Gaussian Process Regression (GPR), Decision Trees Regression (DTR), and Multi-layer Perceptron (MLP). The performance is compared using three evaluations: root mean square error (RMSE), coefficient of determination \(R^{2}\), and 95% CE. RMSE is targeted at applications that require a low average error but are less stringent about worst-case positioning, such as warehouse patrol robots.

Figure 3: Illustration of the indoor living room scenario used to collect PRF data at 60 positions. The red and blue antennas represent 0 and 1 meters from the bottom of the antenna to the ground, respectively. Other potentially disturbing objects and a human are also marked.

The RMSE of the test data set can be expressed as \[\text{RMSE}=\sqrt{\frac{\left\|C^{*}-\hat{C}^{*}\right\|^{2}}{n^{*}}}, \tag{3}\] where \(C^{*}\in\mathbb{R}^{n^{*}\times 3}\) is the label set of the test data and \(\hat{C}^{*}\) is the estimate obtained by the ensemble model.
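To make this evaluation step concrete, the following is a minimal sketch, not the authors' code, of how the baseline fit and the test-set RMSE of Eq. (3) could be computed with scikit-learn and NumPy; the array names (`D_train`, `C_train`, `D_test`, `C_test`) and the randomly generated placeholder data are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Placeholder data standing in for the collected set: PRF spectra (n x 5 average
# powers over the five selected frequencies) and their 3D coordinate labels (n x 3).
rng = np.random.default_rng(0)
D_train, C_train = rng.normal(size=(4200, 5)), rng.uniform(size=(4200, 3))
D_test, C_test = rng.normal(size=(1800, 5)), rng.uniform(size=(1800, 3))

# KNR baseline: fit on the training spectra, then estimate the test coordinates.
knr = KNeighborsRegressor()
C_hat = knr.fit(D_train, C_train).predict(D_test)

# Eq. (3): square root of the summed squared residuals divided by the n* test samples.
rmse = np.sqrt(np.sum((C_test - C_hat) ** 2) / len(C_test))
print(f"RMSE = {rmse:.3f} m")
```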
The 95% CE is the corresponding error at which the cumulative distribution function of the positioning error reaches 95%, which can be expressed as \[95\%\text{CE}=F_{\text{RMSE}}^{-1}(0.95), \tag{4}\] where \(F_{\text{RMSE}}\) is the cumulative distribution function of the error. The 95% CE is aimed at systems that are more demanding in terms of accuracy, such as firefighting robots, which require a higher confidence level to limit the robot's error to a strict value. The time complexity is considered equivalent to the model fitting time. The coefficient of determination and time complexity are not our main concerns; since the proposed PIPS is an application system, more attention is paid to the customer-oriented performance of the application. Each model was trained with its default parameters for an initial comparison. The performance and complexity of the single regressors are shown in Table 2. It can be seen from Table 2 that KNR has the best performance, and it is therefore used as the baseline against which the ensemble learning strategies of PIPS are compared. Models under the three ensemble learning strategies are then trained on our positioning data set. For the serial boosting strategy, there are three main extensions: Adaptive Boosting Regression (ABR), Gradient Boosting Regression (GBR), and Histogram-based GBR (HGBR). ABR makes the posterior estimator focus more on samples that cannot be solved by the prior estimator through an adaptive weighting method. Both GBR and HGBR are ensembles of regression trees that use a loss function to reduce the error of the previous estimator.

Figure 4: Illustration of the PRF spectrum with reduced dimensionality by PCA. The red and blue dots indicate that spectrum data was collected at 0 and 1 m from the ground, respectively.

According to the results in Table 2, we selected four models with different accuracies, namely SVR, KNR, GPR, and DTR, for further analysis. Table 3 shows the model performance under the boosting strategy. It can be seen that the ensemble learning models under the boosting strategy have no advantage in RMSE compared to a single regressor, but they greatly reduce the 95% CE, especially for ABR with KNR and DTR as base estimators. This means that most of the samples have errors smaller than 0.095 m, while a few samples with large errors increase the RMSE; boosting strategies are effective in reducing the mode of the error distribution. For the bagging strategy, the base estimator is also a crucial parameter. In addition to the general bagging model, Random Forest Regression (RFR) and Extremely Randomized Trees (ERT), as bagging variants and extensions of DTR, are also included in the comparison. Table 4 shows the performance of the models under the bagging strategy. Comparing Table 2 and Table 4, it can be found that, whether for KNR with the best accuracy or SVR with poor accuracy, the bagging strategy cannot significantly improve accuracy further. The final prediction of the bagging strategy depends on every base estimator, so it is also affected by base estimators with poor accuracy.
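A similarly hedged sketch of how the boosting and bagging ensembles compared here could be assembled with scikit-learn, reusing the placeholder arrays from the previous sketch; the 95% CE is computed by reading Eq. (4) as the 95th percentile of the per-sample position errors, and the specific wrappers chosen (e.g. `MultiOutputRegressor` around the single-output AdaBoost) are implementation assumptions rather than the authors' exact setup.

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor, ExtraTreesRegressor, RandomForestRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.neighbors import KNeighborsRegressor

def ce95(C_true, C_pred):
    """95% confidence error (Eq. 4): 95th percentile of per-sample position errors."""
    return np.percentile(np.linalg.norm(C_true - C_pred, axis=1), 95)

models = {
    # Boosting: AdaBoost is single-output, so one booster is fitted per coordinate axis.
    "ABR + KNR": MultiOutputRegressor(AdaBoostRegressor(KNeighborsRegressor())),
    # Bagging variants and extensions of DTR; both handle the 3D target natively.
    "RFR": RandomForestRegressor(),
    "ERT": ExtraTreesRegressor(),
}

for name, model in models.items():
    C_pred = model.fit(D_train, C_train).predict(D_test)
    rmse = np.sqrt(np.sum((C_test - C_pred) ** 2) / len(C_test))
    print(f"{name}: RMSE = {rmse:.3f} m, 95% CE = {ce95(C_test, C_pred):.3f} m")
```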
The stacking strategy aggregates the base estimators through a final estimator, which gives different weights to the base estimators. We use the ten previously mentioned regressors, including the ensemble learning models, as base estimators, and then test the performance of these regressors as the final estimator. The regression results under the stacking strategy are shown in Table 5. The stacking strategy allows any model to be used as a base estimator, so stacking can also serve as a strategy that integrates ensemble learning models. The results show that the stacking strategy has a performance advantage over the bagging strategy, because the final estimator can adaptively aggregate all the base estimators. However, the stacking strategy is not clearly dominant compared to the boosting strategy: although stacking is stronger than boosting in RMSE and \(R^{2}\), its time complexity is dozens of times higher. After the experiments, we found that the stacking strategy gave the best results. Compared to the baseline, the proposed ensemble learning strategy considerably improves the 95% CE. In particular, the stacking strategy with DTR as the final estimator reduces the 95% CE by 92.3%. Although 95% of the samples have relatively low errors, the average RMSE is still high, which means that a minority of samples, 5% or fewer, have considerable errors. These samples may have suffered interference, such as the movement of the human body or the shielding effect of metal or liquid on the PRF spectrum.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Regression & RMSE (m) & \(R^{2}\) & 95\% CE (m) & Time (s) \\ \hline SVR & 1.229 & 0.777 & 2.214 & 1.026 \\ KNR & 0.268 & 0.986 & 0.412 & 0.002 \\ GPR & 0.612 & 0.967 & 1.248 & 1.508 \\ DTR & 0.603 & 0.930 & 1.111 & 0.016 \\ MLP & 1.506 & 0.562 & 2.534 & 2.104 \\ \hline \end{tabular} \end{table} Table 2: Single regressors to implement positioning tasks and serve as baselines for PIPS.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Ensemble Strategy & Base Estimator & RMSE (m) & \(R^{2}\) & 95\% CE (m) & Time (s) \\ \hline ABR & SVR & 0.828 & 0.881 & 1.419 & 88.368 \\ & KNR & 0.324 & 0.985 & 0.095 & 2.859 \\ & GPR & 0.825 & 0.859 & 1.442 & 278.900 \\ & DTR & 0.324 & 0.983 & 0.095 & 2.193 \\ GBR & DTR & 0.807 & 0.879 & 1.575 & 1.698 \\ HGBR & DTR & 0.457 & 0.960 & 1.027 & 1.161 \\ \hline \end{tabular} \end{table} Table 3: Performance of ensemble learning models under the boosting strategy.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Ensemble Strategy & Base Estimator & RMSE (m) & \(R^{2}\) & 95\% CE (m) & Time (s) \\ \hline Bagging & SVR & 1.124 & 0.775 & 2.116 & 5.028 \\ & KNR & 0.265 & 0.989 & 0.423 & 0.372 \\ & GPR & 0.623 & 0.928 & 1.323 & 41.264 \\ RFR & DTR & 0.418 & 0.966 & 0.964 & 0.934 \\ ERT & DTR & 0.299 & 0.966 & 0.710 & 0.304 \\ \hline \end{tabular} \end{table} Table 4: Performance of ensemble learning models under the bagging strategy.
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Ensemble Strategy & Final Estimator & RMSE (m) & \(R^{2}\) & 95\% CE (m) & Time (s) \\ \hline Stacking & SVR & 0.271 & 0.988 & 0.463 & 97.281 \\ & KNR & 0.259 & 0.990 & 0.446 & 92.678 \\ & GPR & 2.115 & 0.273 & 3.924 & 97.241 \\ & DTR & 0.327 & 0.984 & 0.086 & 93.218 \\ & MLP & 0.263 & 0.990 & 0.459 & 95.106 \\ & ABR & 0.334 & 0.984 & 0.258 & 97.657 \\ & **GBR** & **0.258** & **0.990** & **0.317** & **94.338** \\ & HGBR & 0.254 & 0.990 & 0.371 & 95.478 \\ & RFR & 0.255 & 0.990 & 0.431 & 93.835 \\ & ETR & 0.259 & 0.990 & 0.334 & 93.808 \\ \hline \end{tabular} \end{table} Table 5: Performance of ensemble learning models under the stacking strategy.

Therefore, GBR as the final estimator is considered the global optimal solution, as it outperforms the baseline in all aspects.

## 4 Discussion

DDDAS is crucial for PIPS: the main purpose of the DDDAS framework is to find the optimal configuration for the positioning task in the target scenario. We implement pre-sampling in the target scenario and then use SHAP to analyze the collected samples and find the optimal frequency band, step size, and sampling rate. PIPS under the DDDAS framework has three advantages. First, and most importantly, the sampling time is reduced by 98%: we reduced the 400 frequencies to 5 frequencies under the DDDAS framework. If the DDDAS framework were not used to find the optimal frequencies, data collection over the full frequency band would waste a huge amount of time. Second, the redeployment time of the sensor is also greatly reduced. The proposed PIPS system has excellent redeployment capabilities in new scenarios, thanks to the DDDAS-driven optimization of the frequency band \(\mathbb{B}\), step size \(\Delta\), and sampling rate \(R_{s}\). To achieve the accuracy and sampling resolution described above, the time required for redeployment is around 300 \(s/m^{3}\); the training time is negligible compared to the PRF data sampling time. Third, it can potentially improve accuracy and reliability. The PIPS system uses the RSS of the five most sensitive frequencies, and this passive RF technology can capture signatures from the scenario, such as metal parts in house structures or liquids, so the PRF signal collected in each scenario is essentially unique. On the one hand, there are inevitably interferences in the full frequency band, including natural noise and artificial signals; these noise signals are random and abrupt, which is not conducive to the stability of a positioning system. On the other hand, we do not want to include unnecessary features in the samples. In this task, we used traditional machine learning rather than deep learning. Traditional machine learning cannot adaptively assign feature weights, so unnecessary and cluttered features clearly affect the accuracy of the predictions. Therefore, collecting data in the frequency bands most sensitive for positioning can effectively avoid these possible interferences and reduce feature complexity, improving accuracy and reliability.

## 5 Conclusion

This paper proposes PIPS under the DDDAS framework to solve the 3D positioning problem. Three ensemble learning strategies and their variants and extensions are trained on the collected data set. The experimental results show that the proposed ensemble learning strategy achieves an RMSE of 0.258 meters, an \(R^{2}\) of 0.990, and a 95% CE of 0.317 meters, which is much better than the baselines.
PIPS under the DDDAS framework is considered a potential application in specific scenarios, such as robot-patrolled factories or warehouses, due to its efficient redeployment and high accuracy. For future work, dimensionality reduction is a potential research direction. The current work is limited to frequency selection: we selected the most sensitive frequency band from the 400 frequencies in the full band under the DDDAS framework. Dimensionality reduction, while similar in terms of results, has a different effect: although it can reduce the complexity of the data, it cannot reduce the sampling time. The benefits of PCA lie in privacy considerations and visualization applications. In internet of things (IoT) applications, performing PCA locally can reduce the dimension of the data so that customer privacy is protected after uploading to the cloud, and PCA is able to reduce multi-dimensional data to three or two dimensions to enable visualization applications.

#### Acknowledgements

Thanks to Dr. Erik Blasch for concept development and co-authoring the paper. This research is partially supported by the AFOSR grant FA9550-21-1-0224. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government.
2306.07074
Using a neural network approach to accelerate disequilibrium chemistry calculations in exoplanet atmospheres
In this era of exoplanet characterisation with JWST, the need for a fast implementation of classical forward models to understand the chemical and physical processes in exoplanet atmospheres is more important than ever. Notably, the time-dependent ordinary differential equations to be solved by chemical kinetics codes are very time-consuming to compute. In this study, we focus on the implementation of neural networks to replace mathematical frameworks in one-dimensional chemical kinetics codes. Using the gravity profile, temperature-pressure profiles, initial mixing ratios, and stellar flux of a sample of hot-Jupiters atmospheres as free parameters, the neural network is built to predict the mixing ratio outputs in steady state. The architecture of the network is composed of individual autoencoders for each input variable to reduce the input dimensionality, which is then used as the input training data for an LSTM-like neural network. Results show that the autoencoders for the mixing ratios, stellar spectra, and pressure profiles are exceedingly successful in encoding and decoding the data. Our results show that in 90% of the cases, the fully trained model is able to predict the evolved mixing ratios of the species in the hot-Jupiter atmosphere simulations. The fully trained model is ~1000 times faster than the simulations done with the forward, chemical kinetics model while making accurate predictions.
Julius L. A. M. Hendrix, Amy J. Louca, Yamila Miguel
2023-06-12T12:39:21Z
http://arxiv.org/abs/2306.07074v1
Using a neural network approach to accelerate disequilibrium chemistry calculations in exoplanet atmospheres ###### Abstract In this era of exoplanet characterisation with JWST, the need for a fast implementation of classical forward models to understand the chemical and physical processes in exoplanet atmospheres is more important than ever. Notably, the time-dependent ordinary differential equations to be solved by chemical kinetics codes are very time-consuming to compute. In this study, we focus on the implementation of neural networks to replace mathematical frameworks in one-dimensional chemical kinetics codes. Using the gravity profile, temperature-pressure profiles, initial mixing ratios and stellar flux of a sample of hot-Jupiter atmospheres as free parameters, the neural network is built to predict the mixing ratio outputs in steady state. The architecture of the network is composed of individual autoencoders for each input variable to reduce the input dimensionality, which is then used as the input training data for an LSTM-like neural network. Results show that the autoencoders for the mixing ratios, stellar spectra, and pressure profiles are exceedingly successful in encoding and decoding the data. Our results show that in 90% of the cases, the fully trained model is able to predict the evolved mixing ratios of the species in the hot-Jupiter atmosphere simulations. The fully trained model is \(\sim 10^{3}\) times faster than the simulations done with the forward, chemical kinetics model while making accurate predictions. keywords: planets and satellites: gaseous planets - planets and satellites: atmospheres - exoplanets ## 1 Introduction There are two methods commonly used for calculating the abundance of different species in an atmosphere: thermochemical equilibrium and chemical kinetics (Bahn & Zukoski, 1960; Zeleznik & Gordon, 1968). Thermochemical equilibrium calculations treat each species independently and do not require an extensive list of reactions between different species. Consequently, this method is fast for estimating the abundance of different species in an exoplanet atmosphere and has been widely used in the community (e.g., Stock et al., 2018; Woitke et al., 2018). However, the atmospheres of exoplanets are dynamic environments. Both physical and chemical processes can alter the compositions and thermal structures of the atmosphere. In particular, atmospheric processes like photochemistry, mixing and condensation of different species can affect atmospheric abundances, deviating the concentrations observed from what would be found by chemical equilibrium calculations (Cooper & Showman, 2006; Swain et al., 2008; Moses et al., 2011; Kawashima & Min, 2021; Roudier et al., 2021; Baxter et al., 2021). For example, the recent detection of SO\({}_{2}\)(Feinstein et al., 2022; Ahrer et al., 2022; Alderson et al., 2022; Rusatmarkov et al., 2022) and the determination of this species as direct evidence of photo-chemical processes shaping the atmosphere of WASP 39b (Tsai et al., 2022), suggest that certain exoplanet atmospheres are in disequilibrium and we need chemical disequilibrium models using chemical kinetics to correctly interpret the observations. Chemical kinetics codes consider the effects that lead to a non-equilibrium state in the atmosphere. 
These codes incorporate a wide range of atmospheric processes such as the radiation from the host start that can drive the dissociation of molecules -or photochemistry-, the mixing of species at different pressures due to the planet's winds, or the diffusion of species, and calculate the one-dimensional abundances of species in exoplanetary atmospheres (e.g. Moses et al., 2011; Venot et al., 2012; Miguel & Kaltenegger, 2014; Tsai et al., 2017; Hobbs et al., 2019). However, To calculate the abundance of different species using chemical kinetics, a system of coupled differential equations involving all the species must be solved, and prior knowledge of reaction rates and a reaction list is necessary to estimate the production and loss of each species. Therefore, as more species and reactions are incorporated into the chemical networks, the complexity of these simulations increases, and so does the computational cost these simulations require. The result is that chemical kinetics codes have long computational times, and can not be used by more detailed calculations (e.g. circulation models) or as a fast way of interpreting observations (by retrieval codes), which are usually subject to simplifications. For the past few decades, the use of machine learning techniques, specifically neural networks (NN), has become more prevalent in research fields outside of computer science. Within astronomy, neural networks have been used for applications like image processing (Dattilo et al., 2019), adaptive optics (Landman et al., 2021), exoplanet detection (Shallue and Vanderburg, 2018), exoplanetary atmospheric retrieval (Cobb et al., 2019) and chemical modelling (Holdship et al., 2021), and more traditional machine learning techniques have been used for applications like exoplanetary atmospheric retrieval (Nixon and Madhusudhan, 2020) and chemistry modelling of protoplanetary disks (Smirnov-Pinchukov et al., 2022). Trained neural networks are fast to use, so a neural network trained to accurately reproduce the outcomes of chemical kinetics codes could greatly reduce computational time. Such a neural network could simulate a large amount of atmospheric conditions in a short period of time, which is for example useful for atmospheric retrievals from observational constraints. It could also be incorporated into a multi-dimensional atmospheric simulation that connects a multitude of individual one-dimensional simulations by the implementation of atmospheric mixing and other global processes. In this study, we investigate the feasibility of machine learning techniques for speeding up a one-dimensional chemical kinetics code. To this end, we perform calculations on a fiducial giant planet as an example to show how this technique can be used to bring the best of these two worlds: the detailed information of chemical kinetics calculations and the speed of Neural Networks techniques. In the next section, we explain in more detail how we obtain the dataset and the specifics of the architectures used. The results of our networks are presented in the following section (section 3), which are discussed afterwards in section 4. Finally, we summarise and conclude our findings in section 5. 
## 2 Methods ### Chemical Kinetics Chemical kinetics is the most realistic way of calculating abundances and is necessary, particularly at low temperatures (T < 2000 K) and pressures (P < 10 - 100 bars), where timescales of processes such as mixing in the atmosphere are shorter than chemical equilibrium and dominate the chemistry and abundances in the atmosphere. We make use of the one-dimensional chemical kinetics code VULCAN (Tsai et al., 2017, 2021), to create a large dataset on the atmospheres of gaseous exoplanets. The code is validated for hot-Jupiter atmospheres from 500 K to 2500 K. VULCAN calculates a set of mass differential equations: \[\frac{\partial n_{i}}{\partial t}=\mathcal{P}_{i}-\mathcal{L}_{i}-\frac{ \partial\Phi_{i}}{\partial z}, \tag{1}\] where \(n_{i}\) is the number density of the species \(i\), \(t\) is the time, \(\mathcal{P}_{i}\) and \(\mathcal{L}_{i}\) are the production and loss rates of the \(i\)-th species, and \(\Phi_{i}\) its transport flux that includes the effects of dynamics caused by convection and turbulence in the atmosphere. For a more complete derivation of this equation from the general diffusion equation, we refer the reader to Hu et al. (2012). VULCAN starts from initial atmospheric abundances calculated using the chemical equilibrium chemistry code _FastChem_(Stock et al., 2018), although we note that the final disequilibrium abundances are not affected by the choice of initial values adopted Tsai et al. (2017), and further evolves these abundances by solving a set of Eulerian continuity equations that includes various physical processes (e.g. vertical mixing and photochemistry). To solve these partial differential equations, VULCAN numerically transforms them into a set of stiff ordinary differential equations (ODEs). These ODEs are solved using the _Rosenbrock_ method, which is described in detail in the appendix of Tsai et al. (2017). In this study, we make use of machine learning techniques to solve these equations and hence speed up the process. ### Building the dataset #### Parameter Space To construct the dataset, we vary the following parameters: 1. **Planet mass, M** [M\({}_{\rm J}\)]: within the range [0.5, 20] M\({}_{\rm J}\). 2. **Orbit radius, r** [AU]: within the range [0.01, 0.5] AU. 3. **Stellar radius, R\({}_{\star}\)** [R\({}_{\odot}\)]: within the range [1, 1.5] R\({}_{\odot}\). Other parameters such as surface gravity, irradiation temperature, and stellar effective temperature are derived from these free parameters. 1. **Planet radius** [R\({}_{\rm Jup}\)]: This is derived from the planet mass using the relation from Chen and Kipping (2017), shown in Equation 2, where \(R\) is the planet radius and \(M\) is the planet mass: \[\frac{R}{R_{\oplus}}=17.78\left(\frac{M}{M_{\oplus}}\right)^{-0.044}.\] (2) We note that our aim is to present the results for a simple general case, and the mass-radius relation we use is suitable for this purpose. However, we must emphasize that the relation between mass and radius for giant exoplanets is not unique and depends on various factors, such as the mass of metals, core mass, irradiation received by the planet, and their effect on the inflation of radius. All of these factors can impact the evolution path of giant planets and their final radius, leading to a dispersion in the mass and radius relation. 1. 
**Temperature-pressure profile**: As our aim is to demonstrate the use of neural networks for calculating non-equilibrium chemical abundances in a general case, we have utilized an analytical, non-inverted temperature-pressure profile from Heng et al. (2014). While these analytical profiles are simplistic, they are widely used in the literature to explore general cases and are suitable for our purposes. However, for calculating the chemistry of a real planet, more detailed calculations that take into account the opacities of different species and their abundances in the atmosphere should be included. The assumptions for this calculation are \(T_{int}=120\) K, \(\kappa_{L}=0.1\), \(\kappa_{S}=0.02\), \(\beta_{S}=1\) and \(\beta_{L}=1\), based on the default values included in the VULCAN code Tsai et al. (2017). The pressure profile is constructed within the range [\(10^{-2}\), \(10^{9}\)] dyne cm\({}^{-2}\). This calculation is an important step, as it determines whether the set of parameters is valid for the dataset. If any part of the temperature profile falls outside of the range [500, 2500] K, the temperature range for which VULCAN is validated, the example is rejected from the dataset. 2. **Stellar flux**: the stellar spectra used for the dataset have two sources: the Measurements of the Ultraviolet Spectra Characteristics of Low-mass Exoplanetary Systems (MUSCLES) collaboration (France et al., 2016; Youngblood et al., 2016; Loyd et al., 2016) and the PHOENIX Stellar and Planetary Atmosphere Code (Baron et al., 2010). The MUSCLES database contains observations from M- and K-dwarf exoplanet host stars in the optical, UV, and X-ray regime, and is used for stars with an effective temperature lower than 6000 K. For effective temperatures of 6000 K and above, stellar spectra are generated by the PHOENIX model. Flux values below \(10^{-14}\) erg nm\({}^{-1}\) cm\({}^{-2}\) s\({}^{-1}\) are cut-off. The remaining parameters of the VULCAN configuration files are kept constant throughout the dataset. Eddy- and molecular diffusion are both taken into account as fixed parameters. For the eddy diffusion constant, we make use of a constant of \(K_{zz}=10^{10}\) cm\({}^{2}\)/s. The molecular diffusion constant is taken for a hydrogen-dominated gas as described in Banks and Kockarts (1973). As standard chemical network, we make use of VULCAN's reduced default N-C-H-O network that includes photochemistry. We assume experimental abundances for the hot-Jupiters, and we make use of 150 pressure levels for the height layers. The output of VULCAN is saved every 10 steps. In total, 13291 valid configurations are generated within the parameter space. #### 2.2.2 Formatting In order to limit the computation times when training the network, the input-output pairs do not contain all of the information supplied by VULCAN. For the inputs, a selection of six properties is made. These properties are extracted from VULCAN before time integration starts, so they can be interpreted as the initial conditions of the simulation. The six properties are: 1. **Initial mixing ratios**: the initial mixing ratios of the species in the simulation. These mixing ratios are calculated by VULCAN using FastChem (Stock et al., 2018). The shape of the array containing the mixing ratios is (69, 150), as the mixing ratios are defined for 69 species for 150 height layers each. 2. **Temperature profile**: the temperature profile as calculated by the analytical expression from Heng et al. (2014). 
The temperature is defined for every height layer, so it has a shape of (150). 3. **Pressure profile**: the pressure profile that is calculated as part of the temperature profile calculation. It is of the same shape, (150). 4. **Gravitational profile**: the gravitational acceleration per height layer. It has the shape (150,). 5. **Stellar flux component**: one of the two components that make up the stellar spectrum contains the flux values. This is generated from either the MUSCLES database and/or the PHOENIX model and interpolated to a shape of (2500,). 6. **Stellar wavelength component**: the second component of the stellar spectrum contains the wavelengths corresponding to the flux values. It has the same shape, (2500,). For the outputs, we make use of the time-dependent mixing ratios. Because not every simulation takes the same amount of time to converge to a solution, the number of saved abundances differs per VULCAN simulations. To include the information contained in the evolution of the abundances through time, 10 sets of abundances, including the steady-state abundances, are saved in each output. This set of abundances is evenly spaced through time, so the simulation time between abundances will vary for different VULCAN simulation runs. Before the abundances are saved, they are converted to mixing ratios. The shape of the outputs is (10, 69, 150). #### 2.2.3 Data Standardisation The inputs and outputs of the various components differ by several orders in magnitude. To ensure that the neural network trained on the data set is not biased towards higher-valued parameters, the data has to be standardised. First, the distributions of the properties are standardised according to Equation 3: \[p_{s}=\frac{\log_{10}(p)-\mu}{\sigma}, \tag{3}\] with \[\mu=\frac{1}{n}\sum_{i=0}^{n}\log_{10}(p_{i}), \tag{4}\] and \[\sigma=\sqrt{\frac{1}{n}\sum_{i=0}^{n}\left(\log_{10}(p_{i})-\mu\right)^{2}}, \tag{5}\] where \(p\) is the property to be scaled, \(n\) is the size of the dataset and \(p_{s}\) is the standardised property. After standardisation, the properties are normalised in the range [0, 1]: \[p_{s,n}=\frac{p_{s}-\min(p_{s})}{\max(p_{s})-\min(p_{s})}, \tag{6}\] where \(p_{s,n}\) is the final normalised property. Once the input properties are normalised, the output mixing ratios are normalised with the same scaling parameters as were used for the input mixing ratios. When the trained neural network is presented with an input for which to predict the mixing ratios, it only has information about the scaling parameters of the inputs. To be able to unnormalise the outputs, they need to be scaled with the same scaling parameters. ### Model Architecture #### 2.3.1 Autoencoder Structure The input of each configuration within the dataset consists of roughly 15800 values. To speed up the training process and complexity of the neural network we make use of an _autoencoder_ (AE) for reducing the dimensionality of the examples in the dataset. In previous studies this approach has been shown an effective way to reduce dimensionality within chemical kinetics (e.g. Grassi et al., 2022). An autoencoder consists of two collaborating neural networks: an _encoder_ and a _decoder_. The task of the encoder is to reduce the dimensionality of the input data by extracting characterising features from the example and encoding them in a lower dimensionality representation called the _latent representation_. 
The task of the decoder is to take the latent representation and use it to reconstruct the original input data with as little loss of information as possible. The encoder and decoder are trained simultaneously, and no restraints are placed on the way the autoencoder uses its _latent space_, apart from the size of the latent representations. As is discussed in Section 2.2.2, the inputs of the model consist of six properties. Because these properties do not share the same shape, we cannot encode and decode them using a single autoencoder. Instead, we construct six unique autoencoders, one for each property of the model inputs. Figure 1 shows an overview of the process of encoding the initial conditions. The decoding process is not shown but is symmetrical to the encoding process. Each encoder conceals a specific property into a corresponding latent representation. To get the latent presentation of the entire input example, \(l_{i}\), the property latent representations \(\{l_{MR},l_{F},l_{W},l_{T},l_{P},l_{G}\}\) are concatenated. When decoding the latent representation of the input, the latent vector \(l_{i}\) is split back into the different property latent representations, and given to that property's decoder. Every encoder-decoder pair is trained separately. The hyperparameters of each autoencoder are optimized by trial and error. A summary of each set of hyperparameters is shown in table 1. ### Mixing Ratios Autoencoder The mixing ratios are the largest contributor to the size of the model inputs. Compressing each of the species' mixing ratios efficiently reduces the size of the input latent representation, \(l_{i}\), by a substantial amount. Because of the limited size of the training dataset, a compromise has to be made to successfully train this autoencoder. Rather than concurrently encoding all 69 species for each example, each species is encoded individually. As a result, the training dataset expands by a factor of 69, while disregarding any potential correlations in species abundances during the encoding procedure. Figure 2 shows the application of such an autoencoder: for a given input, each of the 69 species' mixing ratios is encoded into corresponding latent vectors \(\{l_{1},l_{2},l_{3},...,l_{69}\}\). The concatenation of these 69 latent vectors then makes up the latent representation of the mixing ratios \(l_{MR}\). All encoders and decoders are multilayer perceptron (MLP) neural networks. For the mixing ratio autoencoder (MRAE), the encoder and decoder both consist of 7 fully connected layers, followed by hyperbolic tangent activation functions. The encoder input layer has a size of 150 and the output layer has a size of 30. The hidden layers have a size of 256. Adversely, the decoder has an input layer size of 30 and an output layer size of 150. The compression factor of the MRAE is therefore \(150/30=5\). To train the MRAE, the dataset is split into a train dataset (70%), a validation dataset (20%), and a test dataset (10%). To increase the size of the dataset, the MRAE is trained on a shuffled set1 of the mixing ratios of both the inputs and the output of the chemical kinetics simulations. The performance of the autoencoder is measured using the loss function in equation 7: Footnote 1: Using a random sampler function within the PyTorch package. \[\mathcal{L}=\frac{1}{N}\sum_{i=1}^{N}\left(\frac{p_{i}-a_{i}}{a_{i}}\right)^{ 2}. \tag{7}\] Figure 1: The different properties and their corresponding encoders that together encode the input data. 
The decoding process is symmetrical to this encoding process, where every property has a corresponding property decoder. Figure 2: A more detailed sketch of the architecture of the mixing ratio autoencoder. MR \(i\) denotes the mixing ratio of a certain species, \(i\), for all height layers, and \(l_{i}\) denotes the encoded mixing ratios for species \(i\). where \(\mathcal{L}\) is the loss, \(N\) is the number of elements in the actual/predicted vector, and \(p_{i}\) and \(a_{i}\) are \(i\)-th elements of the predicted and actual vectors, respectively. The MRAE is optimised using the Adam optimiser (Kingma and Ba, 2014), with a learning rate of \(10^{-5}\). A batch size of 32 is used, and the model is trained for 200 epochs. These hyperparameters can also be found in table 1. ### Atmospheric profile Autoencoders The temperature, pressure, and gravity profiles all have the same shape of (150,), so their autoencoders can use the same architecture. Moreover, the atmospheric profiles share their shape with the mixing ratios of individual species (i.e. the height layers in the atmosphere). Therefore, a very similar neural network structure to that of the MRAE is used for the atmospheric profile autoencoders. The encoder input layer shape, the decoder output layer shape, and the hidden layer shapes are taken directly from the MRAE for all atmospheric profile autoencoders. An important parameter to tune for each atmospheric profile autoencoder separately is the size of the latent representations. The pressure profile is set logarithmically for all examples in the dataset. By taking the logarithm of the pressures the spacing becomes linear, \(\log(P_{i})-\log(P_{i+1})\). Theoretically, we then only need two values to fully describe the pressure profile: the pressure at the first and last height layers. To encode these values, no autoencoder is needed. One could take this one step further, and provide input parameters like mass and radius, from which the pressure and gravity profile are dependent, directly to the core model as inputs. While this more specialised approach is suitable for these input parameters, it is not generalisable to other input parameters. To keep the model architecture more general and adaptable to different input parameters, an autoencoder is used nonetheless. The size of the pressure profile autoencoder (PAE) latent representations is set to 2. This corresponds to a compression factor of \(150/2=75\). The temperature and gravity profiles are not linear. For both the temperature autoencoder (TAE) and gravity autoencoder (GAE), a latent representation size of 30 is used. This corresponds to a compression factor of \(150/30=5\), the same as for the MRAE. All profile autoencoders are evaluated using the loss function previously defined in equation 7 and are optimised using the Adam optimiser. The TAE and GAE use a learning rate of \(10^{-5}\), and the PAE uses a learning rate of \(10^{-6}\). All profile autoencoders are trained with a batch size of 4, for 100 epochs (see also table 1). ### Stellar Spectrum Autoencoders After the mixing ratios, the stellar spectrum components contribute predominantly to the input data size. The stellar spectrum is comprised of a flux and a wavelength component. These components share the same shape, so one NN structure can be used for both autoencoders. The structure of the encoder and decoder is similar to that of the MRAE: a 7-layer, fully connected MLP, with hyperbolic tangent activation functions after each layer. 
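As an illustration of the encoder/decoder pattern shared by these property autoencoders, the following is a minimal PyTorch sketch using the MRAE sizes quoted above (input 150, hidden 256, latent 30, seven layers, tanh activations) and the relative squared-error loss of Eq. (7); the exact arrangement of the hidden layers and the small epsilon guard are assumptions, and this is a simplified stand-in rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def mlp(sizes):
    """Fully connected stack with a tanh activation after every linear layer."""
    layers = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(n_in, n_out), nn.Tanh()]
    return nn.Sequential(*layers)

class PropertyAutoencoder(nn.Module):
    """Encoder/decoder pair for one input property (MRAE sizes: 150 -> 30 -> 150)."""
    def __init__(self, n_in=150, n_hidden=256, n_latent=30, n_layers=7):
        super().__init__()
        hidden = [n_hidden] * (n_layers - 1)
        self.encoder = mlp([n_in, *hidden, n_latent])
        self.decoder = mlp([n_latent, *hidden, n_in])

    def forward(self, x):
        return self.decoder(self.encoder(x))

def relative_sq_loss(pred, actual, eps=1e-12):
    """Eq. (7): mean squared relative reconstruction error (eps avoids division by zero)."""
    return torch.mean(((pred - actual) / (actual + eps)) ** 2)

# One training step with the hyperparameters quoted for the MRAE (Adam, lr 1e-5).
model = PropertyAutoencoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-5)
batch = torch.rand(32, 150)              # a batch of normalised per-species profiles
loss = relative_sq_loss(model(batch), batch)
optimiser.zero_grad()
loss.backward()
optimiser.step()
```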
The encoder input layer and decoder output layer have a size of 2500, and the hidden layers have a size of 1024. Similarly to the PAE, the wavelength bins are spaced logarithmically. Again only two values are needed to fully describe the wavelength range. The latent representation size for the wavelength autoencoder (WAE) is, therefore, also 2. The compression factor for this network is \(2500/2=1250\). The flux autoencoder (FAE) has a latent representation size of 256, which gives it a compression factor of \(2500/256\approx 10\). Both autoencoders are evaluated using the loss function from equation 7. They are optimised using the Adam optimiser, the WAE with a learning rate of \(10^{-7}\), and the FAE with a learning rate of \(10^{-5}\). They are both trained for 200 epochs with batches of 4 examples (see also table 1). #### 2.3.2 Core Network As mentioned before, the outputs are also large in dimensionality. Because the outputs contain mixing ratios for all species, for 10-time steps (Section 2.2), they can be encoded using the MRAE. Figure 3 shows how the autoencoder would encode both the inputs and the last time step of the outputs to their latent representations \(l_{i}\) and \(l_{o}\), respectively. Note that even though the autoencoder is shown twice in this figure, the same autoencoder is used to encode both the inputs and the outputs. In the middle of the figure, connecting the two latent spaces, a second neural network called the _core network_ is located. The function of the core network is to learn a mapping between the latent representations of the inputs and the evolved outputs. The design of the core network takes advantage of some of the characteristics of VULCAN. From Section 2.1 we know that VULCAN solves ODEs for specific atmospheric configurations over a simulated period of time. To impart this sense of time in the core neural network, a _Long-Short Term Memory_ (LSTM) is used as the base of the design. The LSTM was chosen for its proven performance in numerous applications, from stellar variability (e.g. Jamal and Bloom, 2020) to Core-collapse supernovae search (e.g. Iess et al., 2023) and solar radio spectrum classification (e.g. Xu et al., 2019), as well as the ease of implementation. The LSTM has known shortcomings like the vanishing gradient problem and long training times when dealing with long sequences. However, with the short sequence length used with our model (i.e. 10 timesteps), these shortcomings are not considered problematic for this proof of concept. The input of the core network is not sequential in nature. With some changes, we can use the LSTM in a 'one-to-many' configuration. In this configuration, the initial output of the LSTM \(h_{0}\) is given to an MLP. This MLP produces a vector with the same shape as the initial input \(x_{0}\), which can be interpreted as the 'evolved' input \(x_{1}\). This evolved input is fed back into the LSTM to produce \(h_{1}\), from which the MLP produces \(x_{2}\), and so forth. This can be repeated for an arbitrary number of steps. Figure 3: An overview of the model architecture. The core neural network maps between the latent representations of the VULCAN inputs \(l_{i}\) and outputs \(l_{o}\). The design of the core network is visualised in Figure 4. We interpret the latent representation of the inputs \(l_{i}\) as the initial value \(x_{0}\). 
The LSTM and MLP configuration produces 9 intermediary 'evolved' latent representations \(\{x_{1},...,x_{9}\}\) before arriving at the final evolved latent representation \(x_{10}\). We interpret this latent representation as the prediction of the latent representation of the evolved output \(I_{o}\). #### Training When the core model predicts a sequence of 10 latent representations, it is essentially traversing the latent space. We can guide the network to learn to traverse the latent space similarly to how VULCAN simulations evolve by using the sequence of outputs saved in the dataset (Section 2.2). We do this in two ways: first, we construct a loss function that not only depends on the accuracy of the prediction of the latent representation of the final output \(I_{o}\), but also on the accuracy of the intermittent latent representation predictions: \[\mathcal{L}=\sum_{t=1}^{10}\left(\frac{1}{N}\sum_{i=1}^{N}\left(p_{t,i}-a_{t, t}\right)^{2}\right), \tag{8}\] where \(\mathcal{L}\) is the loss, \(N\) is the number of elements in the actual/predicted vector, \(p_{t,i}\) is the \(i\)-th element of the latent representation prediction vector at time step \(t\), and \(a_{t,i}\) is the \(i\)-th element of the latent representation vector of the output at time step \(t\). With this notation, \(a_{10}=I_{o}\). By training a network with this loss function, we force the core network to evolve the latent mixing ratios similarly to how VULCAN evolves mixing ratios. It should be noted that the latent representation of the inputs \(l_{i}\) is larger than the latent representation of the outputs \(I_{o}\), as it contains more properties than just mixing ratios. The predicted latent representations \(x_{t}\) are therefore also larger than \(I_{o}\). To account for this, we only look at the elements corresponding to the encoded mixing ratios in \(l_{i}\) when comparing the predicted latent representations \(x_{t}\) and the output mixing ratios \(a_{t}/I_{o}\). To further incentivise the core network to adhere to VULCAN's evolution patterns, we can intercept the predicted latent representations \(x_{t}\) before they get fed back into the LSTM, and replace them with the latent representation of the actual output of the corresponding time step. This way, the core network is always learning from latent representations that follow VULCAN's evolution, even if the network is predicting poorly. This is only done during the training of the network when the true VULCAN outputs are known. During validation and testing, the predicted latent representations \(x_{t}\) are not altered. The core model LSTM has a hidden- and cell size of 4096. The MLP has only two layers: an input layer of size 4096, and an output layer of the same size as the latent representation of the inputs \(l_{i}\), followed by a hyperbolic tangent function. It is optimised with the Adam optimiser, with a learning rate of \(10^{-4}\) and a batch size of 8. It is trained for 100 epochs. 
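The core network and its training loop can be sketched as follows: a simplified PyTorch illustration of the one-to-many LSTM + MLP design, the summed per-step loss of Eq. (8), and the teacher-forcing step in which predicted latents are replaced by the encoded VULCAN outputs during training. The latent size (here only the encoded mixing-ratio block, 69 x 30), the reduced hidden size used in the demo call, and the variable names are assumptions for illustration, not the authors' exact code.

```python
import torch
import torch.nn as nn

class CoreNetwork(nn.Module):
    """One-to-many LSTM cell + MLP head that evolves an input latent over n_steps."""
    def __init__(self, latent_dim, hidden_dim=4096, n_steps=10):
        super().__init__()
        self.cell = nn.LSTMCell(latent_dim, hidden_dim)
        self.head = nn.Sequential(nn.Linear(hidden_dim, latent_dim), nn.Tanh())
        self.n_steps = n_steps

    def forward(self, x0, targets=None):
        """x0: (batch, latent); targets: (batch, n_steps, latent) enables teacher forcing."""
        h = torch.zeros(x0.size(0), self.cell.hidden_size, device=x0.device)
        c = torch.zeros_like(h)
        x, preds = x0, []
        for t in range(self.n_steps):
            h, c = self.cell(x, (h, c))
            x = self.head(h)                      # predicted latent at step t + 1
            preds.append(x)
            if targets is not None:               # replace the prediction with the true latent
                x = targets[:, t]
        return torch.stack(preds, dim=1)          # (batch, n_steps, latent)

def sequence_loss(pred, true):
    """Eq. (8): per-step MSE over the latent elements, summed over the 10 steps."""
    return ((pred - true) ** 2).mean(dim=(0, 2)).sum()

# One training step; the hidden size is reduced here to keep the sketch light
# (the text quotes a hidden/cell size of 4096, Adam, lr 1e-4, batch size 8).
latent_dim = 69 * 30                              # encoded mixing-ratio block only
core = CoreNetwork(latent_dim, hidden_dim=512)
optimiser = torch.optim.Adam(core.parameters(), lr=1e-4)
l_in = torch.rand(8, latent_dim)                  # encoded initial conditions
l_true = torch.rand(8, 10, latent_dim)            # encoded VULCAN outputs per saved step
loss = sequence_loss(core(l_in, targets=l_true), l_true)
optimiser.zero_grad()
loss.backward()
optimiser.step()
```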
#### 2.3.3 Deployment When the trained model is deployed on the validation- and test dataset, the first step is encoding the inputs into their latent representation \(l_{i}\) using the encoder part of the autoencoder (top left \begin{table} \begin{tabular}{l c c c c c} **model** & **hidden size** & **latent size** & **optimiser** & **learning rate** & **batch size** & **epochs** \\ \hline \hline MRAE & 256 & 30 & Adam & \(10^{-5}\) & 32 & 200 \\ PAE & 256 & 2 & Adam & \(10^{-6}\) & 4 & 100 \\ TAE & 256 & 30 & Adam & \(10^{-5}\) & 4 & 100 \\ GAE & 256 & 30 & Adam & \(10^{-5}\) & 4 & 100 \\ FAE & 1024 & 256 & Adam & \(10^{-5}\) & 4 & 200 \\ WAE & 1024 & 2 & Adam & \(10^{-7}\) & 4 & 200 \\ \end{tabular} \end{table} Table 1: Hyperparameters for the property autoencoders. Each autoencoder has an encoder and decoder neural network. These are MLPs, consisting of 7 fully connected layer with hyperbolic tangent activation functions. Figure 4: The design of the core network. It consists of a one-to-many LSTM + MLP configuration that is run for 10 steps. section in figure 3). The core network then predicts the latent representation of the evolved VULCAN output \(I_{o}\) by traversing the latent space in 10 steps (centre section in figure 3). The prediction of the latent representation of the VULCAN output is then decoded by the decoder part of the autoencoder to obtain the predicted mixing ratios (bottom right section in figure 3). ## 3 Results ### Autoencoders #### 3.1.1 Mixing Ratio Autoencoder The top row of figure 5 shows the reconstructed mixing ratio values against their actual value (left plot), for all examples from the test dataset. For the entire range of mixing ratios, the majority of the reconstructions lie within an order of magnitude of the diagonal line that marks perfect reconstructions, with an R-squared value of \(R^{2}=0.9997\). The right plot of the top row of figure 5 shows the reconstruction error of the mixing ratios in logarithmic space. This scale is chosen because the autoencoders are trained in log space (see section 2.2). The solid line shows the median and the dashed lines show the 5th and 95th percentiles. From the right figure, we can see that 90% of the reconstructions have an error between -0.39 and 0.40 orders of magnitude. #### 3.1.2 Flux Autoencoder The middle row of figure 5 (left) shows the reconstructed flux values against their actual value. All reconstructed flux values are within 0.5 order magnitude of the graph diagonal. At fluxes with values around \(10^{6}\) erg m\({}^{-1}\) cm\({}^{-2}\) s\({}^{-1}\), the FAE is slightly underpredicting the actual flux values. From the reconstruction error plot (right) we can see that 90% of the reconstructions have an error between -0.024 and 0.031 orders of magnitude. In this figure, we see a distinct underprediction of a small number of examples, which are the high flux values we see being underpredicted. #### 3.1.3 Wavelength Autoencoder The bottom row of figure 5 (left) shows the reconstructed wavelength values against their actual value. All reconstructed wavelength values are close to the actual values, deviating less than \(\sim 10\) nm from the graph diagonal. We can see that for wavelengths with values around 650 nm, the FAE has a tendency to underpredict. From the reconstruction error of the wavelength values plot (right), we can see that 90% of the reconstructions have an error between -0.001 and 0.001 orders of magnitude. The slight underprediction of higher wavelength values is also visible in this figure. 
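The error summaries quoted in this section (medians and 5th/95th percentiles of the reconstruction error in orders of magnitude) can be reproduced with a few lines of NumPy; this is a hedged sketch assuming `actual` and `reconstructed` arrays of positive mixing-ratio values, not the authors' analysis script.

```python
import numpy as np

def log_error_summary(actual, reconstructed, lo=5, hi=95):
    """Reconstruction error in orders of magnitude, with median and lo/hi percentiles."""
    err = np.log10(reconstructed) - np.log10(actual)
    return np.median(err), np.percentile(err, lo), np.percentile(err, hi)

# Placeholder arrays standing in for test-set mixing ratios and their reconstructions.
rng = np.random.default_rng(1)
actual = 10.0 ** rng.uniform(-20, 0, size=10_000)
reconstructed = actual * 10.0 ** rng.normal(0.0, 0.2, size=10_000)

med, p5, p95 = log_error_summary(actual, reconstructed)
print(f"median {med:+.2f}, 90% of errors within [{p5:+.2f}, {p95:+.2f}] orders of magnitude")
```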
#### 3.1.4 Pressure Profile Autoencoder

The top row of figure 6 (left) shows the reconstructed pressure values against their actual value. All reconstructed pressure values are well within 0.1 orders of magnitude of the graph diagonal. From the reconstruction error plot of the pressure values (right) we can see that 90% of the reconstructions have an error between -0.0003 and 0.0003 orders of magnitude. This figure also shows a very minor underprediction of some pressure values, which correspond to pressure values of \(\sim 10^{3}\) bar.

#### 3.1.5 Temperature Profile Autoencoder

The middle row of figure 6 (left) shows the reconstructed temperature values against their actual value, for samples from the test dataset. It is immediately obvious that this autoencoder cannot accurately reconstruct the temperature profiles. A fraction of temperatures in the range \(\sim 750\) K \(<\) T \(\lesssim 1400\) K are reconstructed close to the graph diagonal, but the TAE is largely overpredicting temperatures below \(\sim 750\) K and underpredicting temperatures above \(\sim 750\) K. The histogram of reconstruction errors of the temperature values (right) shows under- and over-prediction. 90% of the reconstructions have an error between 15.8 K and 754.97 K. Most predictions outside this range are underpredictions.

#### 3.1.6 Gravity Profile Autoencoder

The GAE shows similar behaviour to the TAE. The bottom row of figure 6 (left) shows the reconstructed gravity values against their actual value, for samples from the test dataset. Gravity values below \(\sim 4000\) cm s\({}^{-2}\) are overpredicted by the autoencoder, while values above \(\sim 4000\) cm s\({}^{-2}\) are underpredicted. Only values around \(\sim 4000\) cm s\({}^{-2}\) are predicted accurately by the GAE. In the reconstruction errors plot of the GAE of figure 6 (right) we can see that gravity values are consistently being over- and underpredicted. 90% of the reconstructions have an error between 197.46 and 4939.6 cm s\({}^{-2}\). Most predictions outside this range are over-predictions.

### Core Network

Because the TAE and GAE do not accurately reconstruct the temperature and gravity profiles, these profiles were not encoded for the final model. Instead, they were put directly in the latent representations of the inputs. This way, no information contained in these profiles is lost. The left plot in figure 7 shows the mixing ratios predicted by the trained neural network model against the actual mixing ratios for the test dataset. The histogram shows that most of the model predictions lie within \(\sim 1\) order of magnitude of the diagonal of the graph. A notable exception is the predictions for the few mixing ratios with values lower than \(\sim 10^{-44}\), for which the model overpredicts. These species can be neglected since they are not abundant enough to play a big role in the chemistry or to show features in the observable spectra. Figure 7 (right) shows the mixing ratio prediction error of the neural network model, in log-space. The solid line shows the median and the dashed lines show the 5th and 95th percentiles. From the figure, we can see that 90% of the model predictions have an error between -0.66 and 0.65 orders of magnitude. Outside of this range, the model does not show a clear tendency to either over- or underpredict. Figure 8 shows selected examples (best, typical, and worst cases) of predictions by the neural network compared with the output of the VULCAN model, for a selection of seven species.
The best case (top panel) shows a prediction that is almost indistinguishable from the actual mixing ratios. The examples in the typical case (middle panel) and worst case (lower panel) show larger prediction errors. In the typical case, CO\({}_{2}\), CO\({}_{3}\), and HCN have the largest prediction errors in the lower atmosphere, though still negligible. The worst case shows the largest prediction errors, with H having prediction errors of up to almost 1 order of magnitude in the upper atmosphere. Notably, this case shows very strong photochemistry in the upper atmosphere, as the planet is positioned close to its host star. To compare the computational efficiency of VULCAN and the neural network model, the computational time to calculate or predict every example in the full dataset was recorded. The results are presented in table 2. It should be noted that the VULCAN simulations were run on similar, but older hardware than the neural network model. The median computational times show a \(\sim 7.5\cdot 10^{3}\times\) decrease in computational time for the neural network model. The longest computational time required by the neural network model still shows a \(\sim 10^{3}\times\) decrease in computational time compared to the fastest VULCAN simulation.

Figure 5: The reconstructed against the actual input values (left column), and the reconstruction error in log space (right column) for the mixing ratios (top row), stellar flux (middle row), and the wavelengths (bottom row). The diagonal dashed line in the reconstructed vs. actual mixing ratios plot shows the performance of a perfectly reconstructing model. Here the colour represents the number of examples within each bin. In the reconstruction error figure, the solid line shows the median value, and the dashed lines show the 5th and 95th percentiles. The R\({}^{2}\) values of each reconstruction plot are shown in the left column.

Figure 6: The reconstructed against the actual input values (left column), and the reconstruction error in log space (right column) for the pressure profile (top row), temperature profile (middle row), and the gravity profile (bottom row). Note that the reconstruction errors plot of the temperature and gravity values are calculated in linear space.

## 4 Discussion

In this study, we successfully used autoencoders to extract most of the characterising input features and encode them into latent representations for the mixing ratios, stellar flux, wavelengths, and pressure profiles. Within these four groups, the largest prediction errors stem from the MRAE due to the high variability in input values, as opposed to the other input sources. We included initial and evolved mixing ratios of 69 species over 150 height layers. Additionally, the mixing ratio profiles among species differed significantly from one another (e.g. CH\({}_{4}\) and CO in figure 8). This made the complexity of extracting and encoding the fundamental input features highest for this particular autoencoder. In contrast, the variety in the flux from stellar spectra was much smaller. We obtained the stellar spectra from either the MUSCLES database or generated them using the PHOENIX model. The spectra from these different sources were quite distinct from each other in the EUV (0.5 - 200 nm). The PHOENIX models assume the spectra to follow blackbodies, while, in reality, M and K stars have been shown to be highly active in the EUV (Reiners and Basri, 2008), as was observed by the MUSCLES collaboration.
Nonetheless, the spectra within each method seemed largely similar, which made it more straightforward for the FAE to learn how to accurately reconstruct them. For both the PAE and the WAE, the profiles were linearly spaced in logarithmic space, which made it easy for the autoencoders to learn how to encode these parameters. It is remarkable, however, that the WAE was not able to perfectly reproduce the wavelengths. A solution would be to make use of a handcrafted algorithm that encodes only the first and last elements in the array; we recommend such an algorithm for future use. Finally, the temperature- and gravity profile autoencoders were not successful at encoding and reconstructing their inputs. Both autoencoders produced the same solutions for each input example. The limited data set size and large variations in the temperature and gravity example cases could explain why these autoencoders are prone to errors. Future studies could focus on improving these specific autoencoders by performing root cause analysis. However, a more specialised approach to encoding the pressure and gravity profiles would be to provide hyperparameters, such as the planet mass and radius, directly to the core model. Such an approach negates the need to train autoencoders for these input parameters. The prediction of the core network (LSTM) is within one order of magnitude for the majority (>90%) of the predictions. These errors are comparable with the discrepancies between different chemical kinetics codes (Venot et al., 2012). However, the accuracy of predictions of different examples varies. This inconsistency can arise due to some bias within the data set. Example cases similar to the best-case scenario (see figure 8) were more prevalent in the data set, causing the core network to produce better predictions for this type of hot Jupiter. Additionally, by plotting the loss of each validation case against input parameters (see figure 9) it becomes apparent that the model performs better for some system parameters than for others. From figure 9, we see that planets with smaller orbital radii seem to have worse predictions. One explanation could be that these planets endure more irradiation from their host star, making photochemistry the dominant process in the upper atmosphere.

\begin{table} \begin{tabular}{c c c c} **code** & **median** & **minimum** & **maximum** \\ \hline \hline VULCAN & 5994.3 s & 1236.7 s & 102223.0 s \\ NN model & 0.77 s & 0.73 s & 0.93 s \\ \end{tabular} \end{table} Table 2: Median, minimum and maximum running times of VULCAN and the neural network model for all configurations in the dataset. VULCAN runs were performed on a single CPU core using an Intel(R) Xeon(R) CPU E5-4620 0 @ 2.26GHz. The neural network model was run on a single core using an Intel(R) Xeon(R) W-1250 CPU @ 3.36GHz.

Figure 7: The LSTM predicted mixing ratios plotted against the actual mixing ratios (left) and the LSTM mixing ratio prediction error in log space (right). The dashed diagonal line in the left plot shows the performance of a perfectly predicting model and the colour of each bin represents the number of predictions. The solid line in the right plot shows the median value, and the dashed lines show the 5th and 95th percentiles.

Figure 8: The mixing ratios per height layer for the best (top), typical (middle), and the worst (bottom) case of the validation set. The planet parameters for each case are given at the top of the plot. The solid lines show the actual mixing ratios as calculated by VULCAN, and the dashed lines show the neural network model predictions.
The abrupt and severe changes in abundances for some species due to photodissociation in the upper atmosphere could be difficult for the core network to learn with a limited dataset as provided in this study. Also noteworthy is the correlation between the planetary mass and the performance of the core network. Higher-mass planets tend to have lower losses as compared to lower-mass planets. Future work could focus on improving the prediction losses of the chemistry profiles for lower-mass planets and planets that orbit close to their host star. We also showed that the trained model consistently over-predicts mixing ratios that have a value lower than \(10^{-44}\). This can again be explained by the lack of examples that have such low values. Species with mixing ratios this low are nevertheless small contributors to the atmospheric composition and are not expected to affect forward models. Finally, we want to note that the hyperparameters used in this study have been found by trial and error and have not been proven to be the optimal values. Future studies could focus on a hyperparameter search for each individual autoencoder and core network to find the optimal parameters. Due to all mentioned caveats, there is room for improvement in future work. Here we detail some of the aspects that are out of the scope of this paper, but we will be looking into them in future publications. Evidently, a larger training data set is expected to improve the results significantly. In order to train a neural network to be more generalised and less biased, a more diverse and extensive data set should be created. Free parameters that can be taken into account, which were not explored in this study, are variables such as the eddy diffusion coefficient\({}^{2}\), condensation, and composition of the atmosphere (e.g. varying metallicity and the C/O ratio). Footnote 2: Note that vertical mixing is taken into account in every simulation, but is kept constant throughout the dataset. Another approach that could possibly improve the results is to change the model itself. The traditional autoencoders can be replaced with _variational autoencoders_, VAEs (Kingma and Welling, 2013). These types of autoencoders are based on Bayesian statistics. It is possible to regulate the latent space such that similar input examples have similar latent representations that lie close to each other within the latent space. The core network might then be able to learn how to traverse a regulated latent space and predict more accurately. The core network itself can be improved by, for example, including more time steps within the LSTM. Adding more time steps will ensure that the network predicts the solutions in a way more similar to how VULCAN integrates toward the solution. A disadvantage of this is that the training time for the network will increase. Lastly, the recurrent neural network architecture could be changed to a _transformer_ design. Recently, the transformer neural network architecture, proposed by Vaswani et al. (2017), revolutionised the field of sequence transduction within machine learning. By using a so-called _attention_ mechanism, the transformer neural network outperforms recurrent neural networks in accuracy and efficiency.
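Returning briefly to the variational-autoencoder suggestion above, the sketch below shows the usual encoder head with the reparameterisation trick of Kingma and Welling (2013); it is purely an illustration of the proposed future modification, not part of the trained model, and the dimensions are illustrative.

```python
# Minimal VAE encoder head: predicts a mean and log-variance, samples a
# latent vector, and returns the KL term to be added to the reconstruction loss.
import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 256, latent: int = 30):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)

    def forward(self, x):
        h = self.body(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return z, kl
```

Regularising the latent space in this way is what would allow similar inputs to receive nearby latent representations, which in turn may make the latent space easier for the core network to traverse.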
Because of the similarity between transformer and recurrent neural network applications, the core model may perform better when changing to this new type of architecture. However, because transformers require tokenized inputs, the autoencoders will also have to be changed to produce the expected outputs. The implementation of a transformer would therefore increase the complexity of the entire model and should be done carefully. A different approach would be, for example, to use interpolation methods on the already existing data set. The limitation of such methods is distribution: the data set used in this study is 600 GB in size, as opposed to 3 GB for the weights of the neural network used in this study.

## 5 Summary & Conclusions

In this study, we investigated the ability of a neural network to replace the time-dependent ordinary differential equations in the chemical kinetics code VULCAN (Tsai et al., 2017, 2021). The aim of this research was to explore the LSTM architecture for solving ordinary differential equations that include vertical mixing and photo-chemistry. We first created a data set that contains the in- and outputs of VULCAN simulations of hot-Jupiter atmospheres. We made use of the planetary mass (0.5 - 20 \(M_{J}\)), the semi-major axis (0.01 - 0.5 AU), and the stellar radius (1 - 1.5 \(R_{\odot}\)) as free parameters. Other parameters for the VULCAN configurations were derived either from analytical relations or kept constant throughout the data set. The input of the data set comprises the initial mixing ratios, the stellar spectrum, the temperature- and pressure profiles, and the gravity profiles. Note that the neural network trained in this study is limited to the chosen free parameters and cannot be used for atmospheric models that include e.g. condensation. The outputs of the data set contain the mixing ratios of the species in the atmosphere, taken from 10 time steps (including the steady state) during the VULCAN simulation. This data was used to train a neural network that consists of two parts: the _autoencoder_ network and the _core_ network. The autoencoder was used to reduce the dimensionality of the input and output data from the data set by encoding them into lower dimensionality _latent representations_. The autoencoder network consisted of six smaller autoencoders, designed and trained to encode and decode the mixing ratios, flux, wavelengths, and temperature-, pressure-, and gravity profiles to and from their respective latent representations. The total input latent representation was the concatenation of these 6 smaller ones. The core network was designed to have an LSTM-based architecture and it mapped from the latent representation of the inputs to the encoded evolved output by traversing the _latent space_ in ten steps. During the training, the latent representations at these ten steps were compared to the ten sets of mixing ratios saved in the outputs of the data set to ensure that the core network is evolving the latent representation in a similar fashion as the VULCAN simulation evolves the mixing ratios.

Figure 9: The loss as a function of the semi-major axis of each validation case. The colour represents the planet mass in \(\mathbf{M_{J}}\) and the size of each scatter point represents the size of the host star which ranges between 1 \(R_{\odot}\) and 1.5 \(R_{\odot}\). The loss is calculated by making use of eq. 7.
To summarise, we found that:

* the mixing ratio, flux, wavelength, and pressure profile autoencoders were able to efficiently encode and accurately reconstruct their respective input properties;
* the autoencoders were not able to encode and decode the temperature and gravity profiles successfully. These autoencoders were therefore not used and, instead, these profiles were put directly into the latent representation of the inputs;
* the fully trained model (i.e. including the core network) was able to predict the mixing ratios of the species with errors in the range [-0.66, 0.65] orders of magnitude for 90% of the cases. Due to imbalances in the dataset, the model is biased to solve some examples more accurately than others;
* the fully trained model is \(\sim 10^{3}\) times faster than the VULCAN simulations.

Overall, this study has shown that machine learning is a suitable approach to accelerate chemical kinetics codes for modelling exoplanet atmospheres.

## Data Availability

All simulated data created in this study will be shared upon reasonable request to the corresponding author. The code and results are publicly available on github.com/JuliusHendrix/MRP.
2303.13807
PFT-SSR: Parallax Fusion Transformer for Stereo Image Super-Resolution
Stereo image super-resolution aims to boost the performance of image super-resolution by exploiting the supplementary information provided by binocular systems. Although previous methods have achieved promising results, they did not fully utilize the cross-view and intra-view information. To further unleash the potential of binocular images, in this letter, we propose a novel Transformer-based parallax fusion module called Parallax Fusion Transformer (PFT). PFT employs a Cross-view Fusion Transformer (CVFT) to utilize cross-view information and an Intra-view Refinement Transformer (IVRT) for intra-view feature refinement. Meanwhile, we adopted the Swin Transformer as the backbone for feature extraction and SR reconstruction to form a pure Transformer architecture called PFT-SSR. Extensive experiments and ablation studies show that PFT-SSR achieves competitive results and outperforms most SOTA methods. Source code is available at https://github.com/MIVRC/PFT-PyTorch.
Hansheng Guo, Juncheng Li, Guangwei Gao, Zhi Li, Tieyong Zeng
2023-03-24T05:04:52Z
http://arxiv.org/abs/2303.13807v1
# PFT-SSR: PARALLAX FUSION TRANSFORMER FOR STEREO IMAGE SUPER-RESOLUTION ###### Abstract Stereo image super-resolution aims to boost the performance of image super-resolution by exploiting the supplementary information provided by binocular systems. Although previous methods have achieved promising results, they did not fully utilize the cross-view and intra-view information. To further unleash the potential of binocular images, in this letter, we propose a novel Transformer-based parallax fusion module called Parallax Fusion Transformer (PFT). PFT employs a Cross-view Fusion Transformer (CVFT) to utilize cross-view information and an Intra-view Refinement Transformer (IVRT) for intra-view feature refinement. Meanwhile, we adopted the Swin Transformer as the backbone for feature extraction and SR reconstruction to form a pure Transformer architecture called PFT-SSR. Extensive experiments and ablation studies show that PFT-SSR achieves competitive results and outperforms most SOTA methods. All code will be available. Hansheng Guo\({}^{1}\) Juncheng Li\({}^{2,3*}\) Guangwei Gao\({}^{4}\) Zhi Li\({}^{5}\) Tieyong Zeng\({}^{1*}\)\({}^{1}\) The Chinese University of Hong Kong, Hong Kong, China; \({}^{2}\)Shanghai University, Shanghai, China \({}^{3}\)Jiangsu Key Laboratory of Image and Video Understanding for Social Safety, Nanjing, China \({}^{4}\)Nanjing University of Posts and Telecommunications, Nanjing, China \({}^{5}\)East China Normal University, Shanghai, China Stereo Image Super-Resolution, Parallax Fusion Transformer, Stereo Cross Attention, SSR.

## 1 Introduction

Binocular cameras have been widely employed to improve the perception capabilities of vision systems in devices such as self-driving vehicles and smartphones. With the rapid development of binocular cameras, stereo image super-resolution (SSR) is becoming increasingly popular in academia and industry. Specifically, SSR attempts to reconstruct a high-resolution (HR) image from a pair of low-resolution (LR) images. Since a pair of binocular images captured at the same physical location provides additional information, making full use of the information from both images is crucial for stereo image super-resolution (SSR). The easiest way to implement stereo image SR is to perform single image SR (SISR) methods [1, 2, 3, 4, 5, 6] on stereo image pairs, respectively. These approaches, however, neglect the cross-view information between the pair of images and are incapable of reconstructing high-quality images. To address this problem, current strategies have focused on building novel cross-view feature aggregation modules, loss functions, and so on, to improve the efficiency with which image pair interaction features are used. For example, [7] first combined depth estimation and image super-resolution tasks with multiple image inputs. After that, StereoSR [8] took the lead in introducing CNN into Stereo SR. iPASSR [9] suggested a symmetric bi-directional parallax attention module (biPAM) and an inline occlusion handling scheme as its cross-view interaction module to exploit symmetry cues for stereo image SR. Recently, several more advanced strategies for improving Stereo SR performance have been introduced. For instance, NAFSSR [10] designed a new CNN-based backbone NAFNet [11] and proposed a novel Stereo Cross Attention Module (SCAM) as parallax fusion block. These network topologies typically included a CNN backbone for obtaining intra-view information and a parallax fusion module for combining cross-view attention.
Owing to the existence of parallax, we find that it is also highly important for cross-view features and intra-view features to promote each other in the process of binocular feature fusion. However, these two processes in existing works are often relatively independent, which is not conducive to the full use of image features. Meanwhile, the quality of the input features is vital for image fusion efficiency. However, existing works never consider the degree of match between the backbone networks and parallax fusion blocks. Therefore, the combination of these two components will be sub-optimal. In this work, we address the aforementioned problems by introducing the Transformer to stereo image SR. Recently, Transformers have demonstrated strong performance in various low-level tasks [12, 13, 14], as they can learn global image information to further improve model performance. However, directly merging current CNN-based parallax fusion modules (PFM) and Transformer will not result in outstanding performance. This is because the CNN-based parallax fusion modules and Transformer have different properties, leading to PFM that cannot fully utilize the features from the Transformer backbone. To address this issue, we designed a new parallax fusion module, named Parallax Fusion Transformer (PFT). PFT contains a Stereo Cross Attention Module (SCAM) and a Feature Refining Module (FRM). Among them, the SCAM gets the cross-view attention and FRM will fuse the cross-view feature with the local window features. The cross-view features and intra-view features (local window features) will enhance each other to get a better representation for the image super-resolution task. With the help of PFT, the proposed model can well adapt the deep features to the parallax feature fusion blocks to fully utilize the representational potential of the Transformer. The contributions of this letter can be summarized as follows: 1) We propose a novel Parallax Fusion Transformer (PFT) layer with a Cross-view Fusion Transformer (CVFT) and an Intra-view Refinement Transformer (IVRT). 2) Based on the proposed PFT, we design a pure Transformer network (named PFT-SSR) to further improve the feature extraction ability of Transformer-based networks. 3) Extensive experiments have illustrated the effectiveness of PFT-SSR.

## 2 Methodology

In this paper, we propose a Parallax Fusion Transformer for Stereo Image Super-Resolution, called PFT-SSR. As shown in Fig. 1, the proposed PFT-SSR consists of three parts: stereo feature extraction, feature interaction, and SR image reconstruction. For Stereo SR, the model takes two images \(x^{L}_{LR}\), \(x^{R}_{LR}\in R^{B\times C_{in}\times H\times W}\) as inputs and then outputs \(x^{L}_{HR}\), \(x^{R}_{HR}\in R^{B\times C_{out}\times S*H\times S*W}\). Among them, \(B\), \(C_{in}\), \(C_{out}\), \(H\), and \(W\) are the batch size, the number of input channels, the number of output channels, the height, and the width, respectively. Meanwhile, \(S\) is the up-scaling factor, which is used to control the size of the output images. Specifically, we first use two convolutional layers to extract shallow features of the input images respectively. After that, we further extract the deeper feature representations with the SwinIR [13] backbone, which contains three consecutive Residual Swin Transformer Blocks (RSTBs) \[I^{L}_{d}=f_{ex}(f_{s}(x^{L}_{LR})),\quad I^{R}_{d}=f_{ex}(f_{s}(x^{R}_{LR})).
\tag{1}\] Then, the extracted features are fed into the proposed Parallax Fusion Transformers (PFT) for cross-view interaction and intra-view refinement \[I^{L}_{f},I^{R}_{f}=f_{PFT}(I^{R}_{d},I^{L}_{d}). \tag{2}\] With fused features, we apply RSTBs again to obtain the refined features, with a residual connection from the shallow image feature (ignored in formula for simplicity). \[I^{L}_{r}=f_{cov}(f_{re}(I^{L}_{f})),\quad I^{R}_{r}=f_{cov}(f_{re}(I^{R}_{f})). \tag{3}\] Finally, a Reconstruction module that contains a single convolutional layer and a PixelShuffle layer is used to reconstruct the final SR images.

### Swin Transformer Backbone

In this work, we use Swin Transformer Blocks [15] to build the backbone of our network. Specifically, a Swin Transformer Layer firstly reshapes the input feature map \(I_{in}\) to \(\frac{HW}{M^{2}}\times M^{2}\times C\) and performs standard self-attention locally on each window. For each of \(\frac{HW}{M^{2}}\) feature maps, let input be \(X\in R^{M^{2}\times C}\), then query, key, and value should be \[Q=XP_{Q},\quad K=XP_{K},\quad V=XP_{V}, \tag{4}\] where \(P_{Q}\), \(P_{K}\), and \(P_{V}\) are linear projection matrices. Then, the attention matrix is calculated within the local windows \[Attention(Q,K,V)=SoftMax(QK^{T}/\sqrt{d}+B)V, \tag{5}\] where \(B\) is the positional encoding for Transformer. The model also applies an MLP with two fully connected layers and GELU non-linearity on the attention matrix for feature transformations. Meanwhile, a LayerNorm [16] layer is added before both the Attention Block and the MLP, with residual connections.

Figure 1: The complete architecture of the proposed Parallax Fusion Transformer for Stereo Image Super-Resolution (PFT-SSR). This is a dual-stream network and interacts through an interaction module. **Due to page limit, please zoom in to see details.**

Though local attention can greatly reduce the amount of computation, there is no connection across local windows. To solve this problem, Swin Transformer proposed a shifted window mechanism to shift the feature map by \((\lfloor\frac{M}{2}\rfloor,\lfloor\frac{M}{2}\rfloor)\) pixels before partitioning. The process can be expressed as \[X=MSA(LN(X))+X,\quad X=MLP(LN(X))+X, \tag{6}\] where regular partitioning and shift partitioning are used alternately before each MSA. With the help of this backbone, our model can extract sufficient useful image features.

### Parallax Fusion Transformer

In order to make full use of the features of the left and right images, we propose a Parallax Fusion Transformer (PFT). As shown in Fig. 1, PFT contains 4 PFT blocks, and each PFT block consists of 6 PFT layers and a convolutional layer. Meanwhile, each PFT layer has two different Transformer blocks, i.e. Cross-view Fusion Transformer (CVFT) and Intra-view Refinement Transformer (IVRT). Among them, CVFT adopts the stereo cross-attention module (SCAM [10]) to learn the features of the other view, and IVRT uses a local-window Transformer to better merge features from the other view into its own feature map. Specifically, we first apply CVFT to achieve cross-view attention via SCAM. However, using a single-head SCAM to obtain the cross-view information cannot adapt to different parallaxes. Therefore, we further use IVRT to make cross-view information from the other branch better interact with intra-view features. With this 'Attention-Refine' paradigm, our PFT-SSR shows a compelling effect on cross-view attention.
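Before detailing CVFT and IVRT, the overall data flow of eqs. (1)-(3) can be sketched as follows, assuming PyTorch; the RSTB backbone and the PFT module are treated as injected black boxes, and all module names, channel counts, and the placeholder usage at the bottom are illustrative rather than taken from the released code.

```python
# Sketch of the PFT-SSR pipeline: shallow conv -> RSTB backbone -> PFT fusion
# -> RSTB refinement + conv (with a residual from the shallow feature)
# -> PixelShuffle reconstruction, applied to both views.
import torch
import torch.nn as nn

class PFTSSR(nn.Module):
    def __init__(self, backbone, pft, refiner, c: int = 64, scale: int = 4):
        super().__init__()
        self.shallow = nn.Conv2d(3, c, 3, padding=1)        # f_s
        self.backbone = backbone                            # f_ex: stacked RSTBs
        self.pft = pft                                      # parallax fusion (eq. 2)
        self.refiner = refiner                              # f_re: stacked RSTBs
        self.conv = nn.Conv2d(c, c, 3, padding=1)           # f_cov
        self.recon = nn.Sequential(
            nn.Conv2d(c, 3 * scale ** 2, 3, padding=1), nn.PixelShuffle(scale))

    def forward(self, x_l, x_r):
        s_l, s_r = self.shallow(x_l), self.shallow(x_r)
        d_l, d_r = self.backbone(s_l), self.backbone(s_r)   # eq. (1)
        f_l, f_r = self.pft(d_l, d_r)                       # eq. (2)
        r_l = self.conv(self.refiner(f_l)) + s_l            # eq. (3) + residual
        r_r = self.conv(self.refiner(f_r)) + s_r
        return self.recon(r_l), self.recon(r_r)

# trivial placeholders so the sketch runs; real RSTBs / PFT modules go here
model = PFTSSR(backbone=nn.Identity(), pft=lambda a, b: (a, b), refiner=nn.Identity())
sr_l, sr_r = model(torch.randn(1, 3, 32, 96), torch.randn(1, 3, 32, 96))
```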
**Cross-view Fusion Transformer (CVFT)**: The core component of CVFT is SCAM, and the whole process of SCAM is shown in Fig. 3. Given input image features \(X_{L},X_{R}\in R^{H\times W\times C}\), we first perform layer normalization to get scaled features. Due to the nature of stereo images, we use the same \(Q\) and \(K\) for representing intra-view features. Then, we get cross-view attention both from right to left and from left to right by \[\begin{split} F_{R\to L}&=Attention(T_{1}^{L} \overline{X_{L}},T_{1}^{R}\overline{X_{R}},T_{2}^{R}\overline{X_{R}}),\\ F_{L\to R}&=Attention(T_{1}^{R}\overline{X_{R}},T_{1}^ {L}\overline{X_{L}},T_{2}^{L}\overline{X_{L}}),\end{split} \tag{7}\] where \(Attention\) is defined same as Eq. (5). Besides, \(T_{1}^{L}\), \(T_{1}^{R}\), \(T_{2}^{L}\), and \(T_{2}^{R}\) are linear projection matrices. After getting the cross-view attention feature, we use a weighted residual connection to merge it to the corresponding image feature, which is formulated as \[Y_{L}=\alpha_{L}F_{R\to L}+X_{L},\quad Y_{R}=\alpha_{R}F_{L\to R}+X_{R}, \tag{8}\] where \(\alpha_{L}\) and \(\alpha_{R}\) are learnable scalars. After obtaining the corrected features, we apply an MLP and LayerNorm to get the final outputs, and the whole process can be expressed as \[X=SCAM(X)+X,\quad X=MLP(LN(X))+X. \tag{9}\] **Intra-view Refinement Transformer (IVRT)**: One key difficulty of SSR is the different parallax brought by various stereo systems. Although SCAM shows great cross-view attention ability, it cannot adapt to various parallaxes. After observing this, we used a Transformer with local-window attention for feature refinement. Regular partitioning is adopted before the MSA so that the features after the interaction of the two views can be further fused and enhanced, which is helpful for the final SR image reconstruction.

Figure 3: The architecture of stereo cross attention module.

Figure 2: Visual results (x4) achieved by different methods on Flickr1024.

## 3 Experiment

### Experimental Settings

800 images from Flickr1024 [21] and 60 images from Middlebury [22] are chosen for training. To make the Middlebury dataset match the spatial resolution of the Flickr1024 dataset, we perform bicubic downsampling by a factor of 2 on each image. Then, we apply bicubic downsampling to these GT images by factors of 2 and 4 to get the input images. We follow previous works [9, 10, 20] on this setting to make the comparison fair. During training, we use the L1 loss function for supervision, and PSNR and SSIM as quantitative metrics for easy comparison with previous methods. These metrics are calculated in RGB color space with a pair of stereo images. To evaluate SR results, we use KITTI 2012 [23], KITTI 2015 [24], Middlebury [22], and Flickr1024 [21] for test.

### Comparison to state-of-the-art methods

We compare our proposed PFT-SSR with several state-of-the-art methods, including SISR methods (e.g., EDSR [3], RCAN [17]) and stereo image SR methods (e.g., StereoSR [18], PASSRnet [19], iPASSR [9], and SSRDE-FNet [20]). According to TABLE 1, we can clearly observe that our PFT-SSR achieves outstanding results and outperforms most other SOTA methods, especially on Flickr1024. Meanwhile, we also show the qualitative comparisons in Fig. 2. Obviously, our PFT-SSR can reconstruct more accurate SR images with sharper edges and texture details. This fully demonstrates the effectiveness of the proposed PFT-SSR.

### Ablation Study

Cross-view interaction is the key part in Stereo SR.
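As a concrete reference for the cross-view interaction examined in the ablation below, the following is a minimal sketch of the SCAM computation of eqs. (7) and (8), assuming PyTorch and a (B, H, W, C) tensor layout with attention along the width (epipolar) dimension; the positional bias of eq. (5) is omitted and all names are illustrative.

```python
# Sketch of the Stereo Cross Attention Module: each view attends to the other
# along the width dimension, and the result is added back with a learnable scale.
import torch
import torch.nn as nn

class SCAM(nn.Module):
    def __init__(self, c: int):
        super().__init__()
        self.norm_l, self.norm_r = nn.LayerNorm(c), nn.LayerNorm(c)
        self.t1_l, self.t1_r = nn.Linear(c, c), nn.Linear(c, c)   # T1^L, T1^R
        self.t2_l, self.t2_r = nn.Linear(c, c), nn.Linear(c, c)   # T2^L, T2^R
        self.alpha_l = nn.Parameter(torch.zeros(1))
        self.alpha_r = nn.Parameter(torch.zeros(1))

    def forward(self, x_l, x_r):                   # (B, H, W, C)
        nl, nr = self.norm_l(x_l), self.norm_r(x_r)
        q_l, q_r = self.t1_l(nl), self.t1_r(nr)    # shared Q/K projection per view
        v_l, v_r = self.t2_l(nl), self.t2_r(nr)
        attn = (q_l @ q_r.transpose(-2, -1)) / x_l.size(-1) ** 0.5  # (B, H, W, W)
        f_r2l = attn.softmax(dim=-1) @ v_r                           # eq. (7)
        f_l2r = attn.transpose(-2, -1).softmax(dim=-1) @ v_l
        return self.alpha_l * f_r2l + x_l, self.alpha_r * f_l2r + x_r  # eq. (8)

scam = SCAM(64)
y_l, y_r = scam(torch.randn(2, 30, 90, 64), torch.randn(2, 30, 90, 64))
```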
In this part, we do ablation on the choice of this technology to show the strong stereo image fusion ability of the proposed PFT. We use Swin Transformer [15] Blocks as backbones and take the same number of Swin Transformer, biPAM [9], and our proposed PFT as the cross-view interaction module in this part. According to TABLE 2, it is obviously that the proposed PFT can improve the model performance more effectively, which fully illustrates the effectiveness of PFT. ## 4 Conclusion In this paper, we proposed a PFT-SSR for stereo image super-resolution, which contains a well-designed Parallax Fusion Transformer (PFT). PFT consists of a Cross-view Fusion Transformer (CVFT) and an Intra-view Refinement Transformer (IVRT), specially designed for cross-view interaction. It is worth mentioning that PFT can better merge different parallaxes to utilize the features of the left and right images fully. Meanwhile, PFT can also better adapt to the current popular Transformer-based backbone. Extensive experiments show that PFT-SSR outperforms most current models and achieves promising outcomes. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Scale} & \multicolumn{3}{c}{_Left_} & \multicolumn{3}{c}{\((Left+Right)\) / 2} \\ \cline{3-10} & & KITTI 2012 & KITTI 2015 & Middlebury & KITTI 2012 & KITTI 2015 & Middlebury & Flickr1024 \\ \hline EDSR [3] & \(\times\)2 & 30.83/0.9199 & 29.94/0.9231 & 34.84/0.9489 & 30.96/0.9228 & 30.73/0.9335 & 34.95/0.9492 & 28.66/0.9087 \\ RCAN [17] & \(\times\)2 & 30.88/0.9202 & 29.97/0.9231 & 34.80/0.9482 & 31.02/0.9232 & 30.77/0.9336 & 34.90/0.9486 & 28.63/0.9082 \\ StereoSR [18] & \(\times\)2 & 29.42/0.9040 & 28.53/0.9038 & 33.15/0.9343 & 29.51/0.9073 & 29.33/0.9168 & 33.23/0.9348 & 25.96/0.8599 \\ PASSRnet [19] & \(\times\)2 & 30.68/0.9159 & 29.81/0.9191 & 34.13/0.9421 & 30.81/0.9190 & 30.60/0.9300 & 34.23/0.9422 & 28.38/0.9038 \\ iPASSR [9] & \(\times\)2 & 30.97/0.9210 & 30.01/0.9234 & 34.41/0.9454 & 31.11/0.9240 & 30.81/0.9340 & 34.51/0.9454 & 28.60/0.9097 \\ SSRDE-FNet [20] & \(\times\)2 & 31.08/**0.9224** & 30.10/**0.9245** & 35.02/0.9508 & 31.23/**0.9254** & 30.90/**0.9352** & 35.09/0.9511 & 28.85/**0.9132** \\ PFT-SSR (Ous) & \(\times\)**32** & **31.15**/0.9166 & **30.16**/0.9187 & **35.08/0.9516** & **31.29**/0.9195 & **30.96**/0.9306** & **35.21**/**0.9520** & **29.09**/0.9049 \\ \hline EDSR [3] & \(\times\)4 & 26.26/0.7954 & 25.38/0.7811 & 29.15/0.8383 & 26.35/0.8015 & 26.04/0.8039 & 29.23/0.8397 & 23.46/0.7285 \\ RCAN [17] & \(\times\)4 & 26.36/0.7968 & 25.53/0.7836 & 29.20/0.8381 & 26.44/0.8029 & 26.22/0.8068 & 29.30/0.8397 & 23.48/0.7286 \\ StereoSR [18] & \(\times\)4 & 24.49/0.7502 & 23.67/0.7273 & 27.70/0.8306 & 24.53/0.7555 & 24.21/0.7511 & 27.64/0.8022 & 21.70/0.6460 \\ PASSRnet [19] & \(\times\)4 & 26.26/0.7919 & 25.41/0.7772 & 28.61/0.8232 & 26.43/0.7981 & 26.08/0.8002 & 28.72/0.8326 & 23.31/0.7195 \\ SRRes\({}^{*}\)SAM & \(\times\)4 & 26.35/0.7957 & 25.55/0.7825 & 28.76/0.8287 & 26.44/0.8018 & 26.22/0.8054 & 28.83/0.8290 & 23.27/0.7233 \\ iPASSR [9] & \(\times\)4 & 26.47/0.7993 & 25.61/0.7850 & 29.07/0.8363 & 26.56/0.8053 & 26.32/0.8084 & 29.16/0.8367 & 23.44/0.7287 \\ SSRDE-FNet [20] & \(\times\)4 & 26.61/**0.8028** & 25.74/**0.7884** & 29.29/0.8407 & 26.70/**0.8082** & 26.45/**0.8118** & 29.38/0.8411 & 23.59/**0.7352** \\ PFT-SSR (Ours) & \(\times\)4 & **26.64**/0.7913 & **25.76**/0.7775 & **29.58**/**0.8418** & **26.77**/0.7998 & **26.54**/0.8083 & **29.74**/**0.8426** & **23.89**/0.7277 \\ 
\hline \hline \end{tabular} \end{table} Table 1: Quantitative comparison on different datasets. PSNR/SSIM values achieved on both the left images (i.e., _Left_) and a pair of stereo images (i.e., _(Left_ + _Right_) / 2) are reported. Among them, the best results are **highlighted**. \begin{table} \begin{tabular}{l|c|c|c} \hline \hline **Backbone** & **Module** & **PSNR (x4)** & **SSIM (x4)** \\ \hline Swin Transformer & None & 23.54 & 0.7120 \\ Swin Transformer & RSTB (SwinIR) & 23.65 & 0.7164 \\ Swin Transformer & BiPAM & 23.42 & 0.7068 \\ Swin Transformer & PFT (Ours) & **23.83** & **0.7268** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study on PFT under Flickr1024.
2310.05185
Text2NKG: Fine-Grained N-ary Relation Extraction for N-ary relational Knowledge Graph Construction
Beyond traditional binary relational facts, n-ary relational knowledge graphs (NKGs) are comprised of n-ary relational facts containing more than two entities, which are closer to real-world facts with broader applications. However, the construction of NKGs still significantly relies on manual labor, and n-ary relation extraction still remains at a coarse-grained level, which is always restricted to a single schema and a fixed arity of entities. To address these restrictions, we propose Text2NKG, a novel fine-grained n-ary relation extraction framework for n-ary relational knowledge graph construction. We introduce a span-tuple classification approach with hetero-ordered merging to accomplish fine-grained n-ary relation extraction in different arities. Furthermore, Text2NKG supports four typical NKG schemas: hyper-relational schema, event-based schema, role-based schema, and hypergraph-based schema, with high flexibility and practicality. Experimental results demonstrate that Text2NKG outperforms the previous state-of-the-art model by nearly 20 percentage points in the $F_1$ scores on the fine-grained n-ary relation extraction benchmark in the hyper-relational schema. Our code and datasets are publicly available.
Haoran Luo, Haihong E, Yuhao Yang, Tianyu Yao, Yikai Guo, Zichen Tang, Wentai Zhang, Kaiyang Wan, Shiyao Peng, Meina Song, Wei Lin
2023-10-08T14:47:13Z
http://arxiv.org/abs/2310.05185v2
# Text2NKG: Fine-Grained N-ary Relation Extraction for N-ary relational Knowledge Graph Construction ###### Abstract Beyond traditional binary relational facts, n-ary relational knowledge graphs (NKGs) are comprised of n-ary relational facts containing more than two entities, which are closer to real-world facts with broader applications. However, the construction of NKGs still significantly relies on manual labor, and n-ary relation extraction still remains at a coarse-grained level, which is always restricted to a single schema and a fixed arity of entities. To address these restrictions, we propose Text2NKG, a novel fine-grained n-ary relation extraction framework for n-ary relational knowledge graph construction. We introduce a span-tuple classification approach with hetero-ordered merging to accomplish fine-grained n-ary relation extraction in different arities. Furthermore, Text2NKG supports four typical NKG schemas: hyper-relational schema, event-based schema, role-based schema, and hypergraph-based schema, with high flexibility and practicality. Experimental results demonstrate that Text2NKG outperforms the previous state-of-the-art model by nearly 20 percentage points in the \(F_{1}\) scores on the fine-grained n-ary relation extraction benchmark in the hyper-relational schema. Our code and datasets are publicly available at [https://github.com/LHRLAB/Text2NKG](https://github.com/LHRLAB/Text2NKG).

## 1 Introduction

Modern knowledge graphs, such as Freebase Bollacker et al. (2008), Google Knowledge Vault Dong et al. (2014), and Wikidata Vrandecic & Krotzsch (2014), convert unstructured knowledge to structured multi-relational graphs with various applications in question answering Yih et al. (2015), query answering Arakelyan et al. (2021), logical reasoning Chen et al. (2022), and recommendation systems Zhang et al. (2016). Traditional knowledge graphs consist of triple-based facts (\(subject\), \(relation\), \(object\)) with two entities and one relation between them Bordes et al. (2013); Balazevic et al. (2019). However, real-world facts tend to contain more than two entities; these are called n-ary relational facts (\(n\geq 2\)) and cannot be represented by merging binary relations. For example, consider the statement "Einstein received his Bachelor degree in Mathematics and his Doctorate degree in Physics." If it is broken down into binary relations, we cannot merge them again effectively because we cannot determine whether Einstein's Doctorate major was Physics or Mathematics, which necessitates the use of N-ary relational Knowledge Graphs (NKGs) to represent such information, e.g., (Einstein, degree, Doctorate, major, Physics).

Figure 1: An example of NKG construction.

As shown in Figure 1, an NKG consists of numerous n-ary relational facts with richer knowledge representation and a wider application capability. Each NKG has a schema to represent the structure of every n-ary relational fact in the NKG. For example, Wikidata utilizes n-ary relational facts with hyper-relational schema (Rosso et al., 2020; Galkin et al., 2020; Wang et al., 2021a), i.e., \((s,r,o,\{(k_{i},v_{i})\}_{i=1}^{n-2})\) adds \((n-2)\) key-value pairs to the main triple to represent auxiliary information, forming an n-ary relational fact with n entities.
In addition to the hyper-relational schema, the n-ary relational facts of NKG also have event-based schema (\(r,\{(k_{i},v_{i})\}_{i=1}^{n}\)) (Guan et al., 2022; Lu et al., 2021), role-based schema (\(\{(k_{i},v_{i})\}_{i=1}^{n}\)) (Guan et al., 2019; Liu et al., 2021) and hypergraph-based schema (\(r,\{v_{i}\}_{i=1}^{n}\)) (Wen et al., 2016; Fatemi et al., 2021) for different scenarios. Extracting these n-ary relational facts from textual knowledge is called n-ary relation extraction, which is the key step in NKG construction. Taking the real-world textual fact "Einstein received his Doctorate degree in Physics from the University of Zurich." as an example, through n-ary relation extraction we can extract a four-arity structured span-tuple of entities (Einstein, University of Zurich, Doctorate, Physics) together with the corresponding answer label-list of relations, forming a 4-ary relational fact from the sentence, as shown in Figure 2. However, most existing NKGs, such as JF17K (Wen et al., 2016), Wikipeople (Guan et al., 2019), WD50K (Galkin et al., 2020), EventKG (Guan et al., 2022), etc., are constructed manually rather than automatically. The key step of knowledge graph construction is relation extraction, but most relation extraction methods target traditional binary relational facts (Wang and Lu, 2020; Zhong and Chen, 2021; Ye et al., 2022). Current n-ary relation extraction methods are always focused on coarse-grained extraction with solid keys (Jia et al., 2019; Zhuang et al., 2022), but are not competent for fine-grained NKG construction with various n-ary relations. Recently, CubeRE (Chia et al., 2022) proposes the cube-filling method, the only fine-grained n-ary relation extraction method. Nevertheless, it can only perform hyper-relational extraction with limited accuracy and cannot cover other useful NKG schemas. To address these challenges, we propose a novel n-ary relation extraction framework, Text2NKG, which automates the generation of n-ary relational facts from natural language text for NKG construction. Text2NKG proposes a span-tuple multi-label classification method with hetero-ordered merging, which converts n-ary relation extraction into a multi-label classification problem for span-tuples consisting of all arrangements of three entities in a sentence. The number of labels is determined by the number of relations in the selected NKG schema. Text2NKG can be applied to all NKG schemas, with hyper-relational schema, event-based schema, role-based schema, and hypergraph-based schema provided as examples, which have a wide range of applications. In addition, we extend the current n-ary relation extraction benchmark HyperRED (Chia et al., 2022), which is only in the hyper-relational schema, to four NKG schemas. We have conducted extensive n-ary relation extraction experiments on HyperRED, and the experimental results show that Text2NKG outperforms the existing state-of-the-art model CubeRE by nearly 20 percentage points in \(F_{1}\) score for hyper-relational extraction. We also compare the results of Text2NKG across the four schemas to verify its applicability.

Figure 2: An example of fine-grained n-ary relation extraction in four NKG schemas.

## 2 Related Work

**N-ary relational Knowledge Graph.** An n-ary relational knowledge graph (NKG) consists of n-ary relational facts, which contain \(n\) entities (\(n\geq 2\)) and several relations.
The n-ary relational facts are necessary and cannot be replaced by combinations of some binary relational facts because we cannot distinguish which binary relations are combined to represent the n-ary relational fact in the whole KG. Therefore, an NKG utilizes a schema in every n-ary relational fact locally and a hypergraph representation globally. Firstly, the simplest NKG schema is hypergraph-based. Wen et al. (2016) found that over 30% of Freebase (Bollacker et al., 2008) entities participate in facts with more than two entities, first defined n-ary relations mathematically and used star-to-clique conversion to convert triple-based facts representing n-ary relational facts into the first NKG dataset JF17K in hypergraph-based schema (\(r,\{v_{i}\}_{i=1}^{n}\)). Fatemi et al. (2021) proposed FB-AUTO and M-FB15K with the same hypergraph-based schema. Secondly, Guan et al. (2019) introduced role information for n-ary relational facts and extracted Wikipeople, the first NKG dataset in role-based schema (\(\{(k_{i},v_{i})\}_{i=1}^{n}\)), composed of role-value pairs. Thirdly, Wikidata (Vrandecic and Krotzsch, 2014), the largest knowledge base, utilizes an NKG schema based on hyper-relation (\(s,r,o,\{(k_{i},v_{i})\}_{i=1}^{n-2}\)), which adds auxiliary key-value pairs to the main triple. Galkin et al. (2020) first proposed an NKG dataset in hyper-relational schema WD50K. Fourthly, as Guan et al. (2022) pointed out, events are also n-ary relational facts. One basic event representation has an event type, a trigger, and several key-value pairs (Lu et al., 2021). Regarding the event type as the main relation, the (trigger: value) as one of the key-value pairs, and the arguments as the remaining key-value pairs, we can obtain an event-based NKG schema (\(r,\{(k_{i},v_{i})\}_{i=1}^{n}\)). Based on four common NKG schemas, we propose Text2NKG, the first method for extraction of structured n-ary relational facts from natural language text, which improves NKG representation and application. **N-ary Relation Extraction.** Relation extraction is an important part of knowledge graph construction, directly affecting the quality, scale, and application of KGs. However, most current n-ary relation extraction for NKG construction depends on manual construction (Wen et al., 2016; Guan et al., 2019; Galkin et al., 2020) rather than automated methods. Most automated relation extraction methods target the extraction of traditional binary relational facts. For example, Wang and Lu (2020) propose a table-filling method for binary relation extraction, and Zhong and Chen (2021); Ye et al. (2022) propose span-based relation extraction methods with levitated marker and packed levitated marker, respectively. For automated n-ary relation extraction, some approaches (Jia et al., 2019; Jain et al., 2020; Viswanathan et al., 2021) treat n-ary relation extraction as a binary classification problem and predict whether the composition of n-ary information in a document is valid or not. However, these methods extract n-ary information in fixed arity, which is not flexible. Moreover, most of these methods are based on the coarse-grained level with solid keys, which is not competent for fine-grained NKG construction with various n-ary relations. Recently, Chia et al. (2022) proposes the only automated n-ary relation extraction method, CubeRE, which extends the table-filling extraction method to n-ary relation extraction with cube-filling. However, it can only model the hyper-relational schema with limited extraction accuracy.
In this paper, we propose the first fine-grained n-ary relation extraction framework Text2NKG for NKG construction in four example schemas, proposing a span-tuple multi-label classification method with hetero-ordered merging to improve the accuracy of hyper-relational extraction substantially.

## 3 Preliminaries

**Formulation of NKG.** An NKG \(\mathcal{G}=\{\mathcal{E},\mathcal{R},\mathcal{F}\}\) consists of an entity set \(\mathcal{E}\), a relation set \(\mathcal{R}\), and an n-ary fact (n\(\geq\)2) set \(\mathcal{F}\). Each n-ary fact \(f^{n}\in\mathcal{F}\) consists of entities \(\in\)\(\mathcal{E}\) and relations \(\in\)\(\mathcal{R}\). For hyper-relational schema (Rosso et al., 2020): \(f^{n}_{hr}=(e_{1},r_{1},e_{2},\{r_{i-1},e_{i}\}_{i=3}^{n})\) where \(\{e_{i}\}_{i=1}^{n}\in\mathcal{E}\), \(\{r_{i}\}_{i=1}^{n-1}\in\mathcal{R}\). For event-based schema (Lu et al., 2021): \(f^{n}_{ev}=(r_{1},\{r_{i+1},e_{i}\}_{i=1}^{n})\), where \(\{e_{i}\}_{i=1}^{n}\in\mathcal{E}\), \(\{r_{i}\}_{i=1}^{n+1}\in\mathcal{R}\). For role-based schema (Guan et al., 2019): \(f^{n}_{ro}=(\{r_{i},e_{i}\}_{i=1}^{n})\), where \(\{e_{i}\}_{i=1}^{n}\in\mathcal{E}\), \(\{r_{i}\}_{i=1}^{n}\in\mathcal{R}\). For hypergraph-based schema (Wen et al., 2016): \(f^{n}_{hg}=(r_{1},\{e_{i}\}_{i=1}^{n})\), where \(\{e_{i}\}_{i=1}^{n}\in\mathcal{E}\), \(r_{1}\in\mathcal{R}\).

**Problem Definition.** Given an input sentence with \(l\) words \(s=\{w_{1},w_{2},...,w_{l}\}\), an entity \(e\) is a consecutive span of words: \(e=\{w_{p},w_{p+1},...,w_{q}\}\in\mathcal{E}_{s}\), where \(p,q\in\{1,...,l\}\), and \(\mathcal{E}_{s}=\{e_{j}\}_{j=1}^{m}\) is the entity set of all \(m\) entities in the sentence. The output of n-ary relation extraction, \(R()\), is a set of n-ary relational facts \(\mathcal{F}_{s}\) in the given NKG schema in \(\{f_{hr}^{n},f_{ev}^{n},f_{ro}^{n},f_{hg}^{n}\}\). Specifically, each n-ary relational fact \(f^{n}\in\mathcal{F}_{s}\) is extracted by multi-label classification of one of the ordered span-tuples for \(n\) entities \([e_{i}]_{i=1}^{n}\in\mathcal{E}_{s}\), forming an answer label-list for \(n_{r}\) relations \([r_{i}]_{i=1}^{n_{r}}\in\mathcal{R}\), where \(n\) is the arity of the extracted n-ary relational fact, and \(n_{r}\) is the number of answer relations in the fact, which is determined by the given NKG schema: \(R([e_{i}]_{i=1}^{n})=[r_{i}]_{i=1}^{n-1}\) when \(f^{n}=f_{hr}^{n}\), \(R([e_{i}]_{i=1}^{n})=[r_{i}]_{i=1}^{n+1}\) when \(f^{n}=f_{ev}^{n}\), \(R([e_{i}]_{i=1}^{n})=[r_{i}]_{i=1}^{n}\) when \(f^{n}=f_{ro}^{n}\), and \(R([e_{i}]_{i=1}^{n})=[r_{1}]\) when \(f^{n}=f_{hg}^{n}\).

## 4 Methodology

In this section, we first introduce the overview of the Text2NKG framework, followed by the span-tuple multi-label classification, training strategy, hetero-ordered merging, and output merging.

### Overview of Text2NKG

Text2NKG is a fine-grained n-ary relation extraction framework built for n-ary relational knowledge graph (NKG) construction. The input to Text2NKG is natural language text, in sentence units, with labeled entity spans. We **extract 3-ary facts as an atomic unit** and then **merge them into n-ary facts** later to realize n-ary extraction of arbitrary arity. This is because, if binary facts were used, merging them into n-ary facts based on shared elements within these facts would lead to ambiguities, as analyzed in Section 1. On the other hand, using facts with four entities or more makes it challenging to judge which of the included 3-ary facts can be extracted as independent facts.
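Before walking through the pipeline, the four fact schemas formalised in the preliminaries can be written down as simple data structures. The sketch below is only illustrative; the field names are ours and not taken from the released code.

```python
# Illustrative containers for the four NKG fact schemas defined above.
from dataclasses import dataclass
from typing import List, Tuple

Entity = str
Relation = str

@dataclass
class HyperRelationalFact:            # (s, r, o, {(k_i, v_i)})
    subject: Entity
    relation: Relation
    obj: Entity
    qualifiers: List[Tuple[Relation, Entity]]

@dataclass
class EventFact:                      # (r, {(k_i, v_i)})
    event_type: Relation
    arguments: List[Tuple[Relation, Entity]]

@dataclass
class RoleBasedFact:                  # ({(k_i, v_i)})
    role_value_pairs: List[Tuple[Relation, Entity]]

@dataclass
class HypergraphFact:                 # (r, {v_i})
    relation: Relation
    entities: List[Entity]

fact = HyperRelationalFact("Einstein", "degree", "Doctorate",
                           qualifiers=[("major", "Physics")])
```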
Specifically, inspired by Ye et al. (2022), Text2NKG first encodes the entities using a BERT-based encoder (Devlin et al., 2019) with packed levitated markers for embedding. Then, each ordered span-tuple arrangement of three entity embeddings is classified with multiple labels, and the framework is trained with a weighted cross-entropy loss with a null-label bias. In the decoding stage, in order to filter the n-ary relational facts whose entity compositions have isomorphic hetero-ordered characteristics, Text2NKG proposes a hetero-ordered merging strategy to merge the label probabilities of the \(3!=6\) arrangement cases of span-tuples composed of the same entities and filter out output 3-ary relational facts containing non-conforming relations. Finally, Text2NKG combines the output 3-ary relational facts to form the final n-ary relational facts with output merging.

Figure 3: An overview of Text2NKG extracting n-ary relation facts from a natural language sentence in the hyper-relational NKG schema, as an example.

### Span-tuple Multi-label Classification

For the given sentence token \(s=\{w_{1},w_{2},...,w_{l}\}\) and the set of entities \(\mathcal{E}_{s}\), in order to perform fine-grained n-ary relation extraction, we first need to encode a span-tuple (\(e_{1},e_{2},e_{3}\)) consisting of every arrangement of three ordered entities, where \(e_{1},e_{2},e_{3}\in\mathcal{E}_{s}\). Due to the high time complexity of training every span-tuple as one training item, inspired by Ye et al. (2022), we achieve the reduction of training items by using packed levitated markers that pack one training item with each entity in \(\mathcal{E}_{s}\) separately. Specifically, in each packed training item, a pair of solid tokens, [S] and [/S], are added before and after the packed entity \(e_{S}=\{w_{p_{S}},...,w_{q_{S}}\}\), and (\(|\mathcal{E}_{s}|-1\)) pairs of levitated markers, [L] and [/L], according to other entities in \(\mathcal{E}_{s}\), are added with the same position embeddings as the beginning and end of their corresponding entity span \(e_{L_{i}}=\{w_{p_{L_{i}}},...,w_{q_{L_{i}}}\}\) to form the input token \(\mathbf{X}\): \[\begin{split}\mathbf{X}=&\{w_{1},...,[S],w_{p_{S}},...,w_{q_{S}},[/S],...,\\ w_{p_{L_{i}}}\cup[L],...,w_{q_{L_{i}}}\cup[/L],...,w_{l}\}.\end{split} \tag{1}\] We encode such a token with the BERT-based pre-trained encoder (Devlin et al., 2019): \[\{h_{1},h_{2},...,h_{t}\}=\text{BERT}(\mathbf{X}), \tag{2}\] where \(t=|\mathbf{X}|\) is the input token length, \(\{h_{i}\}_{i=1}^{t}\in\mathbb{R}^{d}\), and \(d\) is the embedding size. There are several span-tuples (\(A,B,C\)) in a training item. The embedding of the first entity \(h_{A}\in\mathbb{R}^{2d}\) in the span-tuple is obtained by concatenating the embeddings of the solid markers, [S] and [/S], and the embeddings of the second and third entities \(h_{B},h_{C}\in\mathbb{R}^{2d}\) are obtained by concatenating the embeddings of the levitated markers, [L] and [/L], for all \(A_{m-1}^{2}\) arrangements of any other two entities in \(\mathcal{E}_{s}\). Thus, we obtain the embedding representation of the three entities to form \(A_{m-1}^{2}\) span-tuples in one training item. Therefore, every input sentence contains \(m\) training items with \(mA_{m-1}^{2}=A_{m}^{3}\) span-tuples for any ordered arrangement of three entities. We then define \(3\times n_{r}\) linear classifiers \(\{\text{FNN}_{i}^{k}\}_{i=1}^{n_{r}},k=1,2,3\) to classify the span-tuples for multi-label classification.
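The following is a minimal sketch of these \(3\times n_{r}\) classifiers, assuming a PyTorch implementation; the exact way their outputs are combined is given in eq. (3) below, and the class name, sizes, and the usage values at the bottom are illustrative (the relation count 106 simply mirrors the hyper-relational statistics reported later).

```python
# Sketch of the span-tuple multi-label classification head: for each of the
# n_r relation slots, three linear classifiers score the first, second, and
# third entity embeddings over |R|+1 classes (all relations plus a null label),
# and their logits are summed.
import torch
import torch.nn as nn

class SpanTupleClassifier(nn.Module):
    def __init__(self, d: int, num_relations: int, n_r: int = 2):
        super().__init__()
        # one classifier per (relation slot, entity position); inputs are the
        # concatenated start/end marker embeddings of size 2d
        self.heads = nn.ModuleList([
            nn.ModuleList([nn.Linear(2 * d, num_relations + 1) for _ in range(3)])
            for _ in range(n_r)])

    def forward(self, h_a, h_b, h_c):              # each: (batch, 2d)
        logits = [slot[0](h_a) + slot[1](h_b) + slot[2](h_c) for slot in self.heads]
        return torch.stack(logits, dim=1)          # (batch, n_r, |R|+1)

head = SpanTupleClassifier(d=768, num_relations=106, n_r=2)
scores = head(torch.randn(4, 1536), torch.randn(4, 1536), torch.randn(4, 1536))
```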
Each classifier targets the prediction of one relation \(r_{i}\), thus obtaining probability lists \((\mathbf{P}_{i})_{i=1}^{n_{r}}\) over all relations in the given relation set \(\mathcal{R}\) plus a null-label: \[\mathbf{P}_{i}=\text{FNN}_{i}^{1}(h_{A})+\text{FNN}_{i}^{2}(h_{B})+\text{FNN}_{i}^{3}(h_{C}), \tag{3}\] where \(\text{FNN}_{i}^{k}\in\mathbb{R}^{2d\times(|\mathcal{R}|+1)}\), and \(\mathbf{P}_{i}\in\mathbb{R}^{(|\mathcal{R}|+1)}\).

### Training Strategy

In order to train the \(n_{r}\) classifiers for each relation prediction more accurately, we performed a data augmentation strategy in terms of span-tuples. Taking the hyper-relational schema as an example, given a hyper-relational fact (\(A,r_{1},B,r_{2},C\)), we consider swapping the head and tail entities and changing the main relation to the corresponding inverse relation (\(B,r_{1}^{-1},A,r_{2},C\)), as well as swapping the tail entities and auxiliary values, and swapping the main relation and the auxiliary key (\(A,r_{2},C,r_{1},B\)) also as labeled training span-tuple cases. Thus \(R_{hr}(A,B,C)=(r_{1},r_{2})\) can be augmented with \(3!=6\) orders of span-tuples: \[\begin{cases}R_{hr}(A,B,C)=(r_{1},r_{2}),\\ R_{hr}(B,A,C)=(r_{1}^{-1},r_{2}),\\ R_{hr}(A,C,B)=(r_{2},r_{1}),\\ R_{hr}(B,C,A)=(r_{2},r_{1}^{-1}),\\ R_{hr}(C,A,B)=(r_{2}^{-1},r_{1}),\\ R_{hr}(C,B,A)=(r_{1},r_{2}^{-1}).\end{cases} \tag{4}\] For other schemas, we can also obtain 6 fully-arranged cases of labeled span-tuples in a similar way, as described in Appendix A. If no n-ary relational fact exists between the three entities of a span-tuple, its relation labels are set to the null-label. Since most cases of span-tuple are null-label, we set a weight hyperparameter \(\alpha\in(0,1]\) between the null-labels and other labels to balance the learning of the null-label. We jointly train the classifiers for each relation with a cross-entropy loss \(\mathcal{L}\) with a null-label weight bias \(\mathbf{W}_{\alpha}\): \[\mathcal{L}=-\sum_{i=1}^{n_{r}}\mathbf{W}_{\alpha}\log\left(\frac{\exp\left( \mathbf{P}_{i}[r_{i}]\right)}{\sum_{j=1}^{|\mathcal{R}|+1}\exp\left(\mathbf{P} _{ij}\right)}\right), \tag{5}\] where \(\mathbf{W}_{\alpha}=[\alpha,1.0,1.0,...,1.0]\in\mathbb{R}^{(|\mathcal{R}|+1)}\).

### Hetero-ordered Merging

In the decoding stage, since Text2NKG labels all 6 different arrangements of the same entity composition, we design a hetero-ordered merging strategy to merge the corresponding labels of these 6 hetero-ordered span-tuples into one to generate non-repetitive n-ary relational facts unsupervisedly. For the hyper-relational schema (\(n_{r}=2\)), we combine the predicted probabilities of two labels \(\mathbf{P}_{1},\mathbf{P}_{2}\) in 6 orders to \((A,B,C)\) order as follows: \[\begin{cases}\mathbf{P}_{1}=\mathbf{P}_{1}^{(ABC)}+I(\mathbf{P}_{1}^{(BAC)}) +\mathbf{P}_{2}^{(ACB)}\\ \qquad+I(\mathbf{P}_{2}^{(BCA)})+\mathbf{P}_{2}^{(CAB)}+\mathbf{P}_{1}^{(CBA)},\\ \mathbf{P}_{2}=\mathbf{P}_{2}^{(ABC)}+\mathbf{P}_{2}^{(BAC)}+\mathbf{P}_{1}^{( ACB)}\\ \qquad+\mathbf{P}_{1}^{(BCA)}+I(\mathbf{P}_{1}^{(CAB)})+I(\mathbf{P}_{2}^{(CBA)} ),\end{cases} \tag{6}\] where \(I()\) is a function for swapping the predicted probability of relations and the corresponding inverse relations. Then, we take the maximum probability to obtain labels \(r_{1},r_{2}\), forming a 3-ary relational fact (\(A,r_{1},B,r_{2},C\)) and filter it out if there are null-labels in \((r_{1},r_{2})\).
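A minimal sketch of this merging step for the hyper-relational case, transcribing eq. (6) directly; the inverse-relation index map `inv` is an assumption about how relation ids and their inverses are stored, and the dictionary keys are illustrative.

```python
# Sketch of hetero-ordered merging: probabilities predicted for the 6 orderings
# of the same three entities are folded back to the (A, B, C) order, swapping a
# relation with its inverse where the ordering requires it, then argmax-decoded.
import torch

def merge_hetero_ordered(p, inv):
    """p: dict mapping an ordering such as 'ABC' to a pair (P1, P2) of
    probability vectors over |R|+1 labels; inv: LongTensor permutation that
    maps each relation id to the id of its inverse (null label maps to itself)."""
    def I(v):                                   # swap relations with their inverses
        return v[inv]
    p1 = (p['ABC'][0] + I(p['BAC'][0]) + p['ACB'][1]
          + I(p['BCA'][1]) + p['CAB'][1] + p['CBA'][0])
    p2 = (p['ABC'][1] + p['BAC'][1] + p['ACB'][0]
          + p['BCA'][0] + I(p['CAB'][0]) + I(p['CBA'][1]))
    r1, r2 = int(p1.argmax()), int(p2.argmax())
    return r1, r2        # the fact (A, r1, B, r2, C) is dropped if either is null
```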
If there are inverse relation labels in \((r_{1},r_{2})\), we can also transform the order of entities and relations as equation 4. For event-based schema, role-based schema, and hypergraph-based schema, all can be generated by hetero-ordered merging according to this idea, shown in Appendix B. ### Output Merging After hetero-ordered merging, we merge the output 3-ary relational facts to form higher-arity facts, with hyper-relational schema based on the same main triple, event-based schema based on the same main relation (event type), role-based schema based on the same key-value pairs, and hypergraph-based schema based on the same hyperedge relation. This way, we can **unsupervisedly** obtain n-ary relational facts **with dynamic number of arity numbers** for NKG construction. ## 5 Experiments This section presents the experimental setup, results, and analysis. We answer the following research questions (RQs): **RQ1**: Does Text2NKG outperform other fine-grained n-ary relation extraction methods? **RQ2**: Whether Text2NKG can cover NKG construction for various schemas? **RQ3**: Does the main components of Text2NKG work? **RQ4**: How does the null-label bias hyperparameter in Text2NKG affect performance? **RQ5**: Can Text2NKG get complete n-ary relational facts in different arity? **RQ6**: How does Text2NKG perform in specific case study? **RQ7**: What is the future development of Text2NKG in the era of large language models? ### Experimental Setup Datasets.The HyperRED (Chia et al., 2022) dataset is the only existing dataset for extracting n-ary relations with annotated extracted entities. Therefore, we expand the HyperRED dataset to four \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**\#Ent**} & \multirow{2}{*}{**\#R\_hr**} & \multirow{2}{*}{**\#R\_av**} & \multirow{2}{*}{**\#R\_ro**} & \multirow{2}{*}{**\#R\_bg**} & \multicolumn{2}{c}{**All**} & \multicolumn{2}{c}{**Train**} & \multicolumn{2}{c}{**Dev**} & \multicolumn{2}{c}{**Test**} \\ \cline{6-12} & & & & & \#Sentence & & \#Fact & & \#Sentence & & \#Fact & & \#Sentence & \#Fact & & \#Sentence & \#Fact \\ \hline HyperRED & 40,293 & 106 & 232 & 168 & 62 & 44,840 & 45,994 & 39,840 & 39,978 & 1,000 & 1,220 & 4,000 & 4,796 \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset statistics, where the columns indicate the number of entities, relations with four schema, sentences and n-ary relational facts in all sets, train set, dev set, and test set,respectively. schemas as standard fine-grained n-ary relation extraction benchmarks and conduct experiments on them. The statistics of the HyperRED with four schemas are shown in Table 1, and the construction detail is in Appendix C. Baselines.We compare Text2NKG against **Generative Baseline**(Lewis et al., 2020), **Pipeline Baseline**(Wang et al., 2021), and **CubeRE**(Chia et al., 2022) in fine-grained n-ary relation extraction task of hyper-relational schema. For n-ary relation extraction in the other three schemas, we compared Text2NKG with event extraction models such as **Text2Event**(Lu et al., 2021), **UIE**(Lu et al., 2022), and **LasUIE**(Fei et al., 2022). Furthermore, we utilized different prompts to test the currently most advanced large-scale pre-trained language models **ChatGPT**(Wei et al., 2023) and **GPT-4**(OpenAI, 2023) in an unsupervised manner, specifically for the extraction performance across the four schemas. The detailed baseline settings can be found in Appendix D. 
Ablations.To evaluate the significance of Text2NKG's three main components, data augmentation (DA), null-label weight hyperparameter (\(\alpha\)), and hetero-ordered merging (HM), we obtain three simplified model variants by removing any one component from the model (**Text2NKG w/o DA, Text2NKG w/o \(\alpha\)**, and **Text2NKG w/o HM**) for comparison. Evaluation Metrics.We use the \(F_{1}\) score with precision and recall to evaluate the dev set and the test set. For a predicted n-ary relational fact to be considered correct, the entire fact must match the ground facts completely. Hyperparameters and Environment.We train 10 epochs on HyperRED using the Adam optimizer. All experiments were done on a single NVIDIA A100 GPU, and all experimental results were derived by averaging five random seed experiments. Appendix E shows Text2NKG's optimal hyperparameter settings. Appendix F shows training details. \begin{table} \begin{tabular}{l c c c c|c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**PLM**} & \multicolumn{3}{c}{**HyperRED : Hyper-relational / Dev**} & \multicolumn{3}{c}{**HyperRED : Hyper-relational / Test**} \\ \cline{3-8} & & Precision & Recall & \(F_{1}\) & Precision & Recall & \(F_{1}\) \\ \hline \multicolumn{8}{c}{**Unsupervised Method**} \\ \hline ChaGPT & gpt-3.5-turbo (\(\approx\)175dB) & 12.0583 & 11.2764 & 11.6542 & 11.4021 & 10.9134 & 11.1524 \\ GPT-4 & gpt-4 (\(\approx\)1760dB) & 15.7324 & 15.2377 & 15.4811 & 15.8187 & 15.4824 & 15.6487 \\ \hline \multicolumn{8}{c}{**Supervised Method**} \\ \hline Generative Baseline & & 63.79 \(\pm\) 0.27 & 39.94 \(\pm\) 0.68 & 61.80 \(\pm\) 0.37 & 64.60 \(\pm\) 0.47 & 59.67 \(\pm\) 0.35 & 62.03 \(\pm\) 0.21 \\ Pipeline Baseline & & 69.23 \(\pm\) 0.30 & 58.21 \(\pm\) 0.57 & 63.24 \(\pm\) 0.44 & 69.00 \(\pm\) 0.48 & 57.55 \(\pm\) 0.19 & 62.75 \(\pm\) 0.29 \\ CubeRE & & 66.14 \(\pm\) 0.88 & 64.39 \(\pm\) 1.23 & 65.23 \(\pm\) 0.82 & 65.82 \(\pm\) 0.84 & 64.28 \(\pm\) 0.25 & 65.04 \(\pm\) 0.29 \\ Text2NKG w/o DA & BERT-base (1100dB) & 16.02 \(\pm\) 0.20 & 72.08 \(\pm\) 0.68 & 74.01 \(\pm\) 0.55 & 73.55 \(\pm\) 0.81 & 70.63 \(\pm\) 1.40 & 72.06 \(\pm\) 0.34 \\ Text2NKG w/o DA & & 88.77 \(\pm\) 0.85 & 73.39 \(\pm\) 0.47 & 83.26 \(\pm\) 0.70 & 88.09 \(\pm\) 0.69 & 76.64 \(\pm\) 0.45 & 81.97 \(\pm\) 0.58 \\ Text2NKG w/o HM & & 61.74 \(\pm\) 0.34 & 76.97 \(\pm\) 0.44 & 68.52 \(\pm\) 0.69 & 61.07 \(\pm\) 0.73 & 76.16 \(\pm\) 0.59 & 67.72 \(\pm\) 0.48 \\ Text2NKG (ours) & & **91.26 \(\pm\) 0.69** & **79.36 \(\pm\) 0.51** & **84.89 \(\pm\) 0.44** & **90.77 \(\pm\) 0.60** & **77.53 \(\pm\) 0.32** & **83.63 \(\pm\) 0.63** \\ \hline Generative Baseline & & 67.08 \(\pm\) 0.49 & 67.33 \(\pm\) 0.78 & 66.40 \(\pm\) 0.47 & 67.17 \(\pm\) 0.40 & 54.60 \(\pm\) 0.58 & 65.84 \(\pm\) 0.25 \\ Pipeline Baseline & & 70.58 \(\pm\) 0.78 & 66.56 \(\pm\) 0.66 & 65.82 \(\pm\) 0.32 & 69.21 \(\pm\) 0.55 & 64.71 \(\pm\) 0.24 & 66.65 \(\pm\) 0.28 \\ CubeRE & & 68.75 \(\pm\) 0.82 & 68.88 \(\pm\) 1.03 & 68.81 \(\pm\) 0.46 & 66.39 \(\pm\) 0.96 & 67.12 \(\pm\) 0.69 & 66.75 \(\pm\) 0.28 \\ Text2NKG (ours) & & **91.90 \(\pm\) 0.79** & **79.43 \(\pm\) 0.42** & **85.21 \(\pm\) 0.69** & **91.66 \(\pm\) 0.81** & **77.64 \(\pm\) 0.46** & **83.81 \(\pm\) 0.54** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of Text2NKG with other baselines in the hyper-relational extraction on HyperRED. Results of the supervised baseline models are mainly taken from the original paper (Chia et al., 2022). The best results in each metric are in **bold**. 
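For reference, the exact-match micro precision/recall/\(F_{1}\) described above can be computed with the small helper below; facts are assumed to be represented as hashable tuples, which is an implementation choice rather than something prescribed by the benchmark.

```python
def exact_match_prf1(predicted_facts, gold_facts):
    """A predicted n-ary fact counts as correct only if the entire tuple
    matches a ground-truth fact exactly."""
    pred, gold = set(predicted_facts), set(gold_facts)
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```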
\begin{table} \begin{tabular}{l c c c c c|c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**PLM**} & \multicolumn{3}{c}{**HyperRED : Event-based**} & \multicolumn{3}{c}{**HyperRED : Role-based**} & \multicolumn{3}{c}{**HyperRED : Hypergraph-based**} \\ \cline{3-10} & & Precision & Recall & \(F_{1}\) & Precision & Recall & \(F_{1}\) & Precision & Recall & \(F_{1}\) \\ \hline \multicolumn{8}{c}{**Unsupervised Method**} \\ \hline ChaGPT & gpt-3.5-turbo (\(\approx\)175dB) & 10.4678 & 11.1628 & 10.8041 & 11.4387 & 10.4203 & 10.9058 & 11.2998 & 11.7852 & 11.5373 \\ GPT-4 & gpt-4 (\(\approx\)1760dB) & 13.3681 & 14.6701 & 13.98818 & 13.6397 & 12.5355 & 13.0643 & 13.0907 & 13.6701 & 13.7341 \\ \hline \multicolumn{8}{c}{**Supervised Method**} \\ \hline Tex2Event & & & 73.94 & 70.56 & 72.21 & 72.73 & 68.65 & 70.52 & 73.68 & 70.57 & 71.98 \\ UIE & T5-base (220M) & 76.51 & 73.02 & 74.72 & 72.17 & 69.684 & 70.98 & 72.03 & 68.74 & 70.34 \\ LaSUE & & 79.62 & 78.04 & 78.82 & 77.01 & 74.26 & 75.61 & 76.21 & 73.75 & 74.96 \\ Text2NKG (ours) & BERT-base (110M) & **86.20** & **79.25** & **82.58** & **86.72** & **78.94** & **82.64** & **83.53** & **86.59** & **85.03** \\ \hline \multicolumn{8}{c}{**Text2Event**} \\ \hline UIE & & 75.58 & 72.39 & 75.79 & 73.21 & 70.85 & 72.01 & 75.28 & 72.73 & 73.98 \\ UIE & T5-large (770M) & 79.38 & 74.69 & 76.66 & 74.47 & 71.84 & 73.14 & 74.57 & 71.93 & 73.22 \\ LaSUE & & 81.29 & 79.54 & 80.40 & 79.37 & 76.63 & 7 ### Main Results (RQ1) The experimental results of proposed Text2NKG and other baselines with both BERT-base and BERT-large encoders can be found in Table 2 for the fine-grained n-ary relation extraction in hyper-relational schema. We can observe that Text2NKG shows a huge improvement over the existing optimal model CubeRE on both the dev and test datasets of HyperRED. The \(F_{1}\) score is improved by 19.66 percentage points in the dev set and 18.60 percentage points in the test set with the same BERT-base encoder, and 16.40 percentage points in the dev set and 17.06 percentage points in the test set with the same BERT-large encoder, reflecting Text2NKG's excellent performance. Figure 4(a) and 4(b) intuitively show the changes of evaluation metrics and answers of facts in the dev set during the training of Text2NKG. It is worth noting that Text2NKG exceeds 90% in precision accuracy, which proves that the model can obtain very accurate n-ary relational facts and provides a good guarantee for the quality of fine-grained NKG construction. ### Results on Various NKG Schemas (RQ2) As shown in Table 3, besides hyper-relational schema, Text2NKG also accomplishes the tasks of fine-grained n-ary relation extraction in three other different NKG schemas on HyperRED, which demonstrates good utility. In the added tasks of n-ary relation extraction for event-based, role-based, and hypergraph-based schemas, since no model has done similar experiments at present, we used event extraction or unified extraction methods such as Text2Event (Lu et al., 2021), UIE (Lu et al., 2022), and LasUIE (Fei et al., 2022) for comparison. We found that Text2NKG still works best in these schemas, which demonstrates good versatility. ### Ablation Study (RQ3) Data augmentation (DA), null-label weight hyperparameter (\(\alpha\)), and hetero-ordered merging (HM) are the three main components of Text2NKG. For the different Text2NKG variants as shown in Table 2, it can be observed that DA, \(\alpha\), and HM all contribute to the accurate results of our complete model. 
By comparing the differences, we find that HM is most effective by combining the probabilities of labels of different orders, followed by DA and \(\alpha\). ### Analysis of Null-label Weight Hyperparameters (RQ4) We compared the effect for different null-label weight hyperparameters (\(\alpha\)). As shown in Figure 4(c), the larger the \(\alpha\), the greater the learning weight of null-label compared with other tables, the more relations are predicted as null-label. After filtering out the facts having null-label, fewer facts are extracted, so the precision is generally higher, and the recall is generally lower. The smaller the \(\alpha\), the more relations are predicted as non-null labels, thus extracting more n-ary relation facts, so the recall is generally higher, and the precision is generally lower. Comparing the results of \(F_{1}\) values for different \(\alpha\), it is found that \(\alpha=0.01\) works best. When applied in practice, the hyperparameter \(\alpha\) can be adjusted according to specific needs to obtain the best results. Figure 4: (a) Precision, Recall, and \(F_{1}\) changes in the dev set during the training of Text2NKG. (b) The changes of the number of true facts, the number of predicted facts, and the number of predicted accurate facts during the training of Text2NKG. (c) Precision, Recall, and \(F_{1}\) results on different null-label hyperparameter (\(\alpha\)) settings. (d) The changes of the number of extracted n-ary relation extraction in different arity. ### Analysis of N-ary Relation Extraction in Different Arity (RQ5) We use output merging to address the dynamic changes in the number of elements in n-ary relational facts. For instance, in the hyper-relational fact (Einstein, educated_at, University of Zurich, degree: Doctorate degree, major: Physics), the Text2NKG algorithm allows us to extract two 3-ary atomic facts: (Einstein, educated_at, University of Zurich, degree: Doctorate degree) and (Einstein, educated_at, University of Zurich, major: Physics). These are then merged based on the same primary triple (Einstein, educated_at, University of Zurich) to form a 4-ary fact. The same principle applies to facts of higher artities. Figure 4(d) shows the number of n-ary relational facts extracted after output merging and the number of the answer facts in different arity during training of Text2NKG on the dev set. We find that, as the training proceeds, the final output of Text2NKG converges to the correct answer in terms of the number of complete n-ary relational facts in each arity, achieving implementation of n-ary relation extraction in indefinite arity unsupervised, with good scalability. ### Case Study (RQ6) Figure 5 shows a case study of n-ary relation extraction by a trained Text2NKG. For a natural language sentence, "He was born in Skirpenbeck, near York and attended Pocklin.", four structured n-ary relation extraction can be obtained by Text2NKG according to the requirements. Taking the hyper-relational schema for an example, Text2NKG can successfully extract one n-ary relational fact consisting of a main triple [He, educated at, Pocklington], and two auxiliary key-value pairs {start time:1936}, {end time:1943}. This intuitively validates the practical performance of Text2NKG on the fine-grained n-ary relation extraction to better contribute to the NKG construction. 
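The arity merging discussed above (RQ5) amounts to grouping 3-ary facts by their shared main triple; a small sketch, with an illustrative tuple layout, is shown below using the Einstein example.

```python
from collections import defaultdict

def merge_hyper_relational(facts_3ary):
    """Merge 3-ary facts (head, relation, tail, key, value) that share the
    same main triple into one n-ary fact with all auxiliary key-value pairs."""
    grouped = defaultdict(list)
    for head, rel, tail, key, value in facts_3ary:
        grouped[(head, rel, tail)].append((key, value))
    return [(h, r, t, sorted(kvs)) for (h, r, t), kvs in grouped.items()]

facts = [
    ("Einstein", "educated_at", "University of Zurich", "degree", "Doctorate degree"),
    ("Einstein", "educated_at", "University of Zurich", "major", "Physics"),
]
print(merge_hyper_relational(facts))
# [('Einstein', 'educated_at', 'University of Zurich',
#   [('degree', 'Doctorate degree'), ('major', 'Physics')])]
```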
### Comparison with ChatGPT and GPT-4 (RQ7) As shown in Table 2 and Table 3, we compared the extraction performance of the supervised Text2NKG with the unsupervised ChatGPT and GPT-4 under the four NKG schemas. We found that these large language models cannot accurately distinguish the closely related relations in the fine-grained NKG relation repository, resulting in F1 scores of roughly 10%-15%, which is much lower than the performance of Text2NKG. On the other hand, the limitation of Text2NKG is that its performance is confined to the supervised training regime. Therefore, in future improvements and practical applications, we suggest combining small supervised models with large unsupervised models to better balance the cold-start and fine-grained accuracy problems in NKG construction; further discussion is provided in Appendix G. ## 6 Conclusion In this paper, we propose Text2NKG, a novel fine-grained n-ary relation extraction framework for n-ary relational knowledge graph (NKG) construction. Experimental results show that Text2NKG outperforms other baselines on fine-grained n-ary relation extraction tasks, with nearly 20 percentage points of improvement in \(F_{1}\) scores. Moreover, Text2NKG supports n-ary relation extraction in four schemas: hyper-relational, event-based, role-based, and hypergraph-based. Meanwhile, we extend the HyperRED dataset to a fine-grained n-ary relation extraction benchmark in four schemas. Figure 5: Case study of Text2NKG’s n-ary relation extraction with four schemas on HyperRED. #### Acknowledgments This work is supported by the National Science Foundation of China (Grant No. 62176026) and the Beijing Natural Science Foundation (M22009). This work is also supported by the BUPT Excellent Ph.D. Students Foundation and the BUPT Postgraduate Innovation & Entrepreneurship Project led by Haoran Luo.
2306.17267
Fast and Robust State Estimation and Tracking via Hierarchical Learning
Fast and reliable state estimation and tracking are essential for real-time situation awareness in Cyber-Physical Systems (CPS) operating in tactical environments or complicated civilian environments. Traditional centralized solutions do not scale well whereas existing fully distributed solutions over large networks suffer slow convergence, and are vulnerable to a wide spectrum of communication failures. In this paper, we aim to speed up the convergence and enhance the resilience of state estimation and tracking for large-scale networks using a simple hierarchical system architecture. We propose two ``consensus + innovation'' algorithms, both of which rely on a novel hierarchical push-sum consensus component. We characterize their convergence rates under a linear local observation model and minimal technical assumptions. We numerically validate our algorithms through simulation studies of underwater acoustic networks and large-scale synthetic networks.
Connor Mclaughlin, Matthew Ding, Deniz Erdogmus, Lili Su
2023-06-29T19:07:17Z
http://arxiv.org/abs/2306.17267v2
# Fast and Robust State Estimation and Tracking via Hierarchical Learning ###### Abstract Fully distributed estimation and tracking solutions for large-scale multi-agent networks suffer slow convergence and are vulnerable to network failures. In this paper, we aim to speed up the convergence and enhance the resilience of state estimation and tracking using a simple hierarchical system architecture wherein agents are clustered into smaller networks, and a parameter server exists to aid the information exchanges among networks. The information exchange among networks is expensive and occurs only once in a while. We propose two "consensus + innovation" algorithms for the state estimation and tracking problems, respectively. In both algorithms, we use a novel hierarchical push-sum consensus component. For state estimation, we use dual averaging as the local innovation component. State tracking is much harder to tackle in the presence of dropping-link failures, and the standard integration of the consensus and innovation approaches is no longer applicable. Moreover, dual averaging is no longer feasible. Our algorithm introduces a pair of additional variables per link, ensures that the relevant local variables evolve according to the state dynamics, and uses projected local gradient descent as the local innovation component. We also characterize the convergence rates of both algorithms under a linear local observation model and minimal technical assumptions. We numerically validate our algorithms through simulations of both the state estimation and tracking problems. ## I Introduction The state estimation problem determines the internal state of a system using observations, while the tracking problem does the same with a time-varying system. One real-life application of distributed state estimation is using meters to estimate voltage phasors within smart power grids [1, 2]. In the past couple of decades, fully distributed solutions have attracted much attention. However, as the scale of the multi-agent network increases, existing fully distributed solutions start to lag behind due to crucial real-world challenges such as slow information propagation and network communication failures. In this paper, we use hierarchical learning to improve both the convergence speed and robustness. Hierarchical algorithms have been considered in the literature [4, 6, 8, 19]. Focusing on electric power systems, a master-slave architecture was considered in early works [6, 19], which decomposed a large-scale composite system into subsystems that work in concert with a central server. Carefully examining the dynamics in electric power systems, both [19] and [6] proposed two-level hierarchical algorithms with well-calibrated local pre-processing in the first level and effective one-shot aggregation in the second level. Different from [6, 19], we go beyond electric power systems and our algorithms do not require complicated pre-processing. Using a hierarchical system architecture to speed up convergence, compared with a single large network, was considered in [4, 8]. Epstein et al. [4] studied the simpler problem of average consensus and mathematically analyzed the benefits in convergence speed. However, the consensus errors of the method in [4] do not decay to zero as \(t\rightarrow\infty\); there is a non-diminishing term in their error convergence rate (see [4, Theorem 8] for details).
This is because, once the execution of their algorithm moves away from the lowest layer, there is no further drift in reducing the residual errors of that layer. As can be seen from our analysis, for consensus-based distributed optimization algorithms to work, it is crucial to guarantee that consensus errors quickly decay to zero because, as the algorithm executes, the consensus errors for the (stochastic) gradients in each round accumulate. Hence, non-diminishing consensus errors will lead to the accumulated errors blowing up to \(\infty\). Hou and Zheng [8] used a hierarchical "clustered" view for average consensus. Nodes are clustered into small groups, and at each iteration receive estimates from peers within the same group as well as group information of other groups. Here, the group information is defined as a weighted combination of the local estimates at the group members. Unfortunately, such information is often expensive to obtain. Fujimori et al. [5] considered achieving consensus with local nonlinear controls, wherein the notion of consensus departs from classical average consensus. Specifically, with their methods, the value that each node agrees on may not be the average of the initial conditions/values; see the numerical results in [5] for an instance in this regard. Wang et al. [21] considered the consensus tracking problem and designed a well-calibrated fusion strategy at the central server. Yet, this method only works when, on average, a non-trivial portion of the states is observable locally, which does not hold in our setup. On the technical side, [13] is closest to our work in that it considered general distributed optimization in the presence of packet-dropping links, agent activation asynchrony, and network communication delay. They used similar algorithmic techniques as in [17] to achieve resilience. Different from [13], we consider the concrete state estimation and tracking problems and a less harsh network environment (i.e., synchronous updates and no communication delay). Nevertheless, we manage to relax the strong-convexity assumption in [13] and consider time-varying global objectives. To the best of our knowledge, algorithm resilience in hierarchical systems against non-benign network failures is largely overlooked. ### _Contributions_ We consider a simple hierarchical system architecture (depicted in Fig.1) wherein the agents are decomposed into \(M\) small networks, and a parameter server exists to aid the information exchanges among networks. The information exchange across networks is expensive and should occur only once in a while. To incorporate robustness, we focus on challenging communication failures wherein a communication link may drop the transmitted messages unexpectedly and without notifying the sender. We do not impose any statistical patterns on the link failures; instead, we only require a link to function properly at least once during a time window. To the best of our knowledge, no existing relevant work considered such strong link failures. We propose two hierarchical "consensus + innovation" algorithms for the state estimation and tracking problems, respectively. Under our algorithms, the parameter server does simple averaging over a small subset of agents. The state estimation problem can be viewed as a special case of the tracking problem. Nevertheless, in this paper, we study the state estimation and the general tracking problems separately; our result for the state estimation problem gives a better convergence rate.
In both of these algorithms, we use a novel hierarchical push-sum consensus component wherein, for each subnetwork, only a designated agent needs to exchange messages with the parameter server, and such exchange only occurs once in a while. For state estimation, we use dual averaging as the local innovation component. State tracking is much harder to tackle in the presence of dropping-link failures, and the standard integration of consensus and dual averaging is no longer feasible. Our algorithm introduces a pair of additional variables per link, ensures that the relevant local variables evolve according to the state dynamics, and uses projected local gradient descent as the local innovation component. We also characterize the convergence rates of both algorithms under a linear local observation model and minimal technical assumptions. Finally, we provide simulation results to illustrate the robustness and communication efficiency of our method. ## II Problem Formulation ### _System Model_ We consider a hierarchical system architecture in which the agents are clustered into \(M\) sub-networks, and a parameter server (PS) exists to aid the information exchanges among sub-networks. Similar system architecture is adopted in the literature [11, 12, 20]. The connection within each multi-agent network \(S_{i}\) is time-varying and is formally represented by graphs \(G(\mathcal{V}_{i},\mathcal{E}_{i}[t])\), where \(\mathcal{V}_{i}=\{v_{1}^{i},\cdots,v_{n_{i}}^{i}\}\) is the node set and \(\mathcal{E}_{i}[t]\) is the set of all directed edges. Specifically, there exists \(\mathcal{E}_{i}\) such that \(\mathcal{E}_{i}[t]\subseteq\mathcal{E}_{i}\) for each \(t\). Let \(N:=\sum_{i=1}^{M}n_{i}\). Agents in the same sub-network can exchange messages subject to the given communication network \(G(\mathcal{V}_{i},\mathcal{E}_{i}[t])\) at time \(t\). No messages can be exchanged directly between agents in different sub-networks. In addition, the PS has the freedom to query and push messages to any agent. Nevertheless, such message exchange is costly and needs to be sparse. For an arbitrary agent \(j\) in network \(S_{i}\), let \(\mathcal{I}_{j}^{i}[t]=\{k\mid(k,j)\in\mathcal{E}_{i}[t]\}\) and \(\mathcal{O}_{j}^{i}[t]=\{k\mid(j,k)\in\mathcal{E}_{i}[t]\}\), respectively, be the sets of incoming and outgoing neighbors of agent \(j\). For notational convenience, we denote \(d_{j}^{i}[t]=\left|\mathcal{O}_{j}^{i}[t]\right|\). Throughout this paper, we use the terminology "node" and "agent" interchangeably. ### _Threat Model_ We follow the network fault model adopted in [15] to consider packet-dropping link failures. Specifically, any communication link may unexpectedly drop a packet transmitted through it, and the sender is unaware of such packet loss. If a link successfully delivers messages at communication round \(t\), we say this link is _operational_ at round \(t\). **Assumption 1**: _We assume that a link in \(\mathcal{E}_{i}\) is operational at least once every \(B\) communication rounds, for some positive constant \(B\), for each \(i=1,\cdots,M\)._ **Remark 1**: _As observed in [13, 15, 18], the above threat model is much harder to tackle compared with the ones wherein each agent is aware of the message delivery status.
Though assuming real-time knowledge of the out-going degree is reasonable, in harsh and versatile deployment environments such as undersea, the communication channels between two neighboring entities may suffer strong interference, leading to rapidly changing channel conditions and, consequently, unsuccessful message delivery._ Fig. 1: The System Architecture of Hierarchical Learning ### _State Dynamics and Local Observation Models_ State dynamics. Each agent is interested in learning the \(d\)-dimensional state of a moving target \(w^{*}[t]\) that follows the dynamics \[w^{*}[t]=Aw^{*}[t-1]\quad\forall\ t\geq 1, \tag{1}\] where \(A\in\mathbb{R}^{d\times d}\) is known to each agent, and \(w^{*}[t]\in\mathcal{W}\subseteq\mathbb{R}^{d}\). For example, each autonomous vehicle needs to keep track of neighboring vehicles, where the global state contains the statuses of the spatial position, the velocity, and the acceleration of the target. Clearly, such state dynamics can be approximated by a known linear matrix. An interesting special case of Eq.(1) is when \(A=I\), under which the target state is time-invariant, i.e., \(w^{*}[t]=w^{*}[0]\) for all \(t\). This special case is often referred to as the state estimation problem [2]. Local observation. In every iteration \(t\), each agent locally takes measurements of the underlying truth \(w^{*}[t]\). We focus on a linear observation model which is commonly adopted in the literature [9, 14, 16]. For a specific agent \(j\in\mathcal{V}_{i}\) at time \(t\): \[y_{j}^{i}[t]:=H_{j}^{i}w^{*}[t]+\xi_{j}^{i}[t] \tag{2}\] where \(H_{j}^{i}\) is the local observation matrix, and \(\xi_{j}^{i}[t]\) is the observation noise. We assume that the observation noise \(\xi_{j}^{i}[t]\) is independent across time \(t\) and across agents \(j\). In addition, \(\mathbb{E}\left[(\xi_{j}^{i}[t])^{\top}\xi_{j}^{i}[t]\right]\leq\sigma_{j}^{i}\). In practice, the observation matrix \(H_{j}^{i}\) is often fat (i.e., it has fewer rows than columns). Thus, to correctly estimate/track \(w^{*}[t]\), agents must collaborate with others. ## III Hierarchical Average Consensus in the Presence of Packet-dropping Failures We introduce an average consensus algorithm (Algorithm 1), which extends our prior work [15] to the hierarchical system architecture. The algorithm in [15] can be viewed as a variant of that in [17], wherein finite-time convergence is not guaranteed and the corresponding rates are missing. The steps up to line 13 of Algorithm 1 are the parallel execution of the fast robust push-sum [15] over the \(M\) subnetworks. Lines 14-23 describe the novel information fusion across the subnetworks, which only occurs once every \(\Gamma\) iterations. Similar to the standard Push-Sum [10], in addition to the primary variable \(z_{j}^{i}\), each agent \(j\) keeps a mass variable \(m_{j}^{i}\) to correct the possible bias caused by the graph structure, and uses the ratio \(z_{j}^{i}/m_{j}^{i}\) to estimate the average consensus. The correctness of push-sum relies crucially on mass preservation, i.e., \(\sum_{i=1}^{M}\sum_{j=1}^{n_{i}}m_{j}^{i}[t]=N\) for all \(t\). The variables \(\sigma\), \(\widetilde{\sigma}\), \(\rho\), and \(\widetilde{\rho}\) are introduced to recover the dropped messages. Specifically, \(\sigma_{j}^{i}[t]\) and \(\widetilde{\sigma}_{j}^{i}[t]\) are used to record how much value and mass agent \(j\) (in subnetwork \(i\)) has sent to each of its outgoing neighbors up to time \(t\).
Corresponding, \(\rho_{j^{\prime}j}^{i}[t]\) and \(\widetilde{\rho}_{j^{\prime}j}^{i}[t]\) are used to record how much value and mass have been received by agent \(j\) through the link \((j^{\prime}j)\in\mathcal{E}_{i}\). On the technical side, we use augmented graphs (detailed in Definition 1 of Appendix A) to show convergence. To control the trajectory smoothness of the \(z_{j}^{i}/m_{j}^{i}\) (at both normal agents and virtual agents), in each iteration, both \(z\) and \(m\) are updated twice in lines 11 to 13. ``` 1Initialization: For each sub-network \(i=1,\cdots,M\): \(z_{j}^{i}[0]=w_{j}^{i}\in\mathbb{R}^{d}\), \(m_{j}^{i}[0]=1\in\mathbb{R}\), \(\sigma_{j}^{i}[0]=\mathbf{0}\in\mathbb{R}^{d}\), \(\widetilde{\sigma}_{j}^{i}[0]=0\in\mathbb{R}\), and \(\rho_{j^{\prime}j}^{i}[0]=\mathbf{0}\in\mathbb{R}^{d}\), \(\widetilde{\rho}_{j^{\prime}j}^{i}[0]=0\in\mathbb{R}\) for each incoming link, i.e., \(j^{\prime}\in\mathcal{I}_{j}^{i}\). 2 In parallel, each agent in parallel does: 3for\(t\geq 1\)do 4\(\sigma_{j}^{i+}[t]\gets\sigma_{j}^{i}[t-1]+\frac{z_{j}^{i}[t-1]}{d_{j}[ t]+1}\), \(\widetilde{\sigma}_{j}^{i+}[t]\leftarrow\widetilde{\sigma}_{j}^{i}[t-1]+\frac{m_{j} ^{i}[t-1]}{d_{j}^{i}[t]+1}\); 5 Broadcast \(\left(\sigma_{j}^{i+}[t],\widetilde{\sigma}_{j}^{i+}[t]\right)\) to outgoing neighbors; 6foreach incoming link \((j^{\prime},j)\in\mathcal{E}_{i}[t]\)do 7ifmessage \(\left(\sigma_{j^{\prime}j}^{i+}[t],\widetilde{\sigma}_{j^{\prime}}^{i+}[t]\right)\) is receivedthen 8\(\rho_{j^{\prime}j}^{i}[t]\leftarrow\sigma_{j^{\prime}}^{i+}[t]\), \(\widetilde{\rho}_{j^{\prime}j}^{i}[t]\leftarrow\widetilde{\sigma}_{j^{\prime}} ^{i+}[t]\); 9else 10\(\rho_{j^{\prime}j}^{i}[t]\leftarrow\rho_{j^{\prime}j}^{i}[t-1]\), \(\widetilde{\rho}_{j^{\prime}j}^{i}[t]\leftarrow\widetilde{\rho}_{j^{\prime}j} ^{i}[t-1]\); 11\(z_{j}^{i+}[t]\leftarrow\frac{z_{j}^{i}[t-1]}{d_{j}^{i}[t]+1}+\sum_{j^{\prime} \in\mathcal{I}_{j}^{i}[t]}\left(\rho_{j^{\prime}j}^{i}[t]-\rho_{j^{\prime}j}^{i }[t-1]\right)\); 12\(m_{j}^{i+}[t]\leftarrow\frac{m_{j}^{i}[t-1]}{d_{j}^{i}[t]+1}+\sum_{j^{\prime} \in\mathcal{I}_{j}^{i}[t]}(\widetilde{\rho}_{j^{\prime}j}^{i}[t]-\widetilde{ \rho}_{j^{\prime}j}^{i}[t-1])\). 13\(\sigma_{j}^{i}[t]\leftarrow\sigma_{j}^{i+}[t]+\frac{z_{j}^{i+}[t]}{d_{j}^{i}[t ]+1}\), \(\widetilde{\sigma}_{j}^{i}[t]\leftarrow\widetilde{\sigma}_{j}^{i+}[t]+\frac{m_{j} ^{i}[t]}{d_{j}^{i}[t]+1}\), \(z_{j}^{i}[t]\leftarrow\frac{z_{j}^{i+}[t]}{d_{j}^{i}[t]+1}\), \(m_{j}^{i}[t]\leftarrow\frac{m_{j}^{i}[t]}{d_{j}^{i}[t]+1}\); 14if\(j\) is a designated agent of network \(S_{i}\)then 15if\(t\mod\Gamma=0\)then 16 Send \(\frac{1}{2}z_{j}^{i}[t]\) and \(\frac{1}{2}m_{j}^{i}[t]\) to the PS; 17 Upon receiving messages from the PS do update \(z_{j}^{i}[t]\leftarrow\frac{1}{2}z_{j}^{i}[t]+\frac{1}{2M}\sum_{i=1}^{M}z_{i_{0} }^{i}[t]\); 18\(m_{j}^{i}[t]\leftarrow\frac{1}{2}m_{j}^{i}[t]+\frac{1}{2M}\sum_{i=1}^{M}m_{i_{0} }^{i}[t]\); 19if\(t\mod\Gamma=0\)then 20 The PS does the following: 21 Wait to receive \(z_{0}^{i}[t]\) and \(m_{i_{0}}^{i}[t]\) from each designated agent of the \(M\) networks; 22 Compute and send \(\frac{1}{M}\sum_{i=1}^{M}\frac{1}{2}z_{i_{0}}^{i}[t]\) and \(\frac{1}{M}\sum_{i=1}^{M}\frac{1}{2}m_{i_{0}}^{i}[t]\) to all designated agents \(i_{0}\) for \(i=1,\cdots,M\). ``` **Algorithm 1**Hierarchical Push-Sum (HPS) **Assumption 2**: _Each network \((\mathcal{V}_{i},\mathcal{E}_{i})\) is strongly connected for \(i=1,\cdots,M\)._ Denote the diameter of \(G(\mathcal{V}_{i},\mathcal{E}_{i})\) as \(D_{i}\). 
Let \(D^{*}:=\max_{i\in[M]}D_{i}\). Let \(\beta_{i}=\frac{1}{\max_{j\in\mathcal{V}_{i}}(d_{j}^{i}+1)^{2}}\). **Theorem 1**: _Choose \(\Gamma=BD^{*}\). Suppose that Assumptions 1 and 2 hold, and that \(t\geq 2\Gamma\). Then_ \[\left\|\frac{z_{j}^{i}[t]}{m_{j}^{i}[t]}-\frac{1}{N}\sum_{i=1}^{M}\sum_{j=1}^{n_{i}}w_{j}^{i}\right\|_{2}\leq\frac{4M^{2}\sum_{i=1}^{M}\sum_{j=1}^{n_{i}}\left\|w_{j}^{i}\right\|_{2}\gamma^{\lfloor\frac{t}{2\Gamma}\rfloor-1}}{\left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B}N},\] _where \(\gamma=1-\frac{1}{4M^{2}}\left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B}\)._ Henceforth, for ease of exposition, we adopt the simplification that \(\lfloor t/2\Gamma\rfloor-\lceil r/2\Gamma\rceil=(t-r)/2\Gamma\). Such simplification does not affect the order of the convergence rate; the exact expression can be recovered with straightforward bookkeeping of the floor and ceiling operations in the calculation. Theorem 1 says that, despite packet-dropping link failures and sparse communication between the networks and the PS, the consensus error \(\left\|\frac{z_{j}^{i}[t]}{m_{j}^{i}[t]}-\frac{1}{N}\sum_{i=1}^{M}\sum_{j=1}^{n_{i}}w_{j}^{i}\right\|_{2}\) decays to 0 exponentially fast. Clearly, the more reliable the network (i.e., smaller \(B\)) and the more frequent the across-network information fusion (i.e., smaller \(\Gamma\)), the faster the convergence rate. **Remark 2**: _Partitioning the agents into \(M\) subnetworks immediately leads to smaller network diameters \(D^{*}\). Hence, compared with a gigantic single network, the term \(\left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B}\) for the \(M\) sub-networks is significantly larger, i.e., faster convergence._ **Remark 3**: _It turns out that our bound in Theorem 1 is loose in quantifying the total number of global communication rounds. Specifically, for any given \(\epsilon>0\), to reduce the error to \(O(\epsilon)\), based on the bound in Theorem 1, it takes \(t\geq\Omega\left(\Gamma\log\epsilon/\log\gamma\right)\), i.e., a larger \(\Gamma\) leads to slower convergence to \(O(\epsilon)\). However, our preliminary simulation results (presented in Fig. 3) indicate that if the cost of global communication is significant enough, then a larger \(\Gamma\) may converge faster in terms of total communication delay._ ## IV State Estimation In this section, we study the special case of Eq.(1) when \(A=I\), i.e., the state estimation problem. We use a "consensus" + "innovation" approach with Algorithm 1 as the consensus component and dual averaging as the innovation component.
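To make the mechanics of Algorithm 1 and Theorem 1 concrete, the following self-contained NumPy sketch runs hierarchical push-sum on \(M\) directed rings with randomly dropped packets and one designated agent (index 0) per subnetwork. The ring topology, the i.i.d. drop model (which only approximates Assumption 1), and the constants are illustrative choices, not part of the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

M, n, d = 3, 5, 2               # subnetworks, agents per subnetwork, state dimension
GAMMA, T, P_DROP = 4, 200, 0.2  # fusion period, iterations, per-link drop probability

w0 = rng.normal(size=(M, n, d))            # initial local values
target = w0.mean(axis=(0, 1))              # the average the ratios should reach

# Each subnetwork is a directed ring j -> (j+1) % n, so every agent has one
# outgoing neighbour (d_j = 1) besides the implicit self loop.
z, m = w0.copy(), np.ones((M, n))
sigma, sigma_t = np.zeros((M, n, d)), np.zeros((M, n))   # cumulative sent value / mass
rho, rho_t = np.zeros((M, n, d)), np.zeros((M, n))       # cumulative received on link (j-1 -> j)

for t in range(1, T + 1):
    deg = 1
    sigma_plus = sigma + z / (deg + 1)
    sigma_t_plus = sigma_t + m / (deg + 1)

    new_rho, new_rho_t = rho.copy(), rho_t.copy()
    for i in range(M):
        for j in range(n):
            sender = (j - 1) % n
            if rng.random() > P_DROP:                    # this link delivered its packet
                new_rho[i, j] = sigma_plus[i, sender]
                new_rho_t[i, j] = sigma_t_plus[i, sender]

    z_plus = z / (deg + 1) + (new_rho - rho)
    m_plus = m / (deg + 1) + (new_rho_t - rho_t)

    sigma = sigma_plus + z_plus / (deg + 1)              # second within-iteration update
    sigma_t = sigma_t_plus + m_plus / (deg + 1)
    z, m = z_plus / (deg + 1), m_plus / (deg + 1)
    rho, rho_t = new_rho, new_rho_t

    if t % GAMMA == 0:                                   # hierarchical fusion via the PS
        ps_z = z[:, 0].mean(axis=0) / 2                  # designated agents have index 0
        ps_m = m[:, 0].mean() / 2
        z[:, 0] = z[:, 0] / 2 + ps_z
        m[:, 0] = m[:, 0] / 2 + ps_m

estimates = z / m[..., None]
print("max consensus error:", np.abs(estimates - target).max())
```

As Theorem 1 suggests, the reported consensus error shrinks as \(t\) grows despite the dropped packets and the sparse exchanges with the parameter server.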
Specifically, the corresponding innovation lines of pseudo code are added right after line 12 inside the outer **for**-loop in Algorithm 1, i.e., at the end of the for-loop.
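As a rough, non-authoritative illustration of such an innovation step (a sketch under assumptions, not the exact lines added to Algorithm 1), one can fold a local least-squares gradient into the push-sum value and map the bias-corrected ratio back to a primal estimate via a Euclidean prox; the gradient form \(H_{j}^{\top}(H_{j}w_{j}-y_{j})\), the regularizer \(\psi(w)=\frac{1}{2}\|w\|_{2}^{2}\), and the ball-shaped \(\mathcal{W}\) are all illustrative assumptions.

```python
import numpy as np

def project_ball(w, radius):
    """Euclidean projection onto W = {w : ||w||_2 <= radius} (illustrative W)."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def dual_averaging_innovation(z_j, m_j, H_j, y_j, w_j, eta, radius):
    """One illustrative dual-averaging innovation step for agent j:
    accumulate the local least-squares gradient into the push-sum value z_j,
    then recover the primal estimate from the bias-corrected ratio z_j / m_j."""
    grad = H_j.T @ (H_j @ w_j - y_j)      # assumed local stochastic gradient
    z_j = z_j + grad                      # innovation folded into the consensus value
    w_j = project_ball(-eta * (z_j / m_j), radius)
    return z_j, w_j
```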
_Remark 4:_ Choosing \(\eta[t]=\frac{1}{\sqrt{t}}\), the bound becomes \[\left\|\widehat{v}_{j}^{i}[T]-w^{*}\right\|_{2}^{2}\leq\frac{1}{\lambda_{\min}}\left(\frac{NL_{0}^{2}}{2\sqrt{T}}+\frac{N}{\sqrt{T}}R^{2}\right.\] \[+\left.\frac{4M^{2}L_{0}^{2}\gamma^{\frac{1}{2\Gamma}}}{\left(1-\gamma^{\frac{1}{2\Gamma}}\right)\left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B}}\frac{1}{\sqrt{T}}+4NLR\sqrt{\frac{\log\frac{1}{\delta}}{T}}\right).\] ## V State Tracking In the state tracking problem, the agents try to collaboratively track \(w^{*}[t]\). We present the full description of our algorithm in Algorithm 2. Our algorithm uses projected gradient descent as the local innovation component. In each round, in line 4, each agent gets a new observation \(y_{j}^{i}[t]\). However, the local stochastic gradient is computed on the measurement obtained in the previous round \(t-1\). As can be seen from our analysis, we use such a one-step setback to align the impacts of the global dynamics \(A\) with the evolution of the relevant parameters. In addition, in line 18, we apply \(A\) to the local \(z\) sequence update. Recall that if a link does not function properly, the sent value and mass are stored in virtual nodes (the nodes that correspond to the edges). Hence, we apply \(A\) to the auxiliary variables \(\sigma\) and \(\rho\) as well. Specifically, we update \(\rho_{j^{\prime}j}^{i}\) and \(\widetilde{\rho}_{j^{\prime}j}^{i}\) twice - the first time in lines 9-12, and the second time in lines 16 and 17. In line 15, we apply \(A\) to the original update of \(\sigma\). Notably, we apply \(A\) to \(z\) and to the auxiliary variables that are relevant to \(z\) only; we do not apply \(A\) to the mass update. In line 19, \(\prod_{\mathcal{W}}[\cdot]\) is an operator that projects any given \(w\in\mathbb{R}^{d}\) onto \(\mathcal{W}\). This projection is used to ensure boundedness of the stochastic gradients. In addition to the aforementioned assumptions, the following assumptions will be used in our analysis. **Assumption 6**: _The global linear dynamic matrix \(A\) is positive semi-definite with \(\left\|A\right\|_{2}\leq 1\)._ Notably, \(\left\|\boldsymbol{I}\right\|_{2}=1\), satisfying Assumption 6. **Assumption 7**: _The set \(\mathcal{W}\) contains \(w^{*}[t]\) for all \(t\)._ We define \(\bar{z}[t]=-\frac{1}{N}\sum_{i=1}^{M}\sum_{j=1}^{n_{i}}z_{j}^{i}[t]\), which differs from the standard aggregation by a "-" sign. **Theorem 3**: _Suppose that Assumptions 1-5 hold. Choose \(\eta[t]=\frac{1}{\lambda_{1}t}\) for \(t\geq 1\), and \(\eta[0]=\frac{1}{\lambda_{1}}\). Suppose that Assumption 6 holds. Let \(b=\left\|A\right\|_{2}\gamma^{\frac{1}{2\Gamma}}\). Let \(t_{0}=\frac{2}{\log 1/b}\log\left(\frac{2}{\log 1/b}\right)\).
Then_ \[\left\|w_{j}^{i}[t]-\bar{z}[t]\right\|_{2}\leq\begin{cases}\frac{16M^{2}L_{0}}{\left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B}\lambda_{1}(1-b)t},&\text{if }\ t\geq t_{0}\\ \frac{4M^{2}L_{0}t_{0}}{\left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B}\lambda_{1}},&\text{if }\ t<t_{0}\end{cases} \tag{7}\] Moreover, when \(T\geq t_{0}\), for any given \(\delta\in(0,1)\), the following holds with probability at least \((1-\delta)\): \[\left\|w_{j}^{i}[T]-w^{*}[T]\right\|_{2}\leq\left\|w^{*}[0]\right\|_{2}\exp\left(\frac{\lambda_{1}}{\lambda_{d}}\right)\frac{1}{T}\] \[\qquad+\frac{\exp\left(\lambda_{1}/\lambda_{d}\right)4M^{2}L_{0}t_{0}^{2}}{\left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B}\lambda_{1}}\frac{1}{T}\] \[\qquad+\frac{32M^{2}L_{0}\exp\left(\lambda_{1}/\lambda_{d}\right)}{\left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B}\lambda_{1}(1-b)}\frac{\log(T+1)}{T}\] \[\qquad+\frac{2B_{0}\exp\left(\lambda_{1}/\lambda_{d}\right)}{\lambda_{1}}\sqrt{\frac{d}{2T}\log(d/\delta)}, \tag{8}\] where \(\lambda_{1}\geq\cdots\geq\lambda_{d}>0\) are the eigenvalues of \(K\). Proofs can be found in Appendix C. **Remark 5**: _For sufficiently large \(T\), the dominant term in the upper bound of Eq.(8) is \(\sqrt{\frac{d}{2T}\log(d/\delta)}B_{0}\log T\), which arises from the observation noise._ ## VI Numerical Results To better illustrate the benefits of our method, we present simulated results below. We consider a 16-node network with a single parameter server node as shown in figure 2. We model the communication delay between a pair of nodes as a Poisson random variable. Communication within each of the three network clusters is inexpensive, with a cost \(\lambda=\lambda_{local}\). Communication between any individual agent and the parameter server (node 0) is more expensive, with a cost \(\lambda=c\lambda_{local}\). For our experiments we select \(c=2\), that is to say, the expected delay in communication is twice as long with the parameter server as it is for local communications. We consider two separate observation models for state estimation and tracking, respectively. For state estimation, our ground truth state is drawn from a 12-dimensional zero-mean Gaussian with identity covariance. For tracking, we similarly use a 2-dimensional Gaussian vector representing the X and Y coordinates of a moving object, having global dynamics \(A=aI\), with \(a=0.99\). Each agent has an identity observation matrix and i.i.d. Gaussian noise with variance 0.2 added to each dimension of the observation. We use a step size of 0.1 for all experiments. In figure 3, we plot the average state estimation L2 error over time for various drop-link conditions \(B\) and hierarchical synchronization frequency \(\Gamma\). Observe first that our algorithm converges similarly with and without the addition of the noise introduced by dropped links (\(B>1\)). Second, the hierarchical decomposition (\(\Gamma>1\)) significantly speeds up the convergence compared to a "Full Graph" structure (\(\Gamma\) = 1), as we avoid the expensive synchronization delay with the parameter server. For state tracking, we plot the average state estimate across agents within each sub-network over time in figure 4, where darker points indicate more recent positions in time. Fig. 4: State Tracking Simulation. Fig. 3: Graph of Cost for Varying Network and System Configurations. Fig. 2: Simulated Network Structure.
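The tracking experiment described above can be approximated with the short script below. For brevity, the hierarchical push-sum consensus is collapsed into exact averaging across agents and the one-step measurement delay of Algorithm 2 is ignored, so this is only an illustrative simplification of the simulated setup (\(A=0.99I\), identity observation matrices, noise variance 0.2, step size 0.1), not the algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)

d, N, T = 2, 16, 300
A = 0.99 * np.eye(d)                  # global dynamics, a = 0.99
noise_std = np.sqrt(0.2)              # observation noise variance 0.2 per dimension
eta, radius = 0.1, 10.0               # step size and radius of the assumed set W

def project(w):
    nrm = np.linalg.norm(w)
    return w if nrm <= radius else w * (radius / nrm)

w_true = rng.normal(size=d)           # X, Y coordinates of the moving object
w_hat = np.zeros((N, d))              # local estimates, one per agent

for t in range(T):
    w_true = A @ w_true                                   # state dynamics, Eq. (1)
    y = w_true + noise_std * rng.normal(size=(N, d))      # identity H plus noise, Eq. (2)

    # Local innovation: push the estimate through A, then take a projected
    # gradient step on the local least-squares loss (H = I, so grad = w - y).
    w_hat = (A @ w_hat.T).T
    w_hat = np.array([project(w) for w in (w_hat - eta * (w_hat - y))])

    # Consensus, collapsed to exact averaging for brevity; the paper realises
    # this step with the hierarchical push-sum of Algorithm 1.
    w_hat[:] = w_hat.mean(axis=0)

print("final tracking error:", np.linalg.norm(w_hat[0] - w_true))
```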
## Appendix A Hierarchical Consensus ### _Matrix Construction_ The analysis of Theorem 1 relies on the notion of augmented graphs and a compact matrix representation of the dynamics of \(z\) and \(m\) over those augmented graphs. **Definition 1** (Augmented Graph): _[_17_]_ _Given a graph \(G(\mathcal{V},\mathcal{E})\), the augmented graph \(G^{a}(\mathcal{V}^{a},\mathcal{E}^{a})\) is constructed as:_ 1. \(\mathcal{V}^{a}=\mathcal{V}\cup\mathcal{E}\)_:_ \(|\mathcal{E}|\) _virtual agents are introduced, each of which represents a link in_ \(G(\mathcal{V},\mathcal{E})\)_. Let_ \(n_{j^{\prime}j}\) _be the virtual agent corresponding to edge_ \((j^{\prime},j)\)_._ 2. \(\mathcal{E}^{a}\triangleq\mathcal{E}\cup\left\{\left(j^{\prime},n_{j^{\prime} j}\right),(n_{j^{\prime}j},j),\ \forall\ (j^{\prime},j)\in\mathcal{E}\right\}\)_._ An example can be found in Fig. 5. In the original graph (left), the node and edge sets are \(\mathcal{V}=\{1,2,3\}\) and \(\mathcal{E}=\{(1,2),(2,1),(1,3),(3,2)\}\), respectively. The four green nodes in the corresponding augmented graph (right) are the virtual agents and the dashed arrows indicate the added links. We study the information flow in the augmented graphs rather than in the original systems. When a message is not successfully delivered over a link, our Algorithm 1 uses a well-calibrated mechanism to recover such a message and to convert it into a delayed message. Intuitively, we can treat the delayed message as the ones that are first sent to virtual nodes, and then are held for at most \(B-1\) iterations, and finally are released to the destination node. For each subnetwork \(S_{i}\), let \(m_{i}:=|\mathcal{E}_{i}|\) denote the number of edges. Let \(\widetilde{N}:=\sum_{i=1}^{M}\left(n_{i}+m_{i}\right)\). Thus, we construct a matrix \(\boldsymbol{M}[t]\in\mathbb{R}^{\widetilde{N}\times\widetilde{N}}\) as follows. Non-global fusion iterationsFix a network \(S_{i}\). Fix \(t\) be arbitrary iteration such that \(t\mod\Gamma\neq 0\). The matrix construction is the same as that in [15]. For completeness, we present the construction as follows. For each link \((j,j^{\prime})\in\mathcal{E}_{i}[t]\), and \(t\geq 1\), as \[\mathsf{B}^{i}_{(j,j^{\prime})}[t]\triangleq\left\{\begin{array}{ll}1,& \text{if }(j,j^{\prime})\in\mathcal{E}_{i}[t]\text{ and is reliable at time }t;\\ 0,&\text{otherwise.}\end{array}\right. \tag{9}\] Recall that \(z^{i}_{j}\) and \(m^{i}_{j}\) are the value and mass for \(j\in\mathcal{V}^{i}\). For each \((j,j^{\prime})\in\mathcal{E}_{i}[t]\), \[z^{i}_{n_{j^{\prime}j}}[t]\triangleq\sigma^{i}_{j^{\prime}}[t]-\rho^{i}_{j^{ \prime}j}[t],\quad m^{i}_{n_{j^{\prime}j}}[t]\triangleq\widetilde{\sigma}^{i} _{j^{\prime}}[t]-\widetilde{\rho}^{i}_{j^{\prime}j}[t], \tag{10}\] with \(z^{i}_{n_{j^{\prime}j}}[0]=\mathbf{0}\in\mathbb{R}^{d}\) and \(m^{i}_{n_{j^{\prime}j}}[0]=0\in\mathbb{R}\). Intuitively, \(z^{i}_{n_{j^{\prime}j}}[t]\) and \(m^{i}_{n_{j^{\prime}j}}[t]\) are the value and weight that agent \(j^{\prime}\) tries to send to agent \(j\) not not successfully be delivered. 
Let \[\mathbf{M}_{j,j}[t]\triangleq\frac{1}{\left(d^{i}_{j}[t]+1\right) ^{2}},\] \[\mathbf{M}_{j,j^{\prime}}[t]\triangleq\frac{\mathsf{B}^{i}_{(j^{ \prime},j)}[t]}{\left(d^{i}_{j}[t]+1\right)\left(d^{i}_{j^{\prime}}[t]+1\right) },\ \forall\ j^{\prime}\in\mathcal{I}^{i}_{j}[t],\] \[\mathbf{M}_{j,n_{j^{\prime}j}}[t]\triangleq\frac{\mathsf{B}^{i}_{(j ^{\prime},j)}[t]}{d^{i}_{j}[t]+1},\ \forall\ j^{\prime}\in\mathcal{I}^{i}_{j}[t],\] \[\mathbf{M}_{n_{j^{\prime}j},j^{\prime}}[t]\triangleq\frac{1}{ \left(d^{i}_{j^{\prime}}[t]+1\right)^{2}}+\frac{1-\mathsf{B}^{i}_{(j^{\prime},j )}[t]}{d^{i}_{j^{\prime}}[t]+1},\] \[\mathbf{M}_{n_{j^{\prime}j},k}[t]=\frac{\mathsf{B}^{i}_{(k,j^{ \prime})}[t]}{\left(d^{i}_{k}[t]+1\right)\left(d^{i}_{j^{\prime}}[t]+1\right)}, \quad\forall\ k\in\mathcal{I}^{i}_{j^{\prime}}[t],\] \[\mathbf{M}_{n_{j^{\prime}j},n_{k^{\prime}j}}[t]\triangleq\frac{ \mathsf{B}^{i}_{(k,j^{\prime})}[t]}{d^{i}_{j^{\prime}}[t]+1},\ \ \forall\ k\in\mathcal{I}^{i}_{j^{\prime}}[t];\] \[\mathbf{M}_{n_{j^{\prime}j}n_{j^{\prime}j}}[t]\triangleq 1-\mathsf{B}^{i}_{(j^{\prime},j)}[t].\] and any other entry in \(\mathbf{M}[t]\) be zero. It is easy to check that the obtained matrix \(\mathbf{M}[t]\) is column stochastic, and that \[\boldsymbol{z}[t]=\left(\boldsymbol{M}[t]\otimes\boldsymbol{I} \right)\boldsymbol{z}[t-1],\quad\forall\ t\mod\Gamma\neq 0,\] where \(\boldsymbol{z}[t]\in\mathbb{R}^{\widetilde{N}d}\) is the vector that stacks all the local \(z\)'s. The update of the weight vector has the same matrix form since the update of value \(z\) and weight \(m\) are identical. Global fusion iterationsFix \(t\) be arbitrary iteration such that \(t\mod\Gamma=0\). We construct matrix \(\boldsymbol{M}\) in two steps. We let \(\boldsymbol{\bar{M}}\) denote the matrix constructed the same way as above. Let \(\boldsymbol{F}\) be the matrix that captures the mass push among the designated agents under the coordination of the parameter server. Specifically, \[\boldsymbol{F}_{j_{0},j_{0}} =\frac{M+1}{2M}\quad\text{ for each designated agent }j_{0};\] \[\boldsymbol{F}_{j_{0},j^{\prime}_{0}} =\frac{1}{2M}\quad\text{ for distinct designated agents }j_{0},j^{\prime}_{0},\] with all the other entries being zeros. Henceforth, we refer to matrix \(\boldsymbol{F}\) as hierarchical fusion matrix. Clearly, \(\boldsymbol{F}\) is a doubly-stochastic matrix. Hence, we define \(\boldsymbol{M}\) as \[\boldsymbol{M}[t]=\boldsymbol{F}\boldsymbol{\bar{M}}[t]. \tag{11}\] It is easy to see that the dynamics of \(\boldsymbol{z}[t]\) for \(t\mod\Gamma=0\) also obey \[\boldsymbol{z}[t]=\left(\boldsymbol{M}[t]\otimes\boldsymbol{I} \right)\boldsymbol{z}[t-1].\] OverallWe have \[\boldsymbol{z}[t] =\left(\boldsymbol{M}[t]\otimes\boldsymbol{I}\right)\left( \boldsymbol{M}[t-1]\otimes\boldsymbol{I}\right)\boldsymbol{z}[t-2]\] \[=\left(\boldsymbol{M}[t]\otimes\boldsymbol{I}\right)\left( \boldsymbol{M}[t-1]\otimes\boldsymbol{I}\right)\cdots\left(\boldsymbol{M}[1] \otimes\boldsymbol{I}\right)\boldsymbol{z}[0].\] Fig. 5: Augmented graph example That is, the evolution of \(z\) is controlled by the matrix product \(\mathbf{M}[t]\mathbf{M}[t-1]\cdots\mathbf{M}[1]\). 
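A small sketch of the augmented-graph construction in Definition 1 and of the hierarchical fusion block \(\boldsymbol{F}\) used in Eq. (11) is given below; node labels and helper names are illustrative.

```python
import numpy as np

def augment_graph(nodes, edges):
    """Definition 1: add one virtual agent per directed edge (j', j) and
    route that edge through it: j' -> n_{j'j} -> j, keeping the original edges."""
    virtual = {e: f"n_{e[0]}{e[1]}" for e in edges}
    aug_nodes = list(nodes) + list(virtual.values())
    aug_edges = set(edges)
    for (src, dst), v in virtual.items():
        aug_edges |= {(src, v), (v, dst)}
    return aug_nodes, sorted(aug_edges)

def fusion_matrix(num_subnetworks):
    """The doubly stochastic fusion block acting on the designated agents:
    F = (1/2) I + (1/(2M)) 11^T, i.e., F_{j0,j0} = (M+1)/(2M), off-diagonal 1/(2M)."""
    M = num_subnetworks
    return 0.5 * np.eye(M) + np.full((M, M), 1.0 / (2 * M))

nodes = ["1", "2", "3"]
edges = [("1", "2"), ("2", "1"), ("1", "3"), ("3", "2")]   # the example of Fig. 5
aug_nodes, aug_edges = augment_graph(nodes, edges)
print(len(aug_nodes), len(aug_edges))   # 7 nodes (3 real + 4 virtual), 12 edges
F = fusion_matrix(3)
print(F.sum(axis=0), F.sum(axis=1))     # all ones: F is doubly stochastic
```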
In general, let \(\mathbf{\Psi}(r,t)\) be the product of \(t-r+1\) matrices \[\mathbf{\Psi}(r,t)\triangleq\prod_{\tau=r}^{t}\ \mathbf{M}^{\top}[\tau]=\mathbf{M}^{ \top}[r]\mathbf{M}^{\top}[r+1]\cdots\mathbf{M}^{\top}[t],\] where \(r\leq t\) with \(\mathbf{\Psi}(t+1,t)\triangleq\mathbf{I}\) by convention, i.e., \(\mathbf{\Psi}(r,t)=\left(\mathbf{M}[t]\mathbf{M}[t-1]\cdots\mathbf{M}[1]\right)^{\top}\). Notably, \(\mathbf{M}^{\top}[\tau]\) is row-stochastic for each \(\tau\) of interest. Without loss of generality, let us fix an arbitrary bijection between \(\{N+1,\cdots,\widetilde{N}\}\) and \((j,j^{\prime})\in\mathcal{E}_{i}\) for \(i=1,\cdots,M\). For \(j\in\mathcal{V}_{i}\), we have \[z_{j}^{i}[t]=\sum_{j^{\prime}=1}^{\widetilde{N}}z_{j^{\prime}}[0]\mathbf{\Psi}_{j ^{\prime}j}(1,t)=\sum_{j^{\prime}=1}^{\widetilde{N}}w_{j^{\prime}}^{i}\mathbf{ \Psi}_{j^{\prime}j}(1,t), \tag{12}\] where the last equality holds due to \(z_{j}[0]=w_{j}^{i}\) for \(j=1,\cdots,N\) and \(z_{j}[0]=0\in\mathbb{R}^{d}\) when \(j>N\), i.e., when \(j\) corresponds to an edge. ### _Auxiliary Lemmas_ To show the convergence of Algorithm 1, we investigate the convergence behavior of \(\mathbf{\Psi}(r,t)\) (where \(r\leq t\)) using ergodic coefficients and some celebrated results obtained by Hajnal [7]. The remaining proof follows the same line as that in [17, 18], and is presented below for completeness. Given a row stochastic matrix \(\mathbf{A}\), coefficients of ergodicity \(\delta(\mathbf{A})\) is defined as: \[\delta(\mathbf{A}) \triangleq\max_{j}\ \max_{i_{1},i_{2}}\ \ \left|\mathbf{A}_{i_{1}j}-\mathbf{A}_{i_{2}j}\right|, \tag{13}\] \[\lambda(\mathbf{A}) \triangleq 1-\min_{i_{1},i_{2}}\sum_{j}\min\{\mathbf{A}_{i_{1}j}, \mathbf{A}_{i_{2}j}\}. \tag{14}\] **Proposition 3**: _[_7_]_ _For any \(p\) square row stochastic matrices \(\mathbf{Q}[1],\mathbf{Q}[2],\ldots\mathbf{Q}[p]\), it holds that_ \[\delta(\mathbf{Q}[1]\mathbf{Q}[2]\ldots\mathbf{Q}[p])\ \leq\ \Pi_{k=1}^{p}\ \lambda( \mathbf{Q}[k]). \tag{15}\] Proposition 3 implies that if \(\lambda(\mathbf{Q}[k])\leq 1-c\) for some \(c>0\) and for all \(1\leq k\leq p\), then \(\delta(\mathbf{Q}[1],\mathbf{Q}[2]\cdots\mathbf{Q}[p])\) goes to zero exponentially fast as \(p\) increases. The following lemmas are useful in the analysis of our follow-up algorithms. **Lemma 1**: _For \(r\leq t\) such that \(\lfloor t/2\Gamma\rfloor-\lceil r/2\Gamma\rceil\geq 0\), it holds that \(\delta\left(\mathbf{\Psi}(r,t)\right)\leq\gamma^{(t-r)/2\Gamma}\), where \(\gamma=1-\frac{1}{4M^{2}}\left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B}\) as per Theorem 2._ The following rewriting holds: \[\mathbf{\Psi}(r,t)=\mathbf{\Psi}(r,\Gamma\lceil r/\Gamma\rceil)\left( \prod_{k=\lceil r/\Gamma\rceil}^{\lfloor t/\Gamma\rfloor-1}\mathbf{\Psi}(k\Gamma+ 1,(k+1)\Gamma)\right)\\ \times\mathbf{\Psi}(\Gamma\lfloor t/\Gamma\rfloor+1,\,t).\] By Proposition 3, we have \[\delta\left(\mathbf{\Psi}(r,t)\right) \leq\prod_{k=\lceil r/2\Gamma\rceil}^{\lfloor t/2\Gamma\rfloor-1 }\lambda\left(\mathbf{\Psi}(2k\Gamma+1,2(k+1)\Gamma)\right)\] \[\leq\gamma^{(t-r)/2\Gamma},\] where \(\gamma:=1-\frac{1}{4M^{2}}\left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B}\). **Lemma 2**: _Let \(D^{*}:=\max_{i\in[M]}D_{i}\). Choose \(\Gamma=BD^{*}\). Suppose that \(t-r+1\geq 2\Gamma\). 
**Lemma 2**: _Let \(D^{*}:=\max_{i\in[M]}D_{i}\). Choose \(\Gamma=BD^{*}\). Suppose that \(t-r+1\geq 2\Gamma\). Then every entry of the matrix product \(\mathbf{\Psi}(r,t)\) is lower bounded by \(\frac{1}{4M^{2}}\left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B}\)._

Observe that
\[\mathbf{\Psi}(r,t)=\mathbf{M}^{\top}[r]\cdots\mathbf{M}^{\top}[t-D_{i}B]\cdots\mathbf{M}^{\top}[t].\]
By Assumptions 1 and 2, each subnetwork \(S^{i}\) is strongly connected and each link is reliable at least once during any \(B\) consecutive iterations. Hence, every entry in the \(i\)-th block of the matrix product \(\mathbf{M}^{\top}[t-D_{i}B+1]\cdots\mathbf{M}^{\top}[t]\) is lower bounded by \(\beta_{i}^{D_{i}B}\). Each of the remaining matrices in \(\mathbf{\Psi}(r,t)\) is row-stochastic. Hence, every entry in the block \(i\) of \(\mathbf{\Psi}(r,t)\) is lower bounded by \(\beta_{i}^{D_{i}B}\). By the construction of the fusion matrix \(\mathbf{F}\) and the existence of self-loops, we know that during \(2\Gamma\) consecutive iterations, from any node \(j\), we can reach any other node in the hierarchical FL system. Let \(j\) and \(j^{\prime}\) be two arbitrary nodes, possibly in different subnetworks, and let \(S_{i}\) and \(S_{i^{\prime}}\) be the subnetworks containing \(j\) and \(j^{\prime}\), respectively. It holds that
\[\mathbf{\Psi}_{j,j^{\prime}}(t-2\Gamma+1,t)\geq\frac{1}{2M}\beta_{i^{\prime}}^{D_{i^{\prime}}B}\,\frac{1}{2M}\beta_{i}^{D_{i}B}\geq\frac{1}{4M^{2}}\left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B},\]
proving the lemma.

### _Proof of Theorem 1_

Notably, the update of the mass vector is \(\mathbf{m}[t]=\left(\mathbf{M}[t]\cdots\mathbf{M}[1]\right)\mathbf{m}[0]=\mathbf{\Psi}(1,t)\mathbf{m}[0],\) where \(m_{j}[0]=1\) if \(j\leq N\) and \(m_{j}[0]=0\) otherwise. Hence,
\[\left\|\frac{z_{j}^{i}[t]}{m_{j}^{i}[t]}-\frac{1}{N}\sum_{i=1}^{M}\sum_{j^{\prime}=1}^{n_{i}}w_{j^{\prime}}^{i}\right\|_{2}=\left\|\frac{N\sum_{i=1}^{M}\sum_{j^{\prime}=1}^{n_{i}}w_{j^{\prime}}^{i}\mathbf{\Psi}_{j^{\prime}j}(1,t)}{N\sum_{i=1}^{M}\sum_{j^{\prime}=1}^{n_{i}}\mathbf{\Psi}_{j^{\prime}j}(1,t)}-\frac{\sum_{i=1}^{M}\sum_{j^{\prime}=1}^{n_{i}}w_{j^{\prime}}^{i}\sum_{i=1}^{M}\sum_{k=1}^{n_{i}}\mathbf{\Psi}_{kj}(1,t)}{N\sum_{i=1}^{M}\sum_{j^{\prime}=1}^{n_{i}}\mathbf{\Psi}_{j^{\prime}j}(1,t)}\right\|_{2}\]
\[=\frac{\left\|\sum_{i=1}^{M}\sum_{j^{\prime}=1}^{n_{i}}w_{j^{\prime}}^{i}\sum_{i=1}^{M}\sum_{k=1}^{n_{i}}\left(\mathbf{\Psi}_{j^{\prime}j}(1,t)-\mathbf{\Psi}_{kj}(1,t)\right)\right\|_{2}}{N\sum_{i=1}^{M}\sum_{j^{\prime}=1}^{n_{i}}\mathbf{\Psi}_{j^{\prime}j}(1,t)}\]
\[\leq\frac{\sum_{i=1}^{M}\sum_{j^{\prime}=1}^{n_{i}}\left\|w_{j^{\prime}}^{i}\right\|_{2}\,\delta\left(\mathbf{\Psi}(1,t)\right)}{\sum_{i=1}^{M}\sum_{j^{\prime}=1}^{n_{i}}\mathbf{\Psi}_{j^{\prime}j}(1,t)}.\]
By Lemmas 1 and 2, we conclude that
\[\frac{\sum_{i=1}^{M}\sum_{j^{\prime}=1}^{n_{i}}\left\|w_{j^{\prime}}^{i}\right\|_{2}\,\delta\left(\mathbf{\Psi}(1,t)\right)}{\sum_{i=1}^{M}\sum_{j^{\prime}=1}^{n_{i}}\mathbf{\Psi}_{j^{\prime}j}(1,t)}\leq\frac{4M^{2}\sum_{i=1}^{M}\sum_{j^{\prime}=1}^{n_{i}}\left\|w_{j^{\prime}}^{i}\right\|_{2}}{\left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B}N}\min\{1,\gamma^{\lfloor t/2\Gamma\rfloor-1}\}.\]

## Appendix B State Estimation

### _Proof of Proposition 1_

By definition,
\(f(w)=\frac{1}{2}(w-w^{*})^{\top}K(w-w^{*})+\sum_{i=1}^{M}\sum_{j=1}^{n_{i}}\sigma_{j}^{i}\). We have
\[\left\|f(w)-f(w^{\prime})\right\|_{2}\leq\frac{1}{2}\left\|\left(w-w^{\prime}\right)^{\top}K\left(w-w^{*}\right)\right\|_{2}+\frac{1}{2}\left\|\left(w^{\prime}-w^{*}\right)^{\top}K\left(w-w^{\prime}\right)\right\|_{2}\leq R_{0}\left\|K\right\|_{2}\left\|w-w^{\prime}\right\|_{2}.\]
Since \(\left(H_{j}^{i}\right)^{\top}H_{j}^{i}\succeq 0\), it holds that \(\left\|H_{j}^{\top}H_{j}\right\|_{2}\leq\left\|K\right\|_{2}\). By a similar argument, we can conclude that \(f_{j}^{i}\) is also \(L\)-Lipschitz continuous with \(L:=R_{0}\left\|K\right\|_{2}\).

### _Proof of Theorem 2_

Let \(\bar{z}[t]=\frac{1}{N}\sum_{i=1}^{M}\sum_{j=1}^{n_{i}}z_{j}^{i}[t]\). Expanding \(\bar{z}[t]\), we have
\[\bar{z}[t]=\frac{1}{N}\sum_{i=1}^{M}\sum_{j=1}^{n_{i}}\sum_{r=0}^{t-1}g_{j}^{i}[r]=\frac{1}{N}\sum_{i=1}^{M}\sum_{j=1}^{n_{i}}\sum_{r=0}^{t-1}(H_{j}^{i})^{\top}H_{j}^{i}\left(w_{j}^{i}[r]-w^{*}\right)+\frac{1}{N}\sum_{i=1}^{M}\sum_{j=1}^{n_{i}}(H_{j}^{i})^{\top}\sum_{r=0}^{t-1}\xi_{j}^{i}[r].\]
It is worth noting that \(\{w_{j}^{i}[t]\}_{t=0}^{\infty}\) is obtained using stochastic gradients. Let \(\{v[t]\}_{t=0}^{\infty}\) be the auxiliary sequence such that \(v[t]:=\Pi_{\mathcal{W}}^{\varphi}\left(\bar{z}[t],\eta[t-1]\right)\), and let \(\widehat{v}[t]:=\frac{1}{t}\sum_{r=0}^{t}v[r]\). Since \(K\) is invertible (by Assumption 4), the global objective \(f\) has a unique minimizer \(w^{*}\). In addition, we have
\[f\left(\widehat{w}_{j}^{i}[T]\right)-f(w^{*})\geq\lambda_{\min}\left(K\right)\left\|w_{j}^{i}-w^{*}\right\|_{2}^{2}.\]
Thus, it remains to bound \(f\left(\widehat{w}_{j}^{i}[T]\right)-f(w^{*})\). Fix a time horizon \(T\). Since \(w^{*}\in\mathcal{W}\), we have
\[f\left(\widehat{w}_{j}^{i}[T]\right)-f(w^{*})\leq f\left(\widehat{v}[T]\right)-f(w^{*})+L\left\|\widehat{w}_{j}^{i}[T]-\widehat{v}[T]\right\|_{2},\]
where the inequality follows from the \(L\)-Lipschitz continuity of \(f\) (Proposition 1).
\tag{17}\] For each \(j\in\cup_{i=1}^{M}\mathcal{V}_{i}\), we have \[z_{j}[t]=A\frac{z_{j}^{+}[t]}{d_{j}[t]+1}-g_{j}[t]=A\frac{1}{d_{ j}[t]+1}\times\] \[\left(\frac{z_{j}[t-1]}{d_{j}[t]+1}+\sum_{j^{\prime}\in\mathcal{I} _{j}[t]}(\rho_{j^{\prime}j}^{+}[t]-\rho_{j^{\prime}j}[t-1])\right)-g_{j}[t]\] \[\quad=\frac{Az_{j}[t-1]}{(d_{j}[t]+1)}+\sum_{j^{\prime}\in \mathcal{I}_{j}[t]}\frac{B_{j^{\prime}j}[t]}{d_{j}[t]+1}Az_{n_{j^{\prime}j}}[ t-1]\] \[\quad+\sum_{j^{\prime}\in\mathcal{I}_{j}[t]}\frac{B_{j^{\prime}j }[t]}{(d_{j}[t]+1)\left(d_{j^{\prime}}[t]+1\right)}Az_{j^{\prime}}[t-1]-g_{j} [t].\] For each edge \((j^{\prime},j)\), we have \[z_{n_{j^{\prime}j}[t]}=\sigma_{j^{\prime}}[t]-\rho_{j^{\prime}j} [t]=A\left(\sigma_{j^{\prime}}^{+}[t]+\frac{z_{j^{\prime}}^{+}[t]}{d_{j^{ \prime}}[t]+1}\right)-A\rho_{j^{\prime}j}^{+}[t]\] \[=A\left[\left(1-B_{j^{\prime}j}[t]\right)\left(z_{n_{j^{\prime}j }}[t-1]+\frac{z_{j^{\prime}}[t-1]}{d_{j^{\prime}}[t]+1}\right)+\frac{z_{j^{ \prime}}^{+}[t]}{d_{j^{\prime}}[t]+1}\right]\] \[\quad=(1-B_{j^{\prime}j}[t])\,Az_{n_{j^{\prime}j}}[t-1]\] \[\quad\quad+\left(\frac{1-B_{j^{\prime}j}[t]}{d_{j^{\prime}}[t]+1 }+\frac{1}{\left(d_{j^{\prime}}[t]+1\right)^{2}}\right)Az_{j^{\prime}}[t-1]\] \[\quad\quad+\sum_{k\in\mathcal{I}_{j^{\prime}}}\frac{B_{kj^{ \prime}}[t]}{d_{j^{\prime}}[t]+1}Az_{n_{kj^{\prime}}}[t-1]\] \[\quad\quad+\sum_{k\in\mathcal{I}_{j^{\prime}}}\frac{B_{kj^{ \prime}}[t]}{\left(d_{j^{\prime}}[t]+1\right)\left(d_{k}[t+1\right)}Az_{k}[t-1]. \tag{18}\] Hence, we have \[\mathbf{z}[t]=\left(\mathbf{M}[t]\otimes\mathbf{I}\right)\widetilde{\mathbf{z}}[t-1]-\mathbf{g}[t],\] where \[\left[\widetilde{\mathbf{z}}[t-1]\right]^{\top}=\left(z_{1}[t-1]^{\top}A^{\top}, \cdots,z_{\widetilde{N}}[t-1]^{\top}A^{\top}\right)\] i.e., \(\widetilde{\mathbf{z}}[t-1]=\left(\mathbf{I}\otimes A\right)\mathbf{z}[t-1]\). Similarly, we can show the same matrix representation holds for any \(t\) be arbitrary iteration such that \(t\mod\Gamma=0\). Following from the fact that \(\left(\mathbf{A}\otimes\mathbf{B}\right)\left(\mathbf{C}\otimes\mathbf{D}\right)=\left(\mathbf{ AC}\right)\otimes\left(\mathbf{BD}\right)\) with matrices \(\mathbf{A},\mathbf{B},\mathbf{C}\), and \(\mathbf{D}\) of proper dimensions so that the relevant matrix product is well-defined, we have \[\mathbf{z}[t] =\left(\mathbf{M}[t]\otimes\mathbf{I}\right)\left(\mathbf{I}\otimes A\right) \mathbf{z}[t-1]-\mathbf{g}[t]\] \[=\left(\mathbf{M}[t]\otimes\mathbf{A}\right)\mathbf{z}[t-1]-\mathbf{g}[t]. \tag{19}\] Unrolling Eq.(19), we get \[\mathbf{z}[t] =\left(\mathbf{M}[t]\otimes A\right)\mathbf{z}[t-1]-\mathbf{g}[t]\] \[\overset{(a)}{=}\left(\mathbf{M}[t]\otimes A\right)\cdots\left(\mathbf{ M}[1]\otimes A\right)\mathbf{z}[0]\] \[\quad\quad-\sum_{r=1}^{t}\left(\left(\mathbf{M}[t]\otimes A\right) \cdots\left(\mathbf{M}[r+1]\otimes A\right)\right)\mathbf{g}[r]\] \[=-\sum_{r=1}^{t}\left(\left(\mathbf{M}[t]\otimes A\right)\cdots\left( \mathbf{M}[r+1]\otimes A\right)\right)\mathbf{g}[r], \tag{20}\] where in equality (a) we use the convention that \(\left(\mathbf{M}[t]\otimes A\right)\cdots\left(\mathbf{M}[r+1]\otimes A\right)=\mathbf{I}_{d \widetilde{N}}\). 
Repeatedly applying the fact that \(\left(\mathbf{A}\otimes\mathbf{B}\right)\left(\mathbf{C}\otimes\mathbf{D}\right)=\left(\mathbf{ AC}\right)\otimes\left(\mathbf{BD}\right)\), we obtain \[\mathbf{z}[t]=-\sum_{r=1}^{t}\left(\left(\mathbf{M}[t]\cdots\mathbf{M}[r+1]\right)\otimes A ^{t-r}\right)\mathbf{g}[r].\] Notably, the update of the mass vector remains the same as that for Algorithm 1, i.e., \[\mathbf{m}[t]=\left(\mathbf{M}[t]\cdots\mathbf{M}[1]\right)\mathbf{m}[0],\] where \(m_{j}[0]=1\) if \(j\leq N\) and \(m_{j}[0]=0\) otherwise. ### _Proof of Theorem 3_ Recall that we index the nodes in the augmented graphs from \(1\) to \(\widetilde{N}\) with nodes \(1\) to \(N\) corresponding to the actual nodes (i.e., \(j\in\cup_{i=1}^{M}\mathcal{V}_{i}\)) and nodes \(N\) to \(\widetilde{N}\) corresponding to the virtual nodes (i.e., \(j\in\cup_{i=1}^{M}\mathcal{E}_{i}\)). For each agent \(j\in\cup_{i=1}^{M}\mathcal{V}_{i}\), we have \[z_{j}[t]=-\sum_{r=1}^{t}\sum_{j^{\prime}=1}^{\widetilde{N}}A^{t-r}g_{j^{\prime}}[r ]\mathbf{\Psi}_{j^{\prime}j}(r+1,t),\] where \(g_{j}[t]\) is the stochastic gradient of Eq. (3) which can be rewritten as \[g_{j}[t] =H_{j}^{\intercal}\left(H_{j}w_{j}[t-1]-H_{j}w^{*}[t]-\xi_{j}[t-1]\right)\] \[=H_{j}^{\intercal}H_{j}\left(w_{j}[t-1]-w^{*}[t-1]\right)-H_{j}^ {\intercal}\xi_{j}[t-1].\] In addition, \[m_{j}[t]=\sum_{j^{\prime}=1}^{\widetilde{N}}\mathbf{\Psi}_{j^{\prime}j}(1,t)m_{j}[0 ]=\sum_{j^{\prime}=1}^{N}\mathbf{\Psi}_{j^{\prime}j}(1,t).\] The evolution of \(\bar{z}\) can be formally described as \[\bar{z}[t+1]=\frac{1}{N}\sum_{j=1}^{\bar{N}}z_{j}[t+1]\] \[=-\frac{1}{N}\sum_{r=1}^{t+1}A^{t+1-r}\eta[r]\sum_{j^{\prime}=1}^{N }g_{j^{\prime}}[r]\] \[=-\frac{1}{N}\left(A\sum_{r=1}^{t}A^{t-r}\eta[r]\sum_{j^{\prime}= 1}^{N}g_{j^{\prime}}[r]+\eta[t+1]\sum_{j^{\prime}=1}^{N}g_{j^{\prime}}[t+1]\right)\] \[=Az[t]-\eta[t+1]\frac{1}{N}\sum_{j^{\prime}=1}^{N}g_{j^{\prime}}[t +1].\] Let \(\widetilde{w}_{j}[t]:=\frac{z_{j}[t]}{m_{j}[t]}\). By non-expansion property of projection onto a convex and compact set, we have \[\left\|w_{j}[t]-w^{*}[t]\right\|_{2}\leq\left\|\widetilde{w}_{j}[t]-w^{*}[t] \right\|_{2}.\] Note that \[\widetilde{w}_{j}[t]-w^{*}[t]=\underbrace{(\bar{z}[t]-w^{*}[t])}_{(a)}+ \underbrace{(\widetilde{w}_{j}[t]-\bar{z}[t])}_{(b)}. 
\tag{21}\]

#### Bounding (b):

\[\left\|\widetilde{w}_{j}[t]-\bar{z}[t]\right\|_{2}=\left\|\frac{z_{j}[t]}{m_{j}[t]}-\bar{z}[t]\right\|_{2}\]
\[=\left\|\frac{\sum_{r=1}^{t-1}\sum_{j^{\prime}=1}^{N}A^{t-r}\eta[r]g_{j^{\prime}}[r]\mathbf{\Psi}_{j^{\prime},j}(r,t)}{\sum_{j^{\prime}=1}^{N}\mathbf{\Psi}_{j^{\prime},j}(1,t)}-\frac{1}{N}\sum_{r=1}^{t}A^{t-r}\eta[r]\sum_{j^{\prime}=1}^{N}g_{j^{\prime}}[r]\right\|_{2}\]
\[=\left\|\frac{\sum_{k=1}^{N}\sum_{r=1}^{t-1}\sum_{j^{\prime}=1}^{N}A^{t-r}\eta[r]g_{j^{\prime}}[r]\mathbf{\Psi}_{j^{\prime},j}(r,t)}{N\sum_{j^{\prime}=1}^{N}\mathbf{\Psi}_{j^{\prime},j}(1,t)}-\frac{\sum_{r=1}^{t}A^{t-r}\eta[r]\sum_{j^{\prime}=1}^{N}g_{j^{\prime}}[r]\sum_{k=1}^{N}\mathbf{\Psi}_{k,j}(1,t)}{N\sum_{j^{\prime}=1}^{N}\mathbf{\Psi}_{j^{\prime},j}(1,t)}\right\|_{2}\]
\[\overset{(a)}{\leq}\frac{4M^{2}L_{0}\sum_{r=0}^{t-1}\sum_{j^{\prime}=1}^{N}\left\|A\right\|_{2}^{t-r}\eta[r]\delta\left(\mathbf{\Psi}(r,t)\right)}{N\left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B}}\]
\[\overset{(b)}{\leq}\frac{4M^{2}L_{0}\sum_{r=0}^{t-1}\left\|A\right\|_{2}^{t-r}\eta[r]\gamma^{\frac{t-r}{2\Gamma}}}{\left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B}}=\frac{4M^{2}L_{0}\sum_{r=0}^{t-1}\eta[r]\left(\left\|A\right\|_{2}\gamma^{\frac{1}{2\Gamma}}\right)^{t-r}}{\left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B}},\]
where inequality (a) follows from Proposition 2, the definition of \(\delta\left(\cdot\right)\) as per Eq.(13), and Lemma 2, and inequality (b) follows from Lemma 1.

For ease of exposition, let \(b=\left\|A\right\|_{2}\gamma^{\frac{1}{2\Gamma}}\), and let \(r^{*}\in\{1,2,\cdots,t-1\}\). It holds that
\[\sum_{r=0}^{t-1}\eta[r]\left(\left\|A\right\|_{2}\gamma^{\frac{1}{2\Gamma}}\right)^{t-r}=\sum_{r=0}^{t-1}\eta[r]b^{t-r}=\frac{1}{\left\|K\right\|_{2}}\left(\underbrace{b^{t}+\sum_{r=1}^{r^{*}}\frac{1}{r}b^{t-r}}_{(i)}+\underbrace{\sum_{r=r^{*}+1}^{t-1}\frac{1}{r}b^{t-r}}_{(ii)}\right).\]
Term \((i)\) can be upper bounded as
\[b^{t}+\sum_{r=1}^{r^{*}}\frac{1}{r}b^{t-r}\leq b^{t}+\sum_{r=1}^{r^{*}}b^{t-r}=\sum_{r=0}^{r^{*}}b^{t-r}\leq\frac{b^{t-r^{*}}}{1-b},\]
and term \((ii)\) can be upper bounded as
\[\sum_{r=r^{*}+1}^{t-1}\frac{1}{r}b^{t-r}\leq\sum_{r=r^{*}+1}^{t-1}\frac{1}{r^{*}+1}b^{t-r}=\frac{1}{r^{*}+1}\sum_{r=r^{*}+1}^{t-1}b^{t-r}\leq\frac{1}{(r^{*}+1)(1-b)}.\]
Choosing \(r^{*}=t/2\), when
\[t\geq\frac{2}{\log 1/b}\log\left(\frac{2}{\log 1/b}\right)\triangleq t_{0}, \tag{22}\]
where the base of the log is \(2\), both of the upper bounds above can be further upper bounded by \(\frac{2}{(1-b)t}\). Thus,
\[\sum_{r=0}^{t-1}\eta[r]\left(\left\|A\right\|_{2}\gamma^{\frac{1}{2\Gamma}}\right)^{t-r}\leq\frac{4}{(1-b)t}\quad\forall\ t\geq t_{0}.\]
For \(t<t_{0}\), it holds that
\[\sum_{r=0}^{t-1}\eta[r]\left(\left\|A\right\|_{2}\gamma^{\frac{1}{2\Gamma}}\right)^{t-r}\leq\sum_{r=0}^{t-1}\eta[r]\leq\sum_{r=0}^{t-1}1\leq\sum_{r=0}^{t_{0}-1}1=t_{0}.\]
Therefore,
\[\left\|\widetilde{w}_{j}[t]-\bar{z}[t]\right\|_{2}\leq\begin{cases}\dfrac{16M^{2}L_{0}}{\left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B}\left\|K\right\|_{2}(1-b)t}&\text{if }t\geq t_{0},\\[2ex]\dfrac{4M^{2}L_{0}\,t_{0}}{\left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B}}&\text{otherwise},\end{cases}\]
proving the first part of the theorem.
#### Bounding (a): \[\bar{z}[t]-w^{*}[t]\] \[=A\bar{z}[t-1]-\eta[t]\frac{1}{N}\sum_{j=1}^{N}\left(H_{j}^{\top}H _{j}\left(w_{j}[t-1]-w^{*}[t-1]\right)\right.\] \[\left.+H_{j}^{\top}\xi_{j}[t-1]\right)-w^{*}[t]\] \[=A\left(\bar{z}[t-1]-w^{*}[t-1]\right)-\eta[t]\frac{1}{N}\sum_{j=1 }^{N}H_{j}^{\top}\xi_{j}[t-1]\] \[\quad-\eta[t](\frac{1}{N}\sum_{j=1}^{N}H_{j}^{\top}H_{j}\left(w_{j }[t-1]-w^{*}[t-1]\right)). \tag{23}\] Adding and subtracting \(\bar{z}[t-1]\) in each of the summand in the last term of Eq.(23), and regrouping the terms, we get \[\bar{z}[t]-w^{*}[t] =\left(A-\frac{\eta[t]}{N}K\right)\left(\bar{z}[t-1]-w^{*}[t-1]\right)\] \[\qquad-\frac{\eta[t]}{N}K\left(w_{j}[t-1]-\bar{z}[t-1]\right)\] \[\qquad-\frac{\eta[t]}{N}\sum_{j=1}^{N}H_{j}^{\top}\xi_{j}[t-1].\] Let \(C[t-1]=\eta[t]\frac{1}{N}\sum_{j=1}^{N}H_{j}^{\top}H_{j}\left(w_{j}[t-1]-\bar{ z}[t-1]\right)\) and \(W[t-1]=\eta[t]\frac{1}{N}\sum_{j=1}^{N}H_{j}^{\top}\xi_{j}[t-1]\). Notably, \(\mathbb{E}\left[W[t-1]\right]=\mathbb{E}\left[\eta[t]\frac{1}{N}\sum_{j=1}^{N} H_{j}^{\top}\xi_{j}[t-1]\right]=\mathbf{0}\). We unroll the dynamics of \(\bar{z}[t]-w^{*}[t]\) as \[\bar{z}[t]-w^{*}[t]=\left(A-\frac{\eta[t]}{N}K\right)\left(\bar{ z}[t-1]-w^{*}[t-1]\right)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-C[t-1]-W[t-1]\] \[=\underbrace{\left(A-\frac{\eta[t]}{N}K\right)\cdots\left(A- \frac{\eta[1]}{N}K\right)\left(\bar{z}[0]-w^{*}[0]\right)}_{(A)}\] \[-\underbrace{\sum_{r=2}^{t+1}\left(A-\frac{\eta[t]}{N}K\right) \cdots\left(A-\frac{\eta[r]}{N}K\right)C[r-2]}_{(B)}\] \[-\underbrace{\sum_{r=2}^{t+1}\left(A-\frac{\eta[t]}{N}K\right) \cdots\left(A-\frac{\eta[r]}{N}K\right)W[r-2]}_{(C)}.\] Since \(A\) is symmetric, it admits an eigenvalue decomposition, i.e., \[A=U\Lambda U^{\top},\] where \(U\in\mathbb{R}^{d\times d}\) is the square \(d\times d\) matrix whose \(i\)-th column is the \(i\)-th eigenvector of \(A\), and \(\Lambda\) is the diagonal matrix whose diagonal elements are the corresponding eigenvalues. Thus, \[A-\eta[t]K=U\Lambda U^{\top}-\eta[t]K=U\left(\Lambda-\eta[t]U^{\top}KU\right) U^{\top}.\] For any \(r\), it holds that \[\left(A-\frac{\eta[t]}{N}K\right)\left(A-\frac{\eta[t-1]}{N}K \right)\cdots\left(A-\frac{\eta[r]}{N}K\right)\] \[=U\left(\Lambda-\eta[t]U^{\top}KU\right)U^{\top}U\left(\Lambda- \eta[t-1]U^{\top}KU\right)U^{\top}\] \[\qquad\qquad\qquad\qquad\qquad\cdots U\left(\Lambda-\eta[r]U^{ \top}KU\right)U^{\top}\] \[=U\left(\Lambda-\eta[t]U^{\top}KU\right)\left(\Lambda-\eta[t-1]U^ {\top}KU\right)\] \[\qquad\qquad\qquad\qquad\qquad\cdots\left(\Lambda-\eta[r]U^{ \top}KU\right)U^{\top}.\] Thus, \[\left\|\left(A-\frac{\eta[t]}{N}K\right)\left(A-\frac{\eta[t-1]}{ N}K\right)\cdots\left(A-\frac{\eta[r]}{N}K\right)\right\|_{2}\] \[\leq\prod_{\tau=r}^{t}\left\|\Lambda-\eta[\tau]U^{\top}KU\right\| _{2}.\] Notably, \(U\) is a rotation matrix. Thus, \(U^{\top}KU\) and \(K\) share the same set of eigenvalues. 
By definition, we have \[\left\|\Lambda-\eta[\tau]U^{\top}KU\right\|_{2} =\sup_{v\in\mathcal{S}^{d}}v^{\top}\left(\Lambda-\eta[\tau]U^{ \top}KU\right)v\] \[=\sup_{v\in\mathcal{S}^{d}}v^{\top}\Lambda v-\eta[\tau]\inf_{v\in \mathcal{S}^{d}}v^{\top}U^{\top}KUv\] \[\leq 1-\eta[\tau]\lambda_{d}.\] So for \(t\geq r\) \[\left\|\left(A-\frac{\eta[t]}{N}K\right)\cdots\left(A-\frac{\eta [r]}{N}K\right)\right\|_{2}\leq\prod_{\tau=r}^{t}\left(1-\eta[\tau]\lambda_{d}\right)\] \[=\exp\left(\sum_{\tau=r}^{t}\ln\left(1-\eta[\tau]\lambda_{d}\right)\right)\] \[\leq\exp\left(\sum_{\tau=r}^{t}-\eta[\tau]\lambda_{d}\right)\] \[=\exp\left(\frac{\lambda_{1}}{\lambda_{d}}\right)\exp\left(-\sum_ {\tau=r}^{t}\frac{1}{\tau}\right)\] \[\leq\exp\left(\frac{\lambda_{1}}{\lambda_{d}}\right)\exp\left(- \log(t)+\log(r-1)\right)\] \[=\exp\left(\frac{\lambda_{1}}{\lambda_{d}}\right)\frac{r-1}{t}.\] Besides, \[C[t-1]=\eta[t]\frac{1}{N}\sum_{j=1}^{N}H_{j}^{\top}H_{j}\left(w_{j}[t-1]-\bar {z}[t-1]\right).\] #### Bounding (A) \[\left\|\left(A-\frac{\eta[t]}{N}K\right)\cdots\left(A-\frac{\eta [1]}{N}K\right)\left(\bar{z}[0]-w^{*}[0]\right)\right\|_{2}\] \[\leq\left\|\left(A-\frac{\eta[t]}{N}K\right)\cdots\left(A-\frac {\eta[1]}{N}K\right)\right\|_{2}\left\|\bar{z}[0]-w^{*}[0]\right\|_{2}\] \[=\left\|\left(A-\frac{\eta[t]}{N}K\right)\cdots\left(A-\frac{ \eta[1]}{N}K\right)\right\|_{2}\left\|w^{*}[0]\right\|_{2}\] \[\leq\left\|w^{*}[0]\right\|_{2}\exp\left(\frac{\lambda_{1}}{ \lambda_{d}}\right)\frac{1}{t}.\] **Bounding (B)**: \[\left\|\sum_{r=2}^{t+1}\left(A-\frac{\eta[t]}{N}K\right)\cdots\left(A- \frac{\eta[r]}{N}K\right)C[r-2]\right\|_{2}\] \[\leq\sum_{r=2}^{t+1}\left\|\left(A-\frac{\eta[t]}{N}K\right)\cdots \left(A-\frac{\eta[r]}{N}K\right)\right\|_{2}\left\|C[r-2]\right\|_{2}\] \[\leq\exp\left(\frac{\lambda_{1}}{\lambda_{d}}\right)\sum_{r=2}^{ t+1}\frac{r-1}{t}\frac{1}{\left\|K\right\|_{2}(r-1)}\frac{1}{N}\sum_{j=1}^{N} \left\|H_{j}^{\top}H_{j}\right\|_{2}*\] \[\max_{j}\left\|w_{j}[r-2]-\bar{z}[r-2]\right\|_{2}\] \[\leq\frac{1}{t}\exp\left(\frac{\lambda_{1}}{\lambda_{d}}\right) \sum_{r=2}^{t+1}\max_{j}\left\|w_{j}[r-2]-\bar{z}[r-2]\right\|_{2}\] \[\leq\frac{1}{t}\exp\left(\frac{\lambda_{1}}{\lambda_{d}}\right) \sum_{r=2}^{t+1}\max_{j}\left\|\widetilde{w}_{j}[r-2]-\bar{z}[r-2]\right\|_{2},\] where the last inequality holds from the non-expansion property of projection. Suppose that \(t\geq t_{0}\). We have \[\frac{1}{t}\exp\left(\frac{\lambda_{1}}{\lambda_{d}}\right)\sum_{r =2}^{t+1}\max_{j}\left\|\widetilde{w}_{j}[r-2]-\bar{z}[r-2]\right\|_{2}\] \[\leq\frac{\exp\left(\lambda_{1}/\lambda_{d}\right)4M^{2}Lt_{0}^{2 }}{\left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B}\left\|K\right\|_{2}}\frac{1} {t}\] \[\qquad+\frac{\exp\left(\lambda_{1}/\lambda_{d}\right)16M^{2}L}{ \left(\min_{i\in[M]}\beta_{i}\right)^{2D^{*}B}\left\|K\right\|_{2}(1-b)}\frac {\log(t+1)}{t}.\] **Bounding (C), Handling noises.** We will use McDiarmid's inequality to derive high probability bound on term (C). Let's perturb the observation noise of agents at time \(r^{\prime}-2\). 
It is easy to see that the difference on each of the coordinate is upper bounded by \[\exp\left(\lambda_{1}/\lambda_{d}\right)\frac{2B_{0}}{\left\|K\right\|_{2}} \frac{1}{t}.\] By McDiarmid's inequality, we obtain that with probability at most \(\delta/d\), \[\left[\sum_{r=2}^{t+1}\left(A-\frac{\eta[t]}{N}K\right)\cdots \left(A-\frac{\eta[r]}{N}K\right)W[r-2]\right]^{i}\] \[\geq\exp\left(\lambda_{1}/\lambda_{d}\right)\frac{2B_{0}}{\left\| K\right\|_{2}}\sqrt{\frac{\log d/\delta}{2t}}.\] Therefore, we conclude that with probability at least \(1-\delta\), \[\left\|\sum_{r=2}^{t+1}\left(A-\frac{\eta[t]}{N}K\right)\cdots \left(A-\frac{\eta[r]}{N}K\right)W[r-2]\right\|_{2}\] \[\leq\sqrt{\frac{d}{2t}\log(d/\delta)}\exp\left(\lambda_{1}/ \lambda_{d}\right)\frac{2B_{0}}{\left\|K\right\|_{2}}.\] Combining the bounds on terms (A), (B), and (C), and (b), we conclude the theorem.
2306.11170
A Framework for Fast and Stable Representations of Multiparameter Persistent Homology Decompositions
Topological data analysis (TDA) is an area of data science that focuses on using invariants from algebraic topology to provide multiscale shape descriptors for geometric data sets such as point clouds. One of the most important such descriptors is {\em persistent homology}, which encodes the change in shape as a filtration parameter changes; a typical parameter is the feature scale. For many data sets, it is useful to simultaneously vary multiple filtration parameters, for example feature scale and density. While the theoretical properties of single parameter persistent homology are well understood, less is known about the multiparameter case. In particular, a central question is the problem of representing multiparameter persistent homology by elements of a vector space for integration with standard machine learning algorithms. Existing approaches to this problem either ignore most of the multiparameter information to reduce to the one-parameter case or are heuristic and potentially unstable in the face of noise. In this article, we introduce a new general representation framework that leverages recent results on {\em decompositions} of multiparameter persistent homology. This framework is rich in information, fast to compute, and encompasses previous approaches. Moreover, we establish theoretical stability guarantees under this framework as well as efficient algorithms for practical computation, making this framework an applicable and versatile tool for analyzing geometric and point cloud data. We validate our stability results and algorithms with numerical experiments that demonstrate statistical convergence, prediction accuracy, and fast running times on several real data sets.
David Loiseaux, Mathieu Carrière, Andrew J. Blumberg
2023-06-19T21:28:53Z
http://arxiv.org/abs/2306.11170v1
A Framework for Fast and Stable Representations of Multiparameter Persistent Homology Decompositions ###### Abstract Topological data analysis (TDA) is an area of data science that focuses on using invariants from algebraic topology to provide multiscale shape descriptors for geometric data sets such as point clouds. One of the most important such descriptors is _persistent homology_, which encodes the change in shape as a filtration parameter changes; a typical parameter is the feature scale. For many data sets, it is useful to simultaneously vary multiple filtration parameters, for example feature scale and density. While the theoretical properties of single parameter persistent homology are well understood, less is known about the multiparameter case. In particular, a central question is the problem of representing multiparameter persistent homology by elements of a vector space for integration with standard machine learning algorithms. Existing approaches to this problem either ignore most of the multiparameter information to reduce to the one-parameter case or are heuristic and potentially unstable in the face of noise. In this article, we introduce a new general representation framework that leverages recent results on _decompositions_ of multiparameter persistent homology. This framework is rich in information, fast to compute, and encompasses previous approaches. Moreover, we establish theoretical stability guarantees under this framework as well as efficient algorithms for practical computation, making this framework an applicable and versatile tool for analyzing geometric and point cloud data. We validate our stability results and algorithms with numerical experiments that demonstrate statistical convergence, prediction accuracy, and fast running times on several real data sets. ## 1 Introduction Topological Data Analysis (TDA) [1] is a methodology for analyzing data sets using multiscale shape descriptors coming from algebraic topology. There has been intense interest in the field in the last decade, since topological features promise to allow practitioners to compute and encode information that classical approaches do not capture. Moreover, TDA rests on solid theoretical grounds, with guarantees accompanying many of its methods and descriptors. TDA has proved useful in a wide variety of application areas, including computer graphics [14, 15], computational biology [16], and material science [2], STR\({}^{+}\)17], among many others. Footnote 1: The idea of TDA is to use the notion of _transient homology_, which is a natural choice for the definition of _transient homology_. The main tool of TDA is _persistent homology_. In its most standard form, one is given a finite metric space \(X\) (e.g., a finite set of points and their pairwise distances) and a continuous function \(f:X\to\mathbb{R}\). This function usually represents a parameter of interest (such as, e.g., scale or density for point clouds, marker genes for single-cell data, etc), and the goal of persistent homology is to characterize the topological variations of this function on the data at all possible scales. Of course, the idea of considering multiscale representations of geometric data is not new [14, 15, 16]; the contribution of persistent homology is to obtain a novel and theoretically tractable multiscale shape descriptor. 
More formally, persistent homology is achieved by computing the so-called _persistence barcode_ of \(f\), which is obtained by looking at all sublevel sets of the form \(\{f^{-1}((-\infty,\alpha])\}_{\alpha\in\mathbb{R}}\) also called _filtration induced by \(f\)_, and by computing a _decomposition_ of this filtration, that is, by recording the appearances and disappearances of topological features (connected components, loops, enclosed spheres, etc) in these sets. When such a feature appears (resp. disappears), e.g., in a sublevel set \(f^{-1}((-\infty,\alpha_{b}])\), we call the corresponding threshold \(\alpha_{b}\) (resp. \(\alpha_{d}\)) the _birth time_ (resp. _death time_) of the topological feature, and we summarize this information in a set of intervals, or bars, called the persistence barcode \(D(f):=\{(\alpha_{b},\alpha_{d})\}_{\alpha\in A}\subset\mathbb{R}\times\mathbb{R} \cup\{\infty\}\). Moreover, the bar length \(\alpha_{d}-\alpha_{b}\) often serves as a proxy for the statistical significance of the corresponding feature. However, an inherent limitation of the formulation of persistent homology is that it can handle only a single filtration parameter \(f\). However, in practice it is common that one has to deal with multiple parameters. This translates into multiple filtration functions: a standard example is when one aims at obtaining meaningful topological representation of a noisy point cloud. In this case, both feature scale and density functions are necessary (see Appendix A). An extension of persistent homology to several filtration functions is called _multiparameter_ persistent homology [1, 1], and studies the topological variations of a continuous _multiparameter_ function \(f:X\to\mathbb{R}^{n}\) with \(n\in\mathbb{N}^{*}\). This setting is notoriously difficult to analyze theoretically as there is no result ensuring the existence of an analogue of persistence barcodes, i.e., a decomposition into subsets of \(\mathbb{R}^{n}\), each representing the lifespan of a topological feature. Still, it remains possible to define weaker topological invariants in this setting. The most common one is the so-called _rank invariant_ (as well as its variations, such as the generalized rank invariant [13], and its decompositions, such as the signed barcodes [1]), which describes how the topological features associated to any pair of sublevel sets \(\{x\in X:f(x)\leq\alpha\}\) and \(\{x\in X:f(x)\leq\beta\}\) such that \(\alpha\leq\beta\) (w.r.t. the partial order in \(\mathbb{R}^{n}\)), are connected. The rank invariant is a construction in abstract algebra, and so the task of finding appropriate _representations_ of this invariant, i.e., embeddings into Hilbert spaces, is critical. Hence, a number of such representations have been defined, which first approximate the rank invariant by computing persistence barcodes from several linear combinations of filtrations (often called the _fibered barcode_), and then aggregate known single-parameter representations for them [1, 2, 3]. This procedure has also been generalized recently [14]. However, the rank invariant, and its associated representations, are known to be much less informative than decompositions (when they exist): many functions have different decompositions yet the same rank invariants. Therefore, the aforementioned representations can encode only limited multiparameter topological information. 
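As a concrete, hands-on illustration of the single-parameter persistence barcode described above, here is a minimal sketch using the Gudhi library (which also underlies the implementation discussed in Section 4); the point cloud and all parameter values are arbitrary choices made purely for illustration:

```python
import numpy as np
import gudhi

# Sample a noisy circle and build a Vietoris-Rips filtration on it.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
points = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(200, 2))

rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
simplex_tree = rips.create_simplex_tree(max_dimension=2)
barcode = simplex_tree.persistence()          # list of (dimension, (birth, death))

# The circle shows up as one long bar in homology dimension 1.
h1 = [(b, d) for dim, (b, d) in barcode if dim == 1]
print(sorted(h1, key=lambda bd: bd[1] - bd[0], reverse=True)[:3])
```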
Instead, in this work, we focus on _candidate decompositions_ of the function, in order to create descriptors that are strictly more powerful than the rank invariant. Indeed, while there is no general decomposition theorem, there is recent work that constructs candidate decompositions in terms of simple pieces [1, 2, 3] that always exist but do not necessarily suffice to reconstruct all of the multiparameter information. Nonetheless, they are strictly more informative than the rank invariant under mild conditions, are stable, and approximate the true decomposition when it exists1. For instance, in Figure 2, we present a bifiltration of a noisy point cloud with scale and density **(left)**, and a corresponding candidate decomposition comprised of subsets of \(\mathbb{R}^{2}\), each representing a topological feature **(middle)**. For instance, there is a large green subset in the decomposition that represents the circle formed by the points that are not outliers (also highlighted in green in the bifiltration). Footnote 1: Note that although multiparameter persistent homology can always be decomposed as a sum of indecomposable pieces (see [1, Theorem 4.2] and [1]), these decompositions are prohibitively difficult to interpret and work with. Unfortunately, while more informative, candidate decompositions suffer from the same problem than the rank invariant; they also need appropriate representations in order to be processed by standard Figure 1: Common pipelines for the use of multiparameter persistent homology in data science—our work provides new contributions to the arrow highlighted in red. methods. In this work, we bridge this gap by providing new representations designed for candidate decompositions. See Figure 1 for a summarizing figure. Contributions.Our contributions in this work are listed below: * We provide a general framework that parametrizes representations of multiparameter persistent homology decompositions (Definition 1) and which encompasses previous approaches in the literature. These representations take the form of a parametrized family of continuous functions on \(\mathbb{R}^{n}\) that can be binned into images for visualization and data science. * We identify parameters in this framework that result in representations that have stability guarantees while still encoding more information than the rank invariant (see Theorem 1). * We illustrate the performance of our framework with numerical experiments: (1) We demonstrate the practical consequences of the stability theorem by measuring the statistical convergence of our representations. (2) We achieve the best performance with the lowest runtime on several classification tasks on public data sets (see Sections 4.1 and 4.3). Related work.Closely related to our method is the recent contribution [3], which also proposes a representation for decompositions. However, their approach, while being efficient in practice, is a heuristic with no corresponding mathematical guarantees. In particular, it is known to be unstable: similar decompositions can lead to very different representations, as shown in Appendix B. Our approach can be understood as a subsequent generalization of the work of [3], with new mathematical guarantees that allow to derive, e.g., statistical rates of convergence. Outline.Our work is organized as follows. In Section 2, we recall the basics of multiparameter persistent homology. Next, in Section 3 we present our general framework and state our associated stability result. 
Finally, we showcase the numerical performance of our representations in Section 4, and we conclude in Section 5.

## 2 Background

In this section, we briefly recall the basics of single and multiparameter persistent homology, and refer the reader to Appendix C and [1, 2] for a more complete treatment.

Figure 2: **(left)** Bi-filtration of a noisy point cloud induced by both feature scale (using unions of balls with increasing radii) and (co)density. The cycle highlighted in the green zone can be detected as a large subset in the corresponding candidate decomposition computed by the MMA method [1] (middle), and in our representation of it (right).

Persistent homology. The basic building block of persistent homology is a _filtered topological space_ \(X\), by which we mean a topological space \(X\) together with a function \(f\colon X\to\mathbb{R}\) (for instance, in Figure 5, \(X=\mathbb{R}^{2}\) and \(f=f_{P}\)). Then, given \(\alpha>0\), we call \(F(\alpha):=f^{-1}((-\infty,\alpha])\subseteq X\) the _sublevel set of \(f\) at level \(\alpha\)_. Given levels \(\alpha_{1}\leq\dots\leq\alpha_{n}\), the corresponding sublevel sets are nested w.r.t. inclusion, i.e., one has \(F(\alpha_{1})\subseteq F(\alpha_{2})\subseteq\dots\subseteq F(\alpha_{i})\subseteq\dots\subseteq F(\alpha_{n})\). This system is an example of _filtration_ of \(X\), where a filtration is generally defined as a sequence of nested subspaces \(X_{1}\subseteq\dots\subseteq X_{i}\subseteq\dots\subseteq X\). Then, the core idea of persistent homology is to apply the \(k\)th _homology functor_ \(H_{k}\) on each \(F(\alpha_{i})\). We do not define the homology functor explicitly here, but simply recall that each \(H_{k}(F(\alpha_{i}))\) is a vector space, whose basis elements represent the \(k\)th dimensional topological features of \(F(\alpha_{i})\) (connected components for \(k=0\), loops for \(k=1\), spheres for \(k=2\), etc). Moreover, the inclusions \(F(\alpha_{i})\subseteq F(\alpha_{i+1})\) translate into linear maps \(H_{k}(F(\alpha_{i}))\to H_{k}(F(\alpha_{i+1}))\), which connect the features of \(F(\alpha_{i})\) and \(F(\alpha_{i+1})\) together. This allows one to keep track of the topological features in the filtration, and to record their levels, often called times, of appearance and disappearance. More formally, such a sequence of vector spaces connected with linear maps \(\mathbb{M}=H_{*}(F(\alpha_{1}))\rightarrow\cdots\to H_{*}(F(\alpha_{n}))\) is called a _persistence module_, and the standard decomposition theorem [16, Theorem 2.8] states that this module can always be decomposed as \(\mathbb{M}=\oplus_{i=1}^{m}\mathbb{I}[\alpha_{b_{i}},\alpha_{d_{i}}]\), where \(\mathbb{I}[\alpha_{b_{i}},\alpha_{d_{i}}]\) stands for a module of dimension 1 (i.e., that represents a single topological feature) between \(\alpha_{b_{i}}\) and \(\alpha_{d_{i}}\), and dimension 0 (i.e., that represents no feature) elsewhere. It is thus convenient to summarize such a module with its _persistence barcode_ \(D(\mathbb{M})=\{[\alpha_{b_{i}},\alpha_{d_{i}}]\}_{1\leq i\leq m}\). Note that in practice, one is only given a sampling of the topological space \(X\), which is usually unknown. In that case, persistence barcodes are computed using combinatorial models of \(X\) computed from the data, called _simplicial complexes_. See Appendix C.

Multiparameter persistent homology. The persistence modules defined above extend straightforwardly when there are multiple filtration functions.
An \(n\)-filtration, or multifiltration, induced by a function \(f:X\rightarrow\mathbb{R}^{n}\), is the family of sublevel sets \(F=\{F(\alpha)\}_{\alpha\in\mathbb{R}^{n}}\), where \(F(\alpha):=\{x\in X:f(x)\leq\alpha\}\) and \(\leq\) denotes the partial order of \(\mathbb{R}^{n}\). Again, applying the homology functor \(H_{k}\) on the multifiltration \(F\) induces a _multiparameter persistence module_\(\mathbb{M}\). However, contrary to the single-parameter case, the algebraic structure of such a module is very intricate, and there is no general decomposition into modules of dimension at most 1, and thus no analogue of the persistence barcode. Instead, the _rank invariant_ has been introduced as a weaker invariant: it is defined, for a module \(\mathbb{M}\), as the function \(\mathrm{RI}:(\alpha,\beta)\mapsto\mathrm{rank}(\mathbb{M}(\alpha)\rightarrow \mathbb{M}(\beta))\) for any \(\alpha\leq\beta\), but is also known to miss a lot of structural properties of \(\mathbb{M}\). To remedy this, several methods have been developed to compute _candidate decompositions_ for \(\mathbb{M}\)[1, 16, 17], where a candidate decomposition is a module \(\bar{\mathbb{M}}\) that can be decomposed as \(\bar{\mathbb{M}}\simeq\oplus_{i=1}^{m}M_{i}\), where each \(M_{i}\) is an _interval module_, i.e., its dimension is at most 1, and its support \(\mathrm{supp}\left(M_{i}\right):=\{\alpha\in\mathbb{R}^{n}:\dim(M_{i}(\alpha))=1\}\) is an interval of \(\mathbb{R}^{n}\) (see Appendix D). In particular, when \(\mathbb{M}\) does decompose into intervals, candidate decompositions must agree with the true decomposition. One also often asks candidate decompositions to preserve the rank invariant. Distances.Finally, multiparameter persistence modules can be compared with two standard distances: the _interleaving_ and _bottleneck_ (or \(\ell^{\infty}\)) distances. Their explicit definitions are technical and not necessary for our main exposition, so we refer the reader to, e.g., [1, Sections 6.1, 6.4] and Appendix D for more details. The _stability theorem_[15, Theorem 5.3] states that multiparameter persistence modules are stable: \(d_{1}(\mathbb{M},\mathbb{M}^{\prime})\leq\left\|f-f^{\prime}\right\|_{\infty},\) where \(f\) and \(f^{\prime}\) are continuous multiparameter functions associated to \(\mathbb{M}\) and \(\mathbb{M}^{\prime}\) respectively. ## 3 T-CDR: a template for representations of candidate decompositions Even though candidate decompositions of multiparameter persistence modules are known to encode useful data information, their algebraic definitions make them not suitable for subsequent data science and machine learning purposes. Hence, in this section, we introduce the Template Candidate Decomposition Representation (T-CDR): a general framework and template system for representations of candidate decompositions, i.e., maps defined on the space of candidate decompositions and taking values in an (implicit or explicit) Hilbert space. ### T-CDR definition Notations.In this article, by a slight abuse of notation, we will make no difference in the notations between an interval module and its support, and we will denote the restriction of an interval support \(M\) to a given line \(\ell\) as \(M\big{|}_{\ell}\). **Definition 1**.: Let \(\mathbb{M}=\oplus_{i=1}^{m}M_{i}\) be a candidate decomposition, and let \(\mathcal{M}\) be the space of interval modules. 
The _Template Candidate Decomposition Representation_ (T-CDR) of \(\mathbb{M}\) is: \[V_{\mathrm{op},w,\phi}(\mathbb{M})=\mathrm{op}(\{w(M_{i})\cdot\phi(M_{i})\}_{i= 1}^{m}), \tag{1}\] where \(\mathrm{op}\) is a permutation invariant operation (sum, max, min, mean, etc), \(w:\mathcal{M}\rightarrow\mathbb{R}\) is a weight function, and \(\phi:\mathcal{M}\rightarrow\mathcal{H}\) sends any interval module to a vector in a Hilbert space \(\mathcal{H}\). The general definition of T-CDR is inspired from a similar framework that was introduced for single-parameter persistence with the automatic representation method _Perslay_[1]. Relation to previous work.Interestingly, whenever applied on candidate decompositions that preserve the rank invariant, specific choices of \(\mathrm{op}\), \(w\) and \(\phi\) reproduce previous representations: * Using \(w:M_{i}\mapsto 1\), \(\phi:M_{i}\mapsto\left\{\begin{array}{ll}\mathbb{R}^{n}&\rightarrow\mathbb{ R}\\ x&\mapsto\Lambda(x,M_{i}\big{|}_{\ell_{x}})\end{array}\right.\) and \(\mathrm{op}=k\)th maximum, where \(l_{x}\) is the diagonal line crossing \(x\), and \(\Lambda(\cdot,\ell)\) denotes the tent function associated to any segment \(\ell\subset\mathbb{R}^{n}\), induces the \(k\)th multiparameter persistence landscape [20]. * Using \(w:M_{i}\mapsto 1\), \(\phi:M_{i}\mapsto\left\{\begin{array}{ll}\mathbb{R}^{n}\times\mathbb{R}^{n }&\rightarrow\mathbb{R}^{d}\\ p,q&\mapsto w^{\prime}(M_{i}\cap[p,q])\cdot\phi^{\prime}(M_{i}\cap[p,q])\\ \end{array}\right.\) and \(\mathrm{op}=\mathrm{op}^{\prime}\), where \(\mathrm{op}^{\prime}\), \(w^{\prime}\) and \(\phi^{\prime}\) are the parameters of any persistence diagram representation from Perslay, induces the multiparameter persistence kernel [1]. * Using \(w:M_{i}\mapsto\mathrm{vol}(M_{i})\), \(\phi:M_{i}\mapsto\left\{\begin{array}{ll}\mathbb{R}^{n}&\rightarrow\mathbb{ R}\\ x&\mapsto\exp(-\min_{\ell\in L}d(x,M_{i}\big{|}_{\ell})^{2}/\sigma^{2})\end{array}\right.\) and \(\mathrm{op}=\sum\), where \(L\) is a set of (pre-defined) diagonal lines, induces the multiparameter persistence image [1]. Recall that the first two approaches are built from fibered barcodes and rank invariants, and that it is easy to find persistence modules that are different yet share the same rank invariant (see [20, Figure 3]. On the other hand, the third approach uses more information about the candidate decomposition, but is known to be unstable (see Appendix B). Hence, in the next section, we focus on specific choices for the T-CDR parameters that induce stable yet informative representations. ### Metric properties In this section, we study specific parameters for T-CDR (see Definition 1) that induce representations with associated robustness properties. We call this subset of representations _Stable Candidate Decomposition Representations_ (S-CDR), and define them below. **Definition 2**.: The S-CDR parameters are: 1. the weight function \(w:M\mapsto\sup\{\varepsilon>0:\exists y\in\mathbb{R}^{n}\text{ s.t. }\ell_{y, \varepsilon}\subset\mathrm{supp}\left(M\right)\}\),where \(\ell_{y,\varepsilon}\) is the segment between \(y-\varepsilon\cdot[1,\ldots,1]\) and \(y+\varepsilon\cdot[1,\ldots,1]\), 2. 
the individual interval representations \(\phi_{\delta}(M):\mathbb{R}^{n}\rightarrow\mathbb{R}\): (a) \(\phi_{\delta}(M)(x)=\frac{1}{\delta}w(\mathrm{supp}\left(M\right)\cap R_{x, \boldsymbol{\delta}})\), (b) \(\phi_{\delta}(M)(x)=\frac{1}{(2\delta)^{n}}\mathrm{vol}\left(\mathrm{supp} \left(M\right)\cap R_{x,\boldsymbol{\delta}}\right)\), (c) \(\phi_{\delta}(M)(x)=\frac{1}{(2\delta)^{n}}\sup_{x^{\prime},\boldsymbol{\delta}^ {\prime}}\left\{\mathrm{vol}(R_{x^{\prime},\boldsymbol{\delta}^{\prime}}):R_{x^ {\prime},\boldsymbol{\delta}^{\prime}}\subseteq\mathrm{supp}\left(M\right) \cap R_{x,\boldsymbol{\delta}}\right\}\), where \(R_{x,\boldsymbol{\delta}}\) is the hypersquare \(\{y\in\mathbb{R}^{n}:x-\boldsymbol{\delta}\leq y\leq x+\boldsymbol{\delta} \}\subseteq\mathbb{R}^{n}\), \(\boldsymbol{\delta}:=\delta\cdot[1,\ldots,1]\in\mathbb{R}^{n}\) for any \(\delta>0\), and \(\mathrm{vol}\) denotes the volume of a set in \(\mathbb{R}^{n}\). 3. the permutation invariant operators \(\mathrm{op}=\sum\) and \(\mathrm{op}=\sup\). In other words, our weight function is the length of the largest diagonal segment one can fit inside \(\mathrm{supp}\left(M\right)\), and the interval representations (a), (b) and (c) are the largest diagonal length, volume, and largest hypersquare volume one can fit locally inside \(\mathrm{supp}\left(M\right)\cap R_{x,\boldsymbol{\delta}}\) respectively. Equipped with these S-CDR parameters, we can now define the two following S-CDR, that can be applied on any candidate decomposition \(\mathbb{M}=\oplus_{i=1}^{m}M_{i}\): \[V_{p,\delta}(\mathbb{M}):=\sum_{i=1}^{m}\frac{w(M_{i})^{p}}{\sum_{j=1}^{m}w(M_{ j})^{p}}\phi_{\delta}(M_{i}),\quad\text{(2)}\qquad\qquad\qquad V_{\infty,\delta}( \mathbb{M}):=\sup_{1\leq i\leq m}\phi_{\delta}(M_{i}). \tag{3}\] These S-CDR parameters allow for some trade-off between computational cost and the amount of information that is kept: (a) and (c) are very easy to compute, but (b) encodes more information about interval shapes. See Figure 2 (right) for visualizations. Stability.The main motivation for introducing S-CDR parameters is that the corresponding S-CDR are stable in the interleaving and bottleneck distances, as stated in the following theorem. **Theorem 1**.: _Let \(\mathbb{M}=\oplus_{i=1}^{m}M_{i}\) and \(\mathbb{M}^{\prime}=\oplus_{j=1}^{m^{\prime}}M_{j}^{\prime}\) be two candidate decompositions. Assume that we have \(\frac{1}{m}\sum_{i}w(M_{i}),\frac{1}{m^{\prime}}\sum_{j}w(M_{j}^{\prime})\geq C\), for some \(C>0\). Then for any \(\delta>0\), one has_ \[\left\|V_{0,\delta}(\mathbb{M})-V_{0,\delta}(\mathbb{M}^{\prime}) \right\|_{\infty} \leq 2(d_{\mathrm{B}}(\mathbb{M},\mathbb{M}^{\prime})\wedge\delta)/\delta, \tag{4}\] \[\left\|V_{1,\delta}(\mathbb{M})-V_{1,\delta}(\mathbb{M}^{\prime} )\right\|_{\infty} \leq\left[4+\frac{2}{C}\right](d_{\mathrm{B}}(\mathbb{M},\mathbb{ M}^{\prime})\wedge\delta)/\delta,\] (5) \[\left\|V_{\infty,\delta}(\mathbb{M})-V_{\infty,\delta}(\mathbb{M} ^{\prime})\right\|_{\infty} \leq(d_{\mathrm{I}}(\mathbb{M},\mathbb{M}^{\prime})\wedge\delta) /\delta, \tag{6}\] _where \(\wedge\) stands for minimum._ A proof of Theorem 1 can be found in Appendix F. These results are the main theoretical contribution in this work, as the only other decomposition-based representation in the literature [1] has no such guarantees. The other representations [1, 2, 3] enjoy similar guarantees than ours, but are computed from the rank invariant and do not exploit the information contained in decompositions. 
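To make Definitions 1 and 2 concrete, the following self-contained sketch (our own illustrative grid discretization, not the implementation of Section 4) encodes interval supports as boolean masks over a regular grid and evaluates the S-CDR of Eq. (2) with the diagonal-length weight \(w\) and the volume-based representation \(\phi_{\delta}\) of item (b), up to discretization effects and boundary clipping:

```python
import numpy as np

def diag_weight(mask, step):
    # w(M): length of the longest diagonal segment (direction [1, 1]) that
    # fits inside supp(M), measured on the grid.
    n0, n1 = mask.shape
    best = 0
    for i in range(n0):
        for j in range(n1):
            run = 0
            while i + run < n0 and j + run < n1 and mask[i + run, j + run]:
                run += 1
            best = max(best, run)
    return best * step

def phi_volume(mask, i, j, half_width, step):
    # phi_delta(M)(x), variant (b): normalized volume of supp(M) inside the
    # hypersquare R_{x, delta} centered at grid cell (i, j).
    box = mask[max(i - half_width, 0): i + half_width + 1,
               max(j - half_width, 0): j + half_width + 1]
    return box.sum() * step ** 2 / (2 * half_width * step) ** 2

def s_cdr(masks, half_width, step, p=1):
    # Eq. (2): weighted sum of the individual interval representations.
    w = np.array([diag_weight(m, step) for m in masks], dtype=float)
    w = w ** p / np.sum(w ** p)
    out = np.zeros(masks[0].shape)
    for wk, m in zip(w, masks):
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] += wk * phi_volume(m, i, j, half_width, step)
    return out

# Toy usage: two rectangular interval supports on a 50 x 50 grid over [0, 1]^2.
grid, step = 50, 1.0 / 50
m1 = np.zeros((grid, grid), dtype=bool); m1[5:40, 5:30] = True
m2 = np.zeros((grid, grid), dtype=bool); m2[25:45, 30:48] = True
image = s_cdr([m1, m2], half_width=3, step=step, p=1)
print(image.shape, float(image.max()))
```

The resulting array plays the role of the vectorized representation that is binned into an image and fed to downstream classifiers in Section 4.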
Theorem 1 shows that S-CDRs bring the best of both worlds: these representations are richer than the rank invariant and stable at the same time. We also provide an additional stability result with a similar, yet more complicated representation in Appendix G, whose upper bound does not involve taking minimum. **Remark 1**.: S-CDR are injective representations: if the support of two interval modules are different, then their corresponding S-CDRs (evaluated on a point that belongs to the support of one interval but not on the support of the other) will differ, provided that \(\delta\) is sufficiently small. ## 4 Numerical Experiments In this section, we illustrate the efficiency of our S-CDRs with numerical experiments. First, we explore the stability theorem in Section 4.1 by studying the convergence rates, both theoretically and empirically, of S-CDRs on various data sets, as well as their running times in Section 4.2. Then, we showcase the efficiency of S-CDRs on classification tasks in Section 4.3. Our code for computing S-CDRs is based on the MMA [1] and Gudhi [1] libraries for computing candidate decompositions2 and is publicly available at [https://github.com/DavidLapous/multipers](https://github.com/DavidLapous/multipers). We also provide pseudo-code in Appendix H. Footnote 2: Several different approaches can be used for computing decompositions [1, 3]. In our experiments, we used MMA [1] because of its simplicity and rapidity. ### Convergence rates In this section, we study the convergence rate of S-CDRs with respect to the number of sampled points, when computed from specific bifiltrations. Similar to the single parameter persistence setting [1], these rates are derived from Theorem 1. Indeed, since concentration inequalities for multiparameter persistence modules have already been described in the literature, these concentration inequalities can transfer to our representations. Note that while Equations (7) and (8), which provide such rates, are stated for the S-CDR in (3), they also hold for the S-CDR in (2). Measure bilitration.Let \(\mu\) be a compactly supported probability measure of \(\mathbb{R}^{D}\), and let \(\mu_{n}\) be the discrete measure associated to a sampling of \(n\) points from \(\mu\). The _measure bilitration_ associated to \(\mu\) and \(\mu_{n}\) is defined as \(\mathcal{F}_{r,t}^{\mu}:=\{x\in\mathbb{R}^{D}:\mu(B(x,r))\leq t\}\), where \(B(x,r)\) denotes the Euclidean ball centered on \(x\) with radius \(r\). Now, let \(\mathbb{M}\) and \(\mathbb{M}_{n}\) be the multiparameter persistence modules obtained from applying the homology functor on top of the measure bilitrations \(\mathcal{F}^{\mu}\) and \(\mathcal{F}^{\mu_{n}}\). These modules are known to enjoy the following stability result [1, Theorem 3.1, Proposition 2.23 (i)]: \(d_{\mathrm{I}}(\mathbb{M},\mathbb{M}_{n})\leq d_{\mathrm{Pr}}(\mu,\mu_{n})\leq \min(d_{W}^{p}(\mu,\mu_{n})^{\frac{1}{2}},d_{W}^{p}(\mu,\mu_{n})^{\frac{p}{p+1 }}),\) where \(d_{W}^{p}\) and \(d_{\mathrm{Pr}}\) stand for the \(p\)-Wasserstein and Prokhorov distances between probability measures. 
Combining these inequalities with Theorem 1, then taking expectations and applying the concentration inequalities of the Wasserstein distance (see [13, Theorem 3.1] and [14, Theorem 1]) lead to: \[\delta\mathbb{E}\left[\left\|V_{\infty,\delta}(\mathbb{M})-V_{\infty,\delta}( \mathbb{M}_{n})\right\|_{\infty}\right]\leq\left(c_{p,q}\mathbb{E}\left(|X|^{q }\right)n^{-\left(\frac{1}{2p\wedge q}\right)\wedge\frac{1}{p}-\frac{1}{q}} \log^{\alpha/q}n\right)^{\frac{p}{p+1}}, \tag{7}\] where \(\vee\) stands for maximum, \(\alpha=2\) if \(2p=q=d\), \(\alpha=1\) if \(d\neq 2p\) and \(q=dp/(d-p)\wedge 2p\) or \(q>d=2p\) and \(\alpha=0\) otherwise, \(c_{p,q}\) is a constant that depends on \(p\) and \(q\), and \(X\) is a random variable of law \(\mu\). Cech complex and density.A limitation of the measure bifiltration is that it can be difficult to compute. Hence, we now focus on another, easier to compute bifiltration. Let \(X\) be a smooth compact \(d\)-submanifold of \(\mathbb{R}^{D}\) (\(d\leq D\)), and \(\mu\) be a measure on \(X\) with density \(f\) with respect to the uniform measure on \(X\). We now define the bifiltration \(\mathcal{F}^{C,f}\) with: \[\mathcal{F}^{C,f}_{u,v}:=\hat{\mathcal{C}}\mathrm{ech}(u)\cap f^{-1}([v,\infty ))=\left\{x\in\mathbb{R}^{D}:d(x,X)\leq u,f(x)\geq v\right\}.\] Moreover, given a set \(X_{n}\) of \(n\) points sampled from \(\mu\), we also consider the approximate bifiltration \(\mathcal{F}^{C,f_{n}}\), where \(f_{n}\colon X\to\mathbb{R}\) is an estimation of \(f\) (such as, e.g., a kernel density estimator). Let \(\mathbb{M}\) and \(\mathbb{M}_{n}\) be the multiparameter persistence modules associated to \(\mathcal{F}^{C,f}\) and \(\mathcal{F}^{C,f_{n}}\). Then, the stability of the interleaving distance [12, Theorem 5.3] ensures: \[d_{1}(\mathbb{M},\mathbb{M}_{n})\leq\left\|f-f_{n}\right\|_{\infty}\lor d_{H} (X,X_{n}),\] where \(d_{H}\) stands for the Hausdorff distance. Moreover, concentration inequalities for the Hausdorff distance and kernel density estimators are also available in the literature (see [1, Theorem 4] and [15, Corollary 15]). More precisely, when the density \(f\) is \(L\)-Lipschitz and bounded from above and from below, i.e., when \(0<f_{\min}\leq f\leq f_{\max}<\infty\), and when \(f_{n}\) is a kernel density estimator of \(f\) with associated kernel \(k\), one has: \[\mathbb{E}(d_{H}(X,X_{n}))\lesssim\left(\frac{\log n}{n}\right)^{\frac{1}{2}} \text{ and }\mathbb{E}(\left\|f-f_{n}\right\|_{\infty})\lesssim Lh_{n}+\sqrt{\frac{ \log(1/h_{n})}{nh_{n}^{d}}},\] where \(h_{n}\) is the (adaptive) bandwidth of the kernel \(k\). In particular, if \(\mu\) is a measure comparable to the uniform measure of a \(d=2\)-manifold, then for any stationary sequence \(h_{n}:=h>0\), and considering a Gaussian kernel \(k\), one has: \[\delta\mathbb{E}\left[\left\|V_{\infty,\delta}(\mathbb{M})-V_{\infty,\delta}( \mathbb{M}_{n})\right\|_{\infty}\right]\lesssim\sqrt{\frac{\log n}{n}}+Lh. \tag{8}\] Empirical convergence rates.Now that we have established the theoretical convergence rates of S-CDRs, we estimate and validate them empirically on data sets. We will first study a synthetic data set and then a real data set of point clouds obtained with immunohistochemistry. We also illustrate how the stability of S-CDRs (stated in Theorem 1) is critical for obtaining such convergence in Appendix B, where we show that our main competitor, the multiparameter persistence image [1], is unstable and thus cannot achieve convergence, both theoretically and numerically. 
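For illustration, here is a simplified sketch of how such a scale/(co)density bifiltration can be assembled in practice (assuming the Gudhi and scikit-learn APIs; the exact input format expected by the MMA/multipers code used in our experiments may differ, and the sample size and bandwidth below are arbitrary):

```python
import numpy as np
import gudhi
from sklearn.neighbors import KernelDensity

# Noisy circle with non-uniform density, in the spirit of the synthetic experiment.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 1000)
pts = np.c_[np.cos(theta), np.sin(theta)] + 0.1 * rng.normal(size=(1000, 2))

# First filtration axis: feature scale, from the Alpha complex.
st = gudhi.AlphaComplex(points=pts).create_simplex_tree()

# Second filtration axis: codensity from a kernel density estimator
# (low values correspond to dense regions).
codensity = -KernelDensity(bandwidth=0.1).fit(pts).score_samples(pts)

# Lower-star rule on the codensity axis: a simplex appears once all of its
# vertices have appeared. NB: we assume simplex-tree vertex i corresponds to
# input point i (true in recent Gudhi versions when points are distinct).
bifiltration = [
    (simplex, (scale, max(codensity[v] for v in simplex)))
    for simplex, scale in st.get_filtration()
]
print(len(bifiltration), bifiltration[0])
```

The candidate decomposition and the S-CDRs are then computed downstream from such per-simplex bifiltration values.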
Annulus with non-uniform density.In this synthetic example, we generate an annulus of 25,000 points in \(\mathbb{R}^{2}\) with a non-uniform density, displayed in Figure 2(a). Then, we compute the bifiltration \(\mathcal{F}^{C,f_{n}}\) corresponding to the Alpha filtration and the sublevel set filtration of a kernel density estimator, with bandwidth parameter \(h=0.1\), on the complete Alpha simplicial complex. Finally, we compute the candidate decompositions and associated S-CDRs of the associated multiparameter module (in homology dimension 1), and their normalized distances to the target representation, using either \(\left\|\cdot\right\|_{2}^{2}\) or \(\left\|\cdot\right\|_{\infty}\). The corresponding distances for various numbers of sample points are displayed in log-log plots in Figure 2(b). One can see that the empirical rate is roughly consistent with the theoretical one (\(-1/2\) for \(\left\|\cdot\right\|_{\infty}\) and \(-1\) for \(\left\|\cdot\right\|_{2}\)), even when \(p\neq\infty\) (in which case our S-CDRs are stable for \(d_{\mathrm{B}}\) but theoretically not for \(d_{\mathrm{I}}\)). Figure 3: Convergence rate of synthetic data set. _Immunohistochemistry data._ In our second experiment, we consider a point cloud representing cells, taken from [12], see Figure 3(a). These cells are given with biological markers, which are typically used to assess, e.g., cell types and functions. In this experiment, we first triangulate the point cloud by placing a \(100\times 100\) grid on top of it. Then, we filter this grid using the sublevel set filtrations of kernel density estimators (with Gaussian kernel and bandwidth \(h=1\)) associated to the CD8 and CD68 biological markers for immune cells. Finally, we compute the associated candidate decompositions of the multiparameter modules in homology dimensions 0 and 1, and we compute and concatenate their corresponding S-CDRs. The convergence rates are displayed in Figure 3(b). Similar to the previous experiment, the theoretical convergence rate of our representations is upper bounded by the one for kernel density estimators with the \(\infty\)-norm, which is of order \(\frac{1}{\sqrt{n}}\) with respect to the number \(n\) of sampled points. Again, the observed and theoretical convergence rates are consistent. Figure 4: Convergence rate of immunohistochemistry data set. ### Running time comparisons In this section, we provide running time comparisons between S-CDRs and the MPI and MPL representations. We provide results in Table 2, where it can be seen that S-CDRs (computed on the pinched annulus and immunohistochemistry data sets defined above) can be computed much faster than the other representations, by a factor of at least 25 (all representations are evaluated on grids of sizes \(50\times 50\) and \(100\times 100\), and we provide the maximum running time over \(p\in\{0,1,\infty\}\)). All computations were done using a Ryzen 4800 laptop CPU, with 16GB of RAM. Interestingly, this sparse and fast implementation based on corners can also be used to improve on the running time of the multiparameter persistence landscapes (MPL), as one can see from Algorithm 4 in Appendix H (which retrieves the persistence barcode of a multiparameter persistence module along a given line; this is enough to compute the MPL) and from the second line of Table 2. ### Classification Finally, we illustrate the efficiency of S-CDRs by using them for classification purposes. We show that they perform comparably to or better than existing topological representations as well as standard baselines on several UCR benchmark data sets and on immunohistochemistry data. Concerning UCR, we work with point clouds obtained from time delay embedding applied on the UCR time series, following the procedure of [1]. In both tasks, every point cloud has a label (corresponding to the type of its cells in the immunohistochemistry data, and to pre-defined labels in the UCR data), and our goal is to check whether we can predict these labels by training classifiers on S-CDRs computed from the same bifiltration as in the previous section. We compare the performances of our S-CDRs (evaluated on a \(50\times 50\) grid) to those of the multiparameter persistence landscape (MPL) [21], kernel (MPK) [13] and images (MPI) [1], as well as their single-parameter counterparts (P-L, P-I and PSS-K) 3. We also compare to some non-topological baselines: we used the standard Ripley function evaluated on 100 evenly spaced samples in \([0,1]\) for immunohistochemistry data, and k-NN classifiers with three different distances for the UCR time series (denoted by B1, B2, B3), as suggested in [1]. All scores on the immunohistochemistry data were computed with 5 folds after cross-validating a few classifiers (random forests, support vector machines and xgboost). For the time series data, our accuracy scores were obtained after also cross-validating the following S-CDR parameters: \(p\in\{0,1\}\), \(\mathrm{op}\in\{\mathrm{sum},\mathrm{mean}\}\), \(\delta\in\{0.01,0.1,0.5,1\}\), \(h\in\{0.1,0.5,1,1.5\}\), with homology dimensions \(0\) and \(1\). All results can be found in Table 1, and the full table is in Appendix I (note that there are no variances for UCR data since pre-defined train/test splits were provided). One can see that S-CDRs almost always outperform topological baselines and are comparable to the standard baselines on the UCR benchmarks. Most notably, S-CDR radically outperforms the standard baseline and competing topological measures on the immunohistochemistry data set. Footnote 3: Note that the sizes of the point clouds in the immunohistochemistry data were too large for MPK and MPI using the code provided in [https://github.com/MathieuCarriere/multipers](https://github.com/MathieuCarriere/multipers), and that all three single-parameter representations had roughly the same performance, so we only display one, denoted as P. ## 5 Conclusion In this article, we study the general question of representing decompositions of multiparameter persistence modules in Topological Data Analysis. We first introduce T-CDR: a general template framework including specific representations (called S-CDR) that are provably stable. Our experiments show that S-CDR is superior to the state of the art. **Limitations.** (1) Our T-CDR parameter selection is currently done through cross-validation, which can be very time consuming and limits the number of parameters to choose from. (2) Our classification experiments were mostly illustrative. In particular, it would be useful to investigate more thoroughly the influence of the T-CDR and S-CDR parameters, as well as the number of filtrations, on the classification scores. (3) In order to generate finite-dimensional vectors, we evaluated T-CDR and S-CDR on finite grids, which limited their discriminative powers when fine grids were too costly to compute.
**Future work.** (1) Since T-CDR is similar to the Perslay framework of single parameter persistence [1] and since, in this work, each of the framework parameter was optimized by a neural network, it is thus natural to investigate whether one can optimize T-CDR parameters in a data-driven way as well, so as to be able to avoid cross-validation. (2) In our numerical applications, we focused on representations computed off of MMA decompositions [1]. In the future, we plan to investigate whether working with other decomposition methods [1, 1] lead to better numerical performance when combined with our representation framework. \begin{table} \begin{tabular}{|c|c c c c|c c c|c c|} \hline Dataset & B1 & B2 & B3 & PSS-K & P-L & P-L & MPK & MPL & MPI & S-CDR \\ \hline DFQE & 62.6 & 62.6 & **77.6** & **74.9** & 69.8 & 70.5 & 76.6 & 70.5 & **71.9** & **71.9** \\ ProQE & 71.7 & **72.5** & 71.7 & 47.5 & 67.4 & 66.3 & **74.6** & 69.6 & 71.7 & 73.8 \\ ProQE & 78.5 & 78.5 & 80.5 & 75.9 & 69.8 & 78.0 & 78.0 & 78.5 & 81.0 & **81.9** \\ ProQE & 89.0 & 79.0 & 78.4 & 78.4 & 72.2 & 72.5 & 78.7 & 78.4 & **81.4** & 79.4 \\ PPTM & 70.7 & **75.6** & 76.4 & 61.4 & 72.2 & 73.7 & 78.5 & 72.6 & 75.6 \\ Trp & **95.3** & **95.5** & 95.0 & 64.7 & 61.1 & 80.7 & 78.6 & 71.9 & **81.2** \\ GP & **91.3** & **91.3** & 90.7 & 90.6 & 84.7 & 80.0 & 88.7 & 94.0 & 90.7 & **96.3** \\ GP & **93.9** & **96.5** & **91.3** & 8.4 & 84.5 & 87.0 & 80.0 & 88.1 & 90.5 & 88.0 \\ GP & **97.5** & 97.5 & **97.7** & - & 88.3 & 87.3 & **96.8** & 88.3 & 95.9 & 95.3 \\ PC & **93.3** & 92.2 & 87.8 & - & 83.4 & 76.7 & 85.6 & 84.4 & 86.7 & **93.4** \\ \hline \multicolumn{8}{|c|}{Ripley} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{MPL} & \multicolumn{1}{c}{S-CDR} \\ \hline immuno & \multicolumn{1}{c}{6.7 \(\pm\) 2.3} & \multicolumn{1}{c}{60.7 \(\pm\) 4.2} & \multicolumn{1}{c}{65.3 \(\pm\) 3.0} & \multicolumn{1}{c}{**91.4 \(\pm\) 1.6**} \\ \hline \end{tabular} \end{table} Table 1: Scores for time series and immunohistochemistry. \begin{table} \begin{tabular}{|c|c|c|} \hline & Animals & Immuno \\ \hline Ours (S-CDR) & 250x \(\pm\) 25ms & 250x \(\pm\) 9.8ms \\ Ours (MPK) & 36.9ms \(\pm\) 0.8ms & 65.9ms \(\pm\) 0.9ms \\ MPI (50) & 6.43 \(\pm\) 25ms & 65.76 \(\pm\) 2.33ms \\ MPI (50) & 17.7 \(\pm\) 39ms & 15.63 \(\pm\) 1.4ms \\ MPI (100) & 13.1s \(\pm\) 125ms & 115.65 \(\pm\) 7.9ms \\ **MPL (100)** & 35.4 \(\pm\) 150ms & 31.3s \(\pm\) 22.3ms \\ \hline \end{tabular} \end{table} Table 2: Running times for S-CDRs and competitors.
2305.08171
Topological heavy fermions in magnetic field
The recently introduced topological heavy fermion model (THFM) provides a means for interpreting the low-energy electronic degrees of freedom of the magic angle twisted bilayer graphene as hybridization amidst highly dispersing topological conduction and weakly dispersing localized heavy fermions. In order to understand the Landau quantization of the ensuing electronic spectrum, a generalization of THFM to include the magnetic field B is desired, but currently missing. Here we provide a systematic derivation of the THFM in B and solve the resulting model to obtain the interacting Hofstadter spectra for single particle charged excitations. While naive minimal substitution within THFM fails to correctly account for the total number of magnetic subbands within the narrow band i.e. its total Chern number, our method -- based on projecting the light and heavy fermions onto the irreducible representations of the magnetic translation group -- reproduces the correct total Chern number. Analytical results presented here offer an intuitive understanding of the nature of the (strongly interacting) Hofstadter bands.
Keshav Singh, Aaron Chew, Jonah Herzog-Arbeitman, B. Andrei Bernevig, Oskar Vafek
2023-05-14T14:44:49Z
http://arxiv.org/abs/2305.08171v3
# Topological heavy fermions in magnetic field ###### Abstract The recently introduced topological heavy fermion model (THFM) provides a means for interpreting the low-energy electronic degrees of freedom of the magic angle twisted bilayer graphene as hybridization amidst highly dispersing topological conduction and weakly dispersing localized heavy fermions. In order to understand the Landau quantization of the ensuing electronic spectrum, a generalization of THFM to include the magnetic field \(B\) is desired, but currently missing. Here we provide a systematic derivation of the THFM in \(B\) and solve the resulting model to obtain the interacting Hofstadter spectra for single particle charged excitations. While naive minimal substitution within THFM fails to correctly account for the total number of magnetic subbands within the narrow band i.e. its total Chern number, our method -based on projecting the light and heavy fermions onto the irreducible representations of the magnetic translation group- reproduces the correct total Chern number. Analytical results presented here offer an intuitive understanding of the nature of the (strongly interacting) Hofstadter bands. ## I Introduction Since the discovery of the remarkable phase diagram of the magic angle twisted bilayer graphene (MATBG) [1; 2], substantial effort [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51] has been devoted to understanding its rich physics. The presence of topological flat bands within this system [52; 53; 54; 55] provides a novel platform to study the interplay between strong electron correlations and band topology. The recently introduced topological heavy fermion model (THFM) for MATBG [56; 57; 58] bridges the contrary signatures of localized [59; 60] and delocalized physics[61; 62] reported via STM and transport measurements[63; 64]. Within THFM the low energy electrons are viewed as a result of the hybridization between heavy \(p_{x}\pm ip_{y}\)-like Wannier states, localized at the AA stacking sites, and topological conduction fermions, denoted by \(f\) and \(c\) respectively in analogy to heavy fermion materials[56]. Among its other features, THFM allows for an intuitive explanation of the charged excitation spectra at integer fillings hitherto obtained via strong coupling expansion of projected models[65; 66]. The large moire period of \(\sim\)13nm in MATBG has revealed a sequence of broken symmetry Chern insulators yielding a plethora of \(\mathbf{B}\) induced phases at lower fluxes[63; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77] and has showcased, for the first time, reentrant correlated Hofstadter states at magnetic fields as low as 31T [78]. Thus it becomes important to better understand the interplay of correlations and band topology in the presence of a perpendicular \(\mathbf{B}\) field. Theoretical studies have previously focused on non-interacting [79; 80; 81] and strong coupling [82; 83; 84] regimes. Although exact, each employed considerable numerical analysis, restricting a deeper physical understanding of the mechanism for Landau quantization. In this manuscript, we show how one can understand the Landau quantization of the strong coupling spectra in terms of hybridization amidst Landau levels (LLs) of \(c\) fermions and hybrid Wannier states of \(f\) fermions. 
We find that only a particular number of \(f\) fermion momentum channels are allowed to hybridize to \(c\) fermion LLs, with coupling strength which decreases with increasing \(\mathbf{B}\) and increasing LL index \(m\). Moreover, through our analysis we can clearly understand the reason why a naive minimal coupling is unable to recover the correct total Chern number of the flat band. In the flat band limit of THFM, our framework allows for an exact solution including the dominant interactions and analytically explains the total Chern number. Although going away from the flat band limit requires numerics, given the simple structure of the our Hamiltonian, we are still able to compute the spectrum to unprecedentedly small fluxes and find it is captured well by the analytical solution in the flat band limit which can be taken all the way to \(\mathbf{B}=0\) as shown in Figs.(1),(8),(9) at \(\nu=0,-1,-2\) respectively. Moreover it provides means to intuitively understand the underlying energetics of the problem as we tune the coupling between the \(c\) and \(f\) modes from 0% to 100% as shown in Fig.(2). The formulas we derive are general for any rational value of \(\frac{\Phi}{\Phi_{0}}\), with \(\Phi\) being the flux through the unit cell and \(\Phi_{0}\) being the flux quantum \(\frac{hc}{c}\), but we focus our analysis on the \(\frac{1}{q}\) flux sequence and low \(\mathbf{B}\) where the results become particularly transparent. Our analysis as well unveils the physical nature of the anomalous low energy mode which is seen to be almost \(\mathbf{B}\)-independent, also observed in previous numerics[82], as the anomalous zero-LL of a massless Dirac particle, a key ingredient of the topological heavy fermion picture of MATBG. Although this work deals directly with THFM, our methods apply more generally. ## II Hamiltonian and the basis states The THFM in momentum space is given by [56] \[\hat{H}_{0}=\sum_{|{\bf k}|<\Lambda_{c}}\sum_{\tau s}\sum_{aa^{ \prime}=1}^{4}H_{aa^{\prime}}^{c,\tau}({\bf k})\tilde{c}_{{\bf k}a\tau s}^{ \dagger}\tilde{c}_{{\bf k}a^{\prime}\tau s}+\] \[\sum_{|{\bf k}|<\Lambda_{c}}\sum_{\tau s}\sum_{a=1}^{4}\sum_{b=1}^ {2}\left(e^{-\frac{1}{2}{\bf k}^{2}\lambda^{2}}H^{cf,v}({\bf k})_{ab}\tilde{c} _{{\bf k}a\tau s}^{\dagger}\tilde{f}_{{\bf k}b\tau s}\mbox{+h.c.}\right). \tag{1}\] Here \(\Lambda_{c}\) is the momentum cutoff for \(c\) fermions while \(f\)s, whose bandwidth is negligibly small, reside in the entire moire Brillouin zone. The tilde indicates the fermions describing the \({\bf B}=0\) Hamiltonian, \(\lambda\approx 0.38L_{m}\) acts as the damping factor for \(c\)-\(f\) hybridization and \(L_{m}\) is the moire period(mBZ). \(\tau=+1(-1)\) represents graphene valley \({\bf K}({\bf K}^{\prime})\) and \(s\) spin \(\uparrow\),\(\downarrow\). The \(c\)-\(c\) and \(c\)-\(f\) couplings are \[H_{aa^{\prime}}^{c,\tau}({\bf k})=\left(\begin{array}{cc}0_{2 \times 2}&v_{*}(\tau k_{x}\sigma_{0}+ik_{y}\sigma_{z})\\ v_{*}(\tau k_{x}\sigma_{0}-ik_{y}\sigma_{z})&M\sigma_{x}\end{array}\right)\] \[H_{ab}^{cf,\tau}({\bf k})=\left(\begin{array}{c}\gamma\sigma_{ 0}+v_{*}^{\prime}(\tau k_{x}\sigma_{x}+k_{y}\sigma_{y})\\ 0_{2\times 2}\end{array}\right), \tag{3}\] where the parameters \(v_{*}\), \(v_{*}^{\prime}\), \(\gamma\) and \(M\) are given in [56]. The interactions effects in THFM are captured via the parameters \(J\) and \(U_{1}\)[56; 57]. 
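Before turning the magnetic field on, it may help to see Eqs.(1)-(3) assembled explicitly. The sketch below builds the \(6\times 6\) single-valley, single-spin Bloch Hamiltonian of the THFM at one momentum \(\mathbf{k}\) and diagonalizes it; the numerical values of \(v_{*}\), \(v^{\prime}_{*}\), \(\gamma\), \(M\) and \(\lambda\) used here are illustrative placeholders, not the fitted parameters of [56].

```python
# Sketch of the B = 0 THFM Bloch Hamiltonian of Eqs.(1)-(3) for one valley
# (tau = +1) and one momentum k, in the basis (c1, c2, c3, c4, f1, f2).
# Parameter values are illustrative placeholders, NOT the values of Ref.[56].
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_cc(kx, ky, v_star, M, tau=+1):
    """4x4 c-c block of Eq.(2)."""
    off = v_star * (tau * kx * s0 + 1j * ky * sz)
    return np.block([[np.zeros((2, 2), complex), off],
                     [off.conj().T, M * sx]])

def h_cf(kx, ky, gamma, vp_star, tau=+1):
    """4x2 c-f block of Eq.(3)."""
    top = gamma * s0 + vp_star * (tau * kx * sx + ky * sy)
    return np.vstack([top, np.zeros((2, 2), complex)])

def thfm_bloch(kx, ky, v_star, vp_star, gamma, M, lam, tau=+1):
    damp = np.exp(-0.5 * (kx**2 + ky**2) * lam**2)   # damping factor of Eq.(1)
    hcf = damp * h_cf(kx, ky, gamma, vp_star, tau)
    return np.block([[h_cc(kx, ky, v_star, M, tau), hcf],
                     [hcf.conj().T, np.zeros((2, 2), complex)]])

# Illustrative numbers (k in units of 1/L_m, energies in meV):
H = thfm_bloch(kx=0.3, ky=0.0, v_star=-430.0, vp_star=160.0,
               gamma=-25.0, M=3.7, lam=0.38)
print(np.round(np.linalg.eigvalsh(H), 2))            # six bands at this k
```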
In order to illustrate our finite \({\bf B}\) solution, we first focus on the charge neutrality point (CNP) wherein for valley polarised (VP) states, the mean-field interaction reads \[V=-\sum_{|{\bf k}|<\Lambda_{c}}\sum_{\tau s}\frac{J}{2}\tau\sum_{a=3,4}\tilde{c}^{\dagger}_{{\bf k}a\tau s}\tilde{c}_{{\bf k}a\tau s}-\sum_{{\bf k}}\sum_{\tau s}\frac{U_{1}}{2}\tau\sum_{b=1,2}\tilde{f}^{\dagger}_{{\bf k}b\tau s}\tilde{f}_{{\bf k}b\tau s}. \tag{4}\] The \({\bf B}=0\) model is formulated in terms of the conduction fermion Bloch states \(\tilde{\Psi}_{{\bf k}a\tau}(\mathbf{r})\) and the heavy fermion Wannier states \(W_{\mathbf{R},b\tau}(\mathbf{r})\), the latter centered at the triangular lattice vector \(\mathbf{R}\), and \(W_{\mathbf{R},b\tau}(\mathbf{r})=W_{\mathbf{0},b\tau}(\mathbf{r}-\mathbf{R})\). The \(c\)-\(c\) and \(c\)-\(f\) couplings in Eqs.(2)-(3) are related to the real space basis as \[H^{c,\tau}_{aa^{\prime}}(\mathbf{k})=\int d^{2}\mathbf{r}\tilde{\Psi}^{*}_{ \mathbf{k}a\tau}(\mathbf{r})H^{\tau}_{BM}(\mathbf{p})\tilde{\Psi}_{\mathbf{k}a^{\prime}\tau}(\mathbf{r}), \tag{5}\] \[\frac{e^{-\frac{1}{2}\mathbf{k}^{2}\lambda^{2}}}{\sqrt{N}}H^{cf,\tau}_{ab}( \mathbf{k})=\int d^{2}\mathbf{r}e^{i\mathbf{k}\cdot\mathbf{r}}\tilde{\Psi}^{* }_{\Gamma a\tau}(\mathbf{r})H^{\tau}_{BM}(\mathbf{p})W_{\mathbf{0},b\tau}( \mathbf{r}), \tag{6}\] where \(N\) is the number of moire unit cells and the well known BM Hamiltonian [85] (also see SM-A), \(H^{\tau}_{BM}(\mathbf{p})\), is invariant under translation by integer multiples of moire lattice vectors \(\mathbf{L}_{1}=L_{m}(\frac{\sqrt{3}}{2},\frac{1}{2})\) and \(\mathbf{L}_{2}=L_{m}(0,1)\). The corresponding primitive moire reciprocal lattice vectors are \(\mathbf{g}_{1}=\frac{4\pi}{\sqrt{3}L_{m}}(1,0)\) and \(\mathbf{g}_{2}=\frac{2\pi}{\sqrt{3}L_{m}}(-1,\sqrt{3})\). In the presence of an out-of-plane magnetic field, employed via the Landau gauge vector potential \(\mathbf{A}=(0,Bx,0)\), \(H^{\tau}_{BM}(\mathbf{p}-\frac{e}{c}\mathbf{A})\) is still invariant under the translation by \(\mathbf{L}_{2}\), but a translation by \(\mathbf{L}_{1}\) needs to be accompanied by a gauge transformation \[f(\mathbf{r})\rightarrow\hat{t}_{\mathbf{L}_{1}}f(\mathbf{r})=\exp\left(i \frac{eB}{\hbar c}L_{1x}y\right)f(\mathbf{r}-\mathbf{L}_{1}). \tag{7}\] Thus, if \(f(\mathbf{r})\) is an eigenstate of \(H^{\tau}_{BM}(\mathbf{p}-\frac{e}{c}\mathbf{A})\), then so is \(\hat{t}_{\mathbf{L}_{1}}f(\mathbf{r})\) with the same eigenvalue. Translations by \(\mathbf{L}_{2}\) are generated by \(\hat{t}_{\mathbf{L}_{2}}f(\mathbf{r})=f(\mathbf{r}-\mathbf{L}_{2})\). Then \(\hat{t}_{\pm\mathbf{L}_{2}}\hat{t}_{\mathbf{L}_{1}}=e^{\mp 2\pi i\frac{\phi}{ \phi_{0}}}\hat{t}_{\mathbf{L}_{1}}\hat{t}_{\pm\mathbf{L}_{2}}\), where \(\phi_{0}=\frac{\hbar c}{e}\) and the flux through the unit cell is \(\phi=BL_{1x}L_{m}\). For \(\phi/\phi_{0}=p/q\), with \(p\) and \(q\) relatively prime integers, the magnetic translation operators obey \(\left[\hat{t}^{q}_{\mathbf{L}_{2}},\hat{t}_{\mathbf{L}_{1}}\right]=0\). 
We construct our \(\mathbf{B}\neq 0\) states by using irreps of the magnetic translation group (MTG) and the hybrid Wannier method [82; 86]. Because they are topologically trivial and well localized, a complete basis for heavy fermions can be generated from the two \(\mathbf{B}=0\)\(f\)-fermion Wannier states as \[\eta_{b\tau k_{1}k_{2}}(\mathbf{r})=\frac{1}{\sqrt{\mathcal{N}}}\sum_{s,n\in \mathbb{Z}}e^{2\pi i(sk_{1}+nk_{2})}\hat{t}^{s}_{\mathbf{L}_{1}}\hat{t}^{n}_{ \mathbf{L}_{2}}W_{\mathbf{0},b\tau}(\mathbf{r}). \tag{8}\] Here \(k_{1,2}\in[0,1)\) (see SM-C) and the normalization factor \(\mathcal{N}=s_{tot}n_{tot}\), where \(s_{tot}\) and \(n_{tot}\) denote the total count of \(s\) and \(n\) (see SM-C). The \(c\)-fermion basis can in turn be expanded as the product of Landau levels (LLs) and the \(\mathbf{B}=0\) Bloch states at the \(\Gamma\)-point, a result obtained when the \(\mathbf{k}\cdot\mathbf{p}\) method is extended to \(\mathbf{B}\neq 0\)[87]. In order to use the same quantum numbers as for \(f\)s, we also project the \(c\)'s LL wavefunctions \(\Phi_{m}\) onto the (orthonormal) irreps of MTG \[\chi_{k_{1}k_{2}m}(\mathbf{r})=\frac{1}{\sqrt{\ell L_{m}}}\frac{1}{\sqrt{ \mathcal{N}}}\sum_{s\in\mathbb{Z}}e^{2\pi isk_{1}}\hat{t}^{s}_{\mathbf{L}_{1}} \Phi_{m}(\mathbf{r},k_{2}\mathbf{g}_{2}). \tag{9}\] Here \(k_{1}\in[0,1)\), but unlike in Eq.(8), \(k_{2}\in[0,\frac{p}{q})\) (see SM-C) and \[\Phi_{j}(\mathbf{r},k_{2}\mathbf{g}_{2})=e^{2\pi ik_{2}\frac{y}{L_{m}}}\varphi_ {j}\left(x-k_{2}\frac{2\pi\ell^{2}}{L_{m}}\right), \tag{10}\] where the harmonic oscillator (h.o.) wavefunctions \(\varphi_{m}(x)=e^{-x^{2}/2\ell^{2}}H_{m}(x/\ell)/\pi^{\frac{1}{4}}\sqrt{2^{m}m!}\) with Hermite polynomials \(H_{m}\), and \(\ell^{2}=\hbar c/(eB)\). The \(k_{2}\) induced offset in the h.o. wavefunctions is thus \(qk_{2}L_{1x}/p\). Note that \(\Phi_{m}\) is an eigenstate of \(\hat{t}_{\pm\mathbf{L}_{2}}\), and \[\hat{t}_{\pm\mathbf{L}_{2}}\chi_{k_{1}k_{2}m}(\mathbf{r})=e^{\mp 2\pi ik_{2}} \chi_{[k_{1}\mp\frac{e}{q}]|k_{2}m}(\mathbf{r}), \tag{11}\] where \([b]_{a}\) represents \(b\) modulo \(a\) with \(a>0\). Since the \(q\mathbf{L}_{2}\) translations break up the \(k_{2}\) domain into units of width \(\frac{1}{q}\), from here on we use the label \(k=(k_{1},k_{2})\) with \(k_{1}\in[0,1)\) and we fix \(k_{2}\in[0,\frac{1}{q})\). The original \(k_{2}\) domains are then accessed using labels \(r^{\prime}\in\{0,\dots q-1\}\) and \(r\in\{0,\dots,p-1\}\), denoting the magnetic strip \([\frac{r^{\prime}}{q},\frac{r^{\prime}+1}{q})\) and \([\frac{r}{q},\frac{r+1}{q})\) along \(\mathbf{g}_{2}\) for \(\eta\) and \(\chi\) respectively. The \(\mathbf{B}\neq 0\) Hamiltonian cannot mix states with different \(k\). The relabelled finite field basis is \(\eta_{b\tau k^{\prime}r}(\mathbf{r})\) and \(\chi_{krm}(\mathbf{r})\) and their annihilation operators are \(f_{b\tau k^{\prime}}\) and \(c_{a\pi krm}\). Having assembled the low energy basis at \(\mathbf{B}\neq 0\), we now expand the fields for a given spin as \[\psi_{\tau}(\mathbf{r}) =\sum_{k\in[0,1)\otimes[0,\frac{1}{q})}\left(\sum_{b=1}^{2}\sum_{r^ {\prime}=0}^{q-1}\eta_{b\tau kr^{\prime}}(\mathbf{r})f_{b\tau kr^{\prime}}+\right.\] \[\left.\sum_{a=1}^{4}\sum_{m=0}^{m_{a,\tau}}\sum_{r=0}^{p-1}\Psi_{a \tau}(\mathbf{r})\chi_{krm}(\mathbf{r})c_{a\pi krm}\right), \tag{12}\] where \(\Psi_{a\tau}(\mathbf{r})=\sqrt{NA_{uc}}\tilde{\Psi}_{\Gamma a\tau}(\mathbf{r})\), \(A_{uc}\) denotes the moire unit cell area. 
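As a quick sanity check on the Landau-level basis entering Eqs.(9)-(10), the sketch below (with the flux quantum set to 1, and with the \(1/\sqrt{\ell}\) factor required for unit normalization written out explicitly) verifies the orthonormality of the harmonic oscillator wavefunctions \(\varphi_{m}\) and the geometric relation \(2\pi\ell^{2}/L_{m}=(q/p)\,L_{1x}\) behind the \(k_{2}\)-induced offset. The lattice dimensions used are illustrative only.

```python
# Sanity check on the Landau-level basis (illustrative units, phi_0 = 1).
import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval

def phi(m, x, ell=1.0):
    """Normalized 1D harmonic oscillator wavefunction varphi_m(x)."""
    return (np.exp(-x**2 / (2 * ell**2)) * hermval(x / ell, [0] * m + [1])
            / np.sqrt(np.sqrt(np.pi) * 2.0**m * factorial(m) * ell))

x = np.linspace(-30, 30, 20001)
overlaps = np.trapz(phi(3, x) * phi(5, x), x), np.trapz(phi(4, x) ** 2, x)
print(np.round(overlaps, 10))         # (0.0, 1.0): orthonormal

# k2-induced offset of the guiding center: 2*pi*ell^2 / L_m = (q/p) * L_1x
p, q, L1x, Lm = 1, 7, 1.1, 1.0        # illustrative lattice geometry
B = (p / q) / (L1x * Lm)              # flux per unit cell = p/q (phi_0 = 1)
ell2 = 1.0 / (2 * np.pi * B)          # ell^2 = phi_0 / (2 pi B)
print(2 * np.pi * ell2 / Lm, (q / p) * L1x)   # equal
```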
Anticipating the appearance of anomalous Dirac LLs for the topological \(c\) fermions, we allow for the \(a\) dependence of the upper cutoff on the LL index at each valley \(\tau\), denoted by \(m_{a,\tau}\), with \(m_{1,+1}=m_{2,-1}=m_{max}+1\), \(m_{2,+1}=m_{1,-1}=m_{max}\), \(m_{3,+1}=m_{4,-1}=m_{max}+2\) and \(m_{4,+1}=m_{3,-1}=m_{max}-1\). By construction, \(\Psi_{a\tau}(\mathbf{r})\chi_{krm}(\mathbf{r})\) and \(\eta_{b\tau kr^{\prime}}(\mathbf{r})\) are eigenstates of \(\hat{t}_{\mathbf{L}_{1}}\) and \(\hat{t}_{\mathbf{L}_{2}}^{q}\) with eigenvalues \(e^{-2\pi ik_{1}}\) and \(e^{-2\pi iqk_{2}}\) respectively. This guarantees states with different \(k\) to be orthogonal. Overlaps of states with the same \(k\) but different with matrix elements that are found to be the same as those obtained by the direct minimal substitution in (2) and expanding in LL basis, as is expected from \(\mathbf{k}\cdot\mathbf{p}\)[87]: \[\tilde{h}^{\tau}_{[amr],[a^{\prime}m^{\prime}\vec{r}]}(k)=\delta_{r\vec{r}} \left(\begin{array}{cc}0_{2\times 2}&h^{\tau,c}_{mmr^{\prime}}\\ \sigma_{x}h^{\tau,c}_{mmr}\sigma_{x}&M\delta_{mmr^{\prime}}\sigma_{x}\end{array} \right)_{aa^{\prime}}, \tag{16}\] where the Pauli matrix \(\sigma_{x}\) acts on the \(c\) orbitals and \[h^{+1,c}_{mmr^{\prime}}=i\frac{\sqrt{2}v_{*}}{\ell}\left(\begin{array}{cc}- \sqrt{m^{\prime}}\delta_{m+1,m^{\prime}}&0\\ 0&\sqrt{m}\delta_{m,m^{\prime}+1}\end{array}\right) \tag{17}\] with \(h^{-1,c}_{mmr^{\prime}}=-\sigma_{x}h^{+1,c}_{mmr}\sigma_{x}\). For \(M=0\), we recover the LLs of two massless Dirac particles, with two zero LLs at each valley. Because these modes play a key role in explaining the interacting spectra of THFM in \(\mathbf{B}\), we plot their and \(f\)'s spectrum with the term in Eq.(4) included but turn off the \(c\)-\(f\) coupling by setting the prefactor of \(H^{\tau}_{cf}\) to zero in Fig.(2a). For each spin projection, there are two \(f\)s per moire unit cell at the energy \(\pm\frac{U_{1}}{2}\) at \(\tau=\mp 1\) intersected by the LLs with strong \(\mathbf{B}\) dependence due to \(c\)'s fast \(v_{*}\). The total number of \(f\) states is \(\mathbf{B}\)-independent while each LL contains \(N\frac{\phi}{\phi_{0}}\) states. We expect that \(H^{\tau}_{cf}\) hybridizes these modes within an energy \(\sim\gamma\), but in order to understand how such hybridization leaves behind a band of states whose total number is independent of \(\mathbf{B}\) -because its total Chern number vanishes[88]- and which LLs may decouple from \(f\)s, we need to carefully analyze \(H^{\tau}_{cf}\). As shown below, it is not simply minimally substituted (6). 
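The statement that Eqs.(16)-(17) reproduce the LLs of two massless Dirac particles, with two zero LLs per valley, is easy to verify numerically once the component-dependent cutoffs \(m_{a,\tau}\) are respected. The sketch below does this for \(\tau=+1\) with an illustrative coupling \(\sqrt{2}v_{*}/\ell=1\); one of the two resulting zero modes lives entirely on the third \(c\) component and is precisely the anomalous state \(\theta_{1}\) used later in Eq.(53).

```python
# Sketch: build the tau = +1 c-sector of Eqs.(16)-(17) in a truncated LL basis,
# with per-component cutoffs m_{a,+1} = (m_max+1, m_max, m_max+2, m_max-1),
# and count its zero modes at M = 0.  w stands for sqrt(2) v_* / ell (set to 1).
import numpy as np

def c_sector(m_max, M=0.0, w=1.0):
    cut = [m_max + 1, m_max, m_max + 2, m_max - 1]
    dims = [c + 1 for c in cut]
    offs = np.cumsum([0] + dims)                  # offsets of the 4 blocks
    H = np.zeros((offs[-1], offs[-1]), complex)

    def put(a, m, ap, mp, val):                   # H[(a,m),(ap,mp)] = val (+ h.c.)
        i, j = offs[a - 1] + m, offs[ap - 1] + mp
        H[i, j] = val
        H[j, i] = np.conj(val)

    for m in range(dims[0]):                      # <1,m|H|3,m+1> = -i w sqrt(m+1)
        if m + 1 < dims[2]:
            put(1, m, 3, m + 1, -1j * w * np.sqrt(m + 1))
    for m in range(1, dims[1]):                   # <2,m|H|4,m-1> = +i w sqrt(m)
        put(2, m, 4, m - 1, 1j * w * np.sqrt(m))
    for m in range(min(dims[2], dims[3])):        # mass term M sigma_x on (c3, c4)
        put(3, m, 4, m, M)
    return H

E = np.linalg.eigvalsh(c_sector(m_max=6, M=0.0))
print("zero modes:", np.sum(np.abs(E) < 1e-10))   # -> 2
print(np.round(np.sort(np.abs(E))[:6], 3))        # -> [0, 0, 1, 1, 1, 1]
```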
Instead \[H^{\tau}_{cf} = \sum_{k\in[0,1)\otimes[0,\frac{1}{4})}\sum_{a=1}^{4}\sum_{b=1}^{2 }\sum_{m=0}^{m_{a,\tau}}\sum_{r=0}^{p-1}\sum_{r^{\prime}=0}^{q-1}h^{\tau}_{[ amr],[br^{\prime}]}(k) \tag{18}\] \[c^{\dagger}_{a\tau krm}f_{b\tau kr^{\prime}},\] where, using the magnetic translation symmetry of \(H^{\tau}_{BM}\left(\mathbf{p}-\frac{e}{c}\mathbf{A}\right)\) and the free sum over the \(s\), we find \[h^{\tau}_{[amr],[br^{\prime}]}(k)=\frac{st_{tot}}{\sqrt{\mathcal{ N}}}\sum_{n\in\mathbb{Z}}e^{2\pi i(k_{2}+\frac{r^{\prime}}{q})n}\times\] \[\int d^{2}\mathbf{r}(\hat{t}^{n}_{-\mathbf{L}_{2}}\chi_{krm}( \mathbf{r}))^{*}\Psi^{*}_{a\tau}(\mathbf{r})H^{\tau}_{BM}\left(\mathbf{p}- \frac{e}{c}\mathbf{A}\right)W_{\mathbf{0},b\tau}(\mathbf{r}).\] Because \(\mathbf{A}\) acts on a well-localized function centered at \(\mathbf{r}=0\), its contribution can be neglected at low \(\mathbf{B}\) (we have confirmed this numerically in SM-D.3). The dominant term can be obtained using the inverse Fourier transform of Eq.(6) which gives \(\Psi^{*}_{a\tau}(\mathbf{r})H^{\tau}_{BM}\left(\mathbf{p}\right)W_{0,br}( \mathbf{r})\). Substituting into (II) thus reduces \(h^{\tau}_{[amr],[br^{\prime}]}(k)\) to integrals over a gaussian and shifted 1D h.o. wavefunctions which can be evaluated using results in Ref.[89] (see also SM-D.2). ## III Analysis of the \(\frac{1}{q}\) flux sequence It is particularly revealing to analyze the case \(p=1\), i.e. the \(\phi/\phi_{0}=1/q\) sequence. Then \(r=0\), because, as discussed above, \(r\) ranges from \(0\) to \(p-1\). Making use of (11) in (II) and substituting the explicit expression for \(\chi\) from (II), allows us to perform the sum over \(n\). We find \[h^{\tau}_{[am0],[br^{\prime}]}(k)=\sqrt{\frac{L_{1x}}{\ell}}\sum_{ j\in\mathbb{Z}}e^{-2\pi i(r^{\prime}+jq)k_{1}}\times\] \[\int d^{2}\mathbf{r}\left(\hat{t}^{r^{\prime}+jq}_{\mathbf{L}_{1} }\Phi_{m}(\mathbf{r},k_{2}\mathbf{g}_{2})\right)^{*}H^{cf,\tau}_{ab}\left( \frac{1}{i}\frac{\partial}{\partial\mathbf{r}}\right)\frac{e^{-\frac{r^{2}}{ 2\lambda^{2}}}}{2\pi\lambda^{2}}.\] The above expression can be visualized as an overlap between a 2D localized heavy state with size \(\lambda\) sitting at the origin and a 1D h.o. shifted in the \(x\)-direction with a plane wave phase variation in the \(y\)-direction that depends on the shift (see Fig.(3)). To understand under what choice of \(m,r^{\prime},j\) is this integral significant, note that the h.o. wavefunction is localized in the \(x\) direction about \((r^{\prime}+jq+k_{2}q)\,L_{1x}\), and its width is \(\sim 2\sqrt{2m+1}\sqrt{q}\) unit cells. In addition, the combination \(\frac{r^{\prime}}{q}+j+k_{2}\) controls the period of oscillation in the \(y\)-direction set by \(1/(\frac{r^{\prime}}{q}+j+k_{2})\) times the unit cell size. The integer \(r^{\prime}+jq\) thus determines the unit cell to which the h.o. is shifted, and, because \(k_{2}2\pi\ell^{2}/L_{m}=k_{2}qL_{1x}\), the value of \(k_{2}q\in[0,1)\) fine-tunes the shift within the unit cell. The index \(j\) then determines \(q\)-unit-cell periodic revival of the h.o. also illustrated. For example, if \(r^{\prime}=j=0\) then the h.o. is centered at the unit cell containing the localized heavy state and the period of oscillations in the \(y\)-direction is long compared to the unit cell, encompassing at least \(q\) unit cells. The hybridization with the localized heavy state proportional to \(\gamma\) will then be significant. Since the spatial extent of the h.o. 
state in the \(x\)-direction is \(\sim 2\sqrt{2m+1}\sqrt{q}\) unit cells -which at low \({\bf B}\) is much longer than the localized heavy state even when we consider that it oscillates and has \(m\) nodes- the result of the integration will be approximately given by the value of the h.o. wavefunction at \(-k_{2}qL_{1x}\), up to an overall phase, unless \(m\) is close to \(m_{max}\lesssim q/2\). Figure 3: Schematic representation of the \(cf\) coupling for \(\phi/\phi_{0}=1/q\) in Eq.(20). Each tick represents (the 1D projection of) a moire unit cell, illustrating the overlap between a 2D localized heavy state with size \(\lambda\) sitting at the origin (red) and a Landau level (LL) i.e. a 1D harmonic oscillator (h.o.) shifted in the \(x\)-direction, its wavefunctions sketched in blue, with a plane wave phase variation in the \(y\)-direction (not shown) that depends on the shift. The \(r^{\prime}\) determines the momentum absorbed by the LL as well as the unit cell to which the h.o. is shifted and \(k_{2}q\) fine tunes the shift within the unit cell. The index \(j\) then determines the \(q\)-unit-cell periodic revival of the h.o. The black parabolas mimic a quadratic potential to accompany the h.o. wavefunctions. If we keep \(j=0\) but increase \(r^{\prime}\) to 1 then the h.o. is centered at the unit cell adjacent to the one containing the localized heavy state and the period of oscillations in the \(y\)-direction is still long, between \(q/2\) and \(q\) unit cells. The hybridization with the localized heavy state proportional to \(\gamma\) will still be significant and the result of the integration will still be approximately given by the value of the h.o. wavefunction but now at \(-(1+k_{2}q)L_{1x}\), up to an overall phase. We thus see that for fixed \(j=0\) and any value of \(k_{2}q\), increasing \(r^{\prime}\) past \(\sim\sqrt{2m+1}\sqrt{q}\) results in an exponential suppression of the hybridization integral due to the large off-set. Therefore, for the values of \(r^{\prime}\) past \(q/2\), it is the \(j=-1\) revival copy of the h.o. states which gives the dominant contribution. All other values of \(j\) can be neglected, allowing us to remove the sum over \(j\) in Eq.(20) and replace \[r^{\prime}+jq\rightarrow{\rm sgn}_{+}\left(\frac{q}{2}-r^{\prime}\right){\rm min}[r^{\prime},q-r^{\prime}]\equiv r^{\prime}_{q}, \tag{21}\] where \({\rm sgn}_{+}(x)\) is the usual sign function except at 0 where it evaluates to 1. Increasing the value of \(m\) increases the size of the wavefunction which is another way to understand the effect of the bound on \(m\). It also increases the number of nodes in the 1D h.o. wavefunctions with faster oscillations in the \(x\)-direction. This results in stronger averaging when convolving with the localized state and the suppression of the overall amplitude of the hybridization. Similar averaging occurs upon increasing \({\bf B}\). The suppression of the overall amplitude is reflected in the decrease of the singular values of the \((m,r^{\prime})\)-matrix \(h^{\tau}_{[1m0],[1r^{\prime}]}(k)\) shown in Fig.(5) after rescaling by \(\gamma\), which enters our analysis below. The derivatives in Eq.(20), appearing in matrix elements \(h^{\tau}_{[1(2)m0],[2(1)r^{\prime}]}\), act on the localized heavy function to change its spatial symmetry from \(s\) to \(p_{x,y}\)-like. Moreover an integration by parts and expressing the derivatives via h.o. raising and lowering operators allows us to relate these cases to the analysis without derivatives (see SM-D 2 b, D 2 c). 
The qualitative trends discussed above are captured by our closed form expression for \(h^{\tau}_{[am0],[br^{\prime}]}(k)\). This non-derivative hybridization for the \(\frac{1}{q}\) sequence can be written as \[h^{\tau}_{[am0],[ar^{\prime}]}(k)=\gamma I^{0}_{m,r^{\prime}}(k),\ \ {\rm for}\ a=1,2. \tag{22}\] Here \[I^{0}_{m,r^{\prime}}(k)=\sqrt{\frac{L_{1x}}{\ell}}e^{i\pi r^{ \prime}_{q}(k_{2}-2k_{1})}e^{i\pi r^{\prime}_{q}(r^{\prime}_{q}-1)\frac{1}{2q}}\] \[e^{-2\pi^{2}\frac{\lambda^{2}}{L_{m}^{2}}\left(k_{2}+\frac{r^{ \prime}_{q}}{\ell^{2}}\right)^{2}}{\cal F}_{m}\left(\lambda,(r^{\prime}_{q}+ qk_{2})L_{1x}\right), \tag{23}\] and \[{\cal F}_{m}(\lambda,x_{0}) = \frac{1}{\pi^{\frac{1}{2}}\sqrt{2^{m}m!}}\sqrt{\frac{\ell^{2}}{ \ell^{2}+\lambda^{2}}}e^{-\frac{\kappa_{2}^{2}}{2(\ell^{2}+\lambda^{2})}} \tag{24}\] \[\times{\cal H}_{m}\left(\frac{-2x_{0}\ell}{\ell^{2}+\lambda^{2}},-1+\frac{2\lambda^{2}}{\ell^{2}+\lambda^{2}}\right).\] The two variable Hermite polynomials[89] are given by \[{\cal H}_{m}(x,y)=m!\sum_{k=0}^{\lfloor\frac{m}{2}\rfloor}\frac{x^{m-2k}y^{k} }{(m-2k)!k!}, \tag{25}\] where \(\lfloor m\rfloor\) denotes the floor function at \(m\). Their relation to the Hermite polynomials used above is \(H_{m}(x)={\cal H}_{m}(2x,-1)\). In the limit \(\lambda\to 0\), the 2D heavy localized state becomes the Dirac \(\delta\)-function, and as expected based on the above discussion, we indeed recover \({\cal F}_{m}(\lambda\to 0,x_{0})=\varphi_{m}(-x_{0})\). We found that keeping the full form of \({\cal F}_{m}\) is needed in order to achieve accurate results even for the low \(B\) range, therefore we do not take this limit when handling \({\cal F}_{m}\)(see also Fig.(12) in SM-E). The exponential suppression factor multiplying \({\cal F}_{m}\) in the Eq.(23) comes from the \(y\)-integration. Although the suppression depends on the shifted position of the h.o. states, its dependence on \(r^{\prime}\) is weaker than in \({\cal F}_{m}\) which comes from the \(x\)-integration, as expected based on the discussion above. The off-diagonal hybridization can be expressed as \[h^{\tau}_{[1m0],[2r^{\prime}]}(k) = v^{\prime}_{*}\left(\tau I^{x}_{m,r^{\prime}}(k)-iI^{y}_{m,r^{ \prime}}(k)\right) \tag{26}\] \[h^{\tau}_{[2m0],[1r^{\prime}]}(k) = \left(h^{\tau}_{[1m0],[2r^{\prime}]}(k)\right)^{*}. \tag{27}\] Using the exact recursion relations between \({\cal H}_{m}\) and its derivatives, \(I^{x}\) and \(I^{y}\) can be expressed in terms of \(I^{0}\) as \[I^{x}_{m,r^{\prime}}(k)=\frac{i}{\sqrt{2}\ell}\left(\sqrt{m}I^{0}_{m-1,r^{ \prime}}(k)-\sqrt{m+1}I^{0}_{m+1,r^{\prime}}(k)\right) \tag{28}\] \[I^{y}_{m,r^{\prime}}(k) = -\frac{1}{\sqrt{2}\ell}\left(\left(1+\frac{\lambda^{2}}{\ell^{2}} \right)\sqrt{m+1}I^{0}_{m+1,r^{\prime}}(k)\right. \tag{29}\] \[+ \left.\left(1-\frac{\lambda^{2}}{\ell^{2}}\right)\sqrt{m}I^{0}_{m-1,r^{\prime}}(k)\right).\] We add interactions at CNP via the mean field coefficients \(J\) and \(U_{1}\). This is accomplished by promoting the operators \(\tilde{\alpha}_{{\bf k}a\tau}\) and \(\tilde{f}_{{\bf k}b\tau}\) in Eq.(4) to \(c_{xxtrm}\) and \(f_{b\tau k\tau^{\prime}}\) respectively, with a sum over all finite \({\bf B}\) quantum numbers. The numerically determined strong coupling Hofstadter spectra using the above results for the matrix elements for \(\tau=-1\) is shown in Fig.(4a). 
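Since Eqs.(24)-(25) fully specify the damping function, they are simple to implement and test. In the sketch below we read the exponent in Eq.(24) as \(x_{0}^{2}/\big(2(\ell^{2}+\lambda^{2})\big)\) (the printed factor appears garbled) and we do not track the overall one-dimensional normalization constant; the two checks are the stated relation \(H_{m}(x)=\mathcal{H}_{m}(2x,-1)\) and the proportionality \(\mathcal{F}_{m}(\lambda\to 0,x_{0})\propto\varphi_{m}(-x_{0})\).

```python
# Sketch of the two-variable Hermite polynomials of Eq.(25) and of F_m of
# Eq.(24) (exponent read as x0^2/(2(ell^2+lambda^2)); 1D normalization not
# tracked).  Checks: H_m(x) = calH_m(2x, -1), and F_m(lambda -> 0, x0) is
# proportional to the harmonic oscillator wavefunction evaluated at -x0.
import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval

def calH(m, x, y):
    """Two-variable Hermite polynomial, Eq.(25)."""
    return sum(factorial(m) * x**(m - 2 * k) * y**k
               / (factorial(m - 2 * k) * factorial(k)) for k in range(m // 2 + 1))

def F(m, lam, x0, ell=1.0):
    pref = np.sqrt(ell**2 / (ell**2 + lam**2)) / np.sqrt(np.pi * 2.0**m * factorial(m))
    arg_x = -2 * x0 * ell / (ell**2 + lam**2)
    arg_y = -1 + 2 * lam**2 / (ell**2 + lam**2)
    return pref * np.exp(-x0**2 / (2 * (ell**2 + lam**2))) * calH(m, arg_x, arg_y)

def phi(m, x, ell=1.0):                 # h.o. wavefunction, up to a 1/sqrt(ell) factor
    return (np.exp(-x**2 / (2 * ell**2)) * hermval(x / ell, [0] * m + [1])
            / (np.pi**0.25 * np.sqrt(2.0**m * factorial(m))))

print(calH(5, 2 * 0.7, -1.0), hermval(0.7, [0] * 5 + [1]))              # equal
print(F(3, 1e-8, 0.9) / phi(3, -0.9), F(3, 1e-8, 1.7) / phi(3, -1.7))   # same constant
```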
To study the effects of including kinetic energy within the THFM, Fig(4b)shows a comparison with the Hofstadter excitation spectrum of the Coulomb interacting BM model with the band kinetic energy projected away. The gauge-invariant formalism introduced by Ref. [83] is used to compute the exact strong coupling excitation spectrum to fluxes reaching \(q=60\) using the sparse matrix properties of the basis (general and efficient formulae may be found in SM-J). The kinetic energy, neglected in the strong coupling calculations, decreases the charge gap but preserves the low-lying anomalous Landau level. Neglecting \(\frac{\lambda^{2}}{\ell^{2}}\) in the prefactors of \(I^{0}\)'s in Eq.(29) is an approximation of same order as neglecting \(\mathbf{A}\) in \(H_{BM}^{\tau}\) in Eq.(19). We confirm it numerically to be an excellent approximation in SM-D 3; note that we do not set \(\frac{\lambda^{2}}{\ell^{2}}\) to zero in \(I^{0}\) for the reasons mentioned above. Thus the finite \(\mathbf{B}\)\(c\)-\(f\) coupling can be accurately re-expressed as \[h_{[am0],[br^{\prime}]}^{\tau}(k)\approx\left(\begin{array}{c}\gamma I_{m,r^ {\prime}}^{0}(k)\sigma_{0}+h_{m,r^{\prime}}^{r,cf}(k)\\ 0_{2\times 2}\end{array}\right)_{ab}, \tag{30}\] where \(h_{m,r^{\prime}}^{+1,cf}(k)=\) \[i\frac{\sqrt{2}v_{s}^{\prime}}{\ell}\left(\begin{array}{cc}0&\sqrt{m}I_{m-1,r^{\prime}}^{0}(k)\\ -\sqrt{m+1}I_{m+1,r^{\prime}}^{0}(k)&0\end{array}\right), \tag{31}\] and \(h_{m,r^{\prime}}^{-1,cf}(k)=-\sigma_{x}h_{m,r^{\prime}}^{+1,cf}(k)\sigma_{x}\), where \(\sigma_{x}\) acts in orbital space of \(c\) and \(f\) fermions. The above form for \(c\)-\(f\) coupling sets up a platform to understand the problem in terms of coupled and decoupled modes of \(f\), and eventually paves the way for an analytical solution. ### Change of basis for the \(f\)-modes We will find it useful to first perform a singular value decomposition \[I_{m,r^{\prime}}^{0}(k)=U_{mm^{\prime}}\Sigma_{m^{\prime}\tilde{r}}V_{\tilde{ r}r^{\prime}}, \tag{32}\] where \(U,V\) are are \(m_{a,\tau}\times m_{a,\tau}\) and \(q\times q\) unitary matrices respectively, and the \(m_{a,\tau}\times q\) rectangular matrix \(\Sigma\) contains the singular values along the main diagonal and zeros elsewhere; summation convention on repeated indices is used throughout this section unless explicitly stated. Through \(U\) and \(V\), we can gain insight into the form of \(f\) and \(c\) modes allowed to hybridize. For example, \[h_{[1m0],[1r^{\prime}]}^{\tau}c_{1\tau k0m}^{\dagger}f_{1\tau kr^{\prime}}= \gamma\Sigma_{m^{\prime},\tilde{r}}c_{1\tau k0m^{\prime}}^{\dagger}\bar{f}_{ 1\tau k\tilde{r}}, \tag{33}\] where \(c_{1\tau k0m^{\prime}}^{\dagger}=c_{1\tau k0m}^{\dagger}U_{mm^{\prime}}\) and \(\bar{f}_{1\tau k\tilde{r}}=V_{\tilde{r}r^{\prime}}f_{1\tau k^{\prime}}\) are the new modes. In the above, the \(\bar{f}_{1\tau k\tilde{r}}\) modes with \(\tilde{r}>m_{1,\tau}\) decouple from \(c\)'s (recall that \(m_{1,+1}=m_{max}+1\) and \(m_{1,-1}=m_{max}\)). The remaining \(f\) modes couple to \(c\) modes with the strength \(\gamma\Sigma_{m^{\prime},\tilde{r}}=\gamma\Sigma_{m^{\prime}}\), where \(\Sigma_{m}\) denotes the \(m^{th}\) singular value. The columns of matrix \(U\) are the eigenvectors of the following matrix \[\Lambda_{mm^{\prime}}^{0}(k)=\sum_{r^{\prime}=0}^{q-1}I_{mr^{\prime}}^{0}(k)I_ {m^{\prime}r^{\prime}}^{0^{\prime}}(k). 
\tag{34}\] Since \(r^{\prime}\) is being summed over, we are allowed to shift its range to \(-\lfloor\frac{q}{2}\rfloor,-\lfloor\frac{q}{2}\rfloor+1,\ldots,q-1-\lfloor \frac{q}{2}\rfloor\) and thus set \(j=0\) in Eq.(21). Hence we can set \(r_{q}^{\prime}=r^{\prime}\) in Eq.(23) for \(I_{m,r^{\prime}}^{0}\). Since \(\mathcal{F}_{m}\) decays exponentially at \(\pm\lfloor\frac{q}{2}\rfloor\), in the low field limit, we can replace the bounds of this sum by \(\pm\infty\). Using Eq.(23) we have \[\Lambda_{mm^{\prime}}^{0}(k)\approx\frac{L_{1x}}{\ell}\sum_{r^{ \prime}=-\infty}^{\infty}e^{-4\pi^{2}\frac{\lambda^{2}}{L_{m}^{2}}\left(k_{2}+\frac{r^{\prime}}{q}\right)^{2}}\times\] \[\mathcal{F}_{m}\left(\lambda,\left(r^{\prime}+k_{2}q\right)L_{1x} \right)\mathcal{F}_{m^{\prime}}\left(\lambda,\left(r^{\prime}+k_{2}q\right)L_{1x}\right). \tag{35}\] Using the Dirac comb identity, \(\sum_{r^{\prime}\in\mathbb{Z}}\delta(\rho-r^{\prime})=\sum_{t\in\mathbb{Z}}e^ {2\pi it\rho}\), we can convert this summation to an integral as \[\Lambda_{mm^{\prime}}^{0}(k) \approx\frac{L_{1x}}{\ell}\sum_{t\in\mathbb{Z}}\int_{-\infty}^{ \infty}d\rho e^{i2\pi t\rho}e^{-4\pi^{2}\frac{\lambda^{2}}{L_{m}^{2}}\left(k_{2}+\frac{\rho}{q}\right)^{2}}\] \[\quad\quad\quad\mathcal{F}_{m}\left(\lambda,\left(\rho+k_{2}q \right)L_{1x}\right)\mathcal{F}_{m^{\prime}}\left(\lambda,\left(\rho+k_{2}q \right)L_{1x}\right)\] \[=\frac{1}{\ell}\sum_{t\in\mathbb{Z}}e^{-i2\pi tk_{2}q}\int_{- \infty}^{\infty}d\rho e^{i2\pi t\rho/L_{1x}}e^{-\frac{\lambda^{2}}{\ell^{2}} \frac{\rho^{2}}{\ell^{2}}}\] \[\quad\quad\quad\mathcal{F}_{m}\left(\lambda,\rho\right)\mathcal{F} _{m^{\prime}}\left(\lambda,\rho\right). \tag{36}\] Because \(2\pi t/L_{1x}\) is a large wavevector compared to the length scale at which the h.o. wavefunctions vary, \(\sim\ell\), i.e. \(\frac{L_{1x}}{2\pi t}\ll\ell\) at low field, the overlaps are exponentially suppressed in \((\frac{2\pi t\ell}{L_{1x}})^{2}\). We can thus set \(t=0\) in the above sum. Moreover, since the exponential factor contains \(\frac{\lambda^{2}}{\ell^{2}}\), the off-diagonal terms can be neglected and thus \(\Lambda^{0}_{mm^{\prime}}\) is diagonal to a good approximation. This implies that the matrix \(U\) is close to the identity and the low \(\mathbf{B}\) singular values can be given as \[\Sigma_{m}=\sqrt{\Lambda^{0}_{mm}}. \tag{37}\] Note that the dependence on the magnetic quantum number \(k\) drops out automatically in the low field limit, where the magnetic sub-bands are not \(k\) dispersive. Figure 4: Comparison of the THFM Hofstadter spectrum (including dispersion of the flat bands) and the strong coupling projected BM Hofstadter spectrum at \(w_{0}/w_{1}=0.7\). (a) THFM spectrum with \(M=3.248meV\) for \(m_{max}=\lceil\frac{q-3}{2}\rceil\). (b) Exact spectrum in the flat band limit using the gauge-invariant basis of magnetic translation group irreps. Details of the method may be found in SM-J.
A compact expression for \(\Sigma_{m}\) can be computed from Eq.(36) using two index Hermite polynomials[89](also see SM-D 2) \[\Sigma_{m}=\left(\frac{1}{\sqrt{\xi(\kappa)}}\frac{1}{2^{m}m!} \mathcal{H}_{mm}\left(0,\frac{\kappa^{6}}{\xi(\kappa)};0,\frac{\kappa^{6}}{ \xi(\kappa)}\ |\ \frac{2}{\xi(\kappa)}\right)\right)^{\frac{1}{2}} \tag{38}\] where \(\kappa^{2}=\frac{\lambda^{2}}{\ell^{2}}=(\frac{\phi}{\phi_{0}})2\pi\lambda^{2 }/A_{u.c}\) and \[\xi(\kappa)=(1+\kappa^{2}+\kappa^{4})(1+\kappa^{2}), \tag{39}\] \[\mathcal{H}_{mn}\left(x,y;w,z|\beta\right)=\] \[\sum_{k=0}^{min(m,n)}\frac{m!n!\beta^{k}}{(n-k)!(m-k)!k!} \mathcal{H}_{m-k}(x,y)\mathcal{H}_{n-k}(w,z).\] In the \(\mathbf{B}\to 0\) limit, upto first order in flux, the singular values can be approximated as \[\Sigma_{m}\approx 1-\left(m+\frac{1}{2}\right)\kappa^{2}. \tag{41}\] Intuitively this shows that \(\Sigma_{m}\) dependence on \(m\) grows linearly with \(m\) even at negligibly small fluxes. The comparison of analytical singular values given by Eq.(38) with the ones obtained numerically from Eq.(23) is presented in Fig.(5), limiting to the values given in Eq.(41) as shown in SM-E. Having the closed form for the singular values, we hereby reformulate the finite \(\mathbf{B}\) Hamiltonian (Eq.(13)) in terms of the modes that are allowed to hybridize. The analysis for \(\tau=+1\) is presented below, for \(\tau=-1\) in SM-F 1. Using the SVD decomposition discussed we can re-write the matrix elements as \[h^{1}_{[1m0],[1^{\prime}]}c^{\dagger}_{11k0m}f_{11kr^{\prime}}= \gamma\Sigma_{m^{\prime},\bar{r}}c^{\dagger}_{11k0m}U_{mm^{\prime}}\] \[\times V_{\bar{r},r^{\prime}}f_{11kr^{\prime}}\] \[=\sum_{m=0}^{m_{1,+1}}\sum_{\bar{r}=0}^{m_{1,+1}}\gamma\Sigma_{m} \delta_{mi}c^{\dagger}_{11k0m}\bar{f}_{11k\bar{r}}, \tag{42}\] where we used the fact that \(U\) is an identity matrix and the rectangular matrix \(\Sigma_{m,r^{\prime}}\) is non-zero only along its main diagonal. \[h^{1}_{[2m0],[1^{\prime}]}c^{\dagger}_{21k0m}f_{11kr^{\prime}}=- i\sqrt{2}\frac{v^{\prime}_{\star}}{\ell}\times\] \[\sum_{m=0}^{m_{2,+1}}\sqrt{m+1}c^{\dagger}_{21k0m}\Sigma_{m+1, \bar{r}}V_{\bar{r},r^{\prime}}f_{11kr^{\prime}}=\] \[-i\sqrt{2}\frac{v^{\prime}_{\star}}{\ell}\sum_{m=0}^{m_{2,+1}} \sum_{\bar{r}=0}^{m_{2,+1}+1}\sqrt{m+1}\Sigma_{m+1}\delta_{m+1,\bar{r}}c^{ \dagger}_{21k0m}\bar{f}_{11k\bar{r}}.\] Because \(m_{2,+1}+1=m_{1,+1}=m_{max}+1\), the upper bound on the \(\bar{r}\) summation is the same in (42) and (43). Figure 5: First 25 analytical singular values obtained via Eq.(38) compared to the numerically obtained singular values of \(l^{0}\) defined in the Eq.(23) at \(\lambda=0.3792L_{m}\) Figure 6: The non-interacting heavy fermion Hofstadter spectra for flat band THFM at \(w_{0}/w_{1}=0.7\). For illustration, we have fixed \(m_{max}=5\) so that the \(\mathbf{B}\to 0\) energies for remote magnetic subbands, i.e. \(\pm\gamma\), are tractable. Similarly \[h^{1}_{[1m0],[2\tau]}c^{\dagger}_{11k0m}f_{21kr^{\prime}}=\] \[i\sqrt{2}\sqrt{m}\frac{v_{\star}^{\prime}}{\ell}\sum_{m=0}^{m_{1+ 1}}\sum_{\bar{r}=0}^{m_{1,+1}-1}\Sigma_{\bar{r}}\delta_{m-1,\bar{r}}c^{\dagger}_ {11k0m}\bar{f}_{21k\bar{r}}, \tag{44}\] \[h^{1}_{[2m0],[2\tau]}c^{\dagger}_{21k0m}f_{21kr^{\prime}}=\] \[\gamma\sum_{m=0}^{m_{2,+1}}\sum_{\bar{r}=0}^{m_{2,+1}}\Sigma_{m} \delta_{m,\bar{r}}c^{\dagger}_{21k0m}\bar{f}_{21kr}. \tag{45}\] Because \(m_{1,+1}-1=m_{2,+1}=m_{max}\), the upper bound on \(\bar{r}\) summation is the same in Eq.(44) and Eq.(45). 
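Equations (38)-(40) above are explicit enough to be evaluated directly. The sketch below implements them verbatim (reusing the \(\mathcal{H}_{m}\) of Eq.(25)) and compares the result with the small-flux expansion of Eq.(41); the value of \(\kappa\) is illustrative.

```python
# Sketch of the closed-form singular values, Eqs.(38)-(40), compared with the
# small-flux expansion Sigma_m ~ 1 - (m + 1/2) kappa^2 of Eq.(41).
import numpy as np
from math import factorial

def calH(m, x, y):                                   # Eq.(25)
    return sum(factorial(m) * x**(m - 2 * k) * y**k
               / (factorial(m - 2 * k) * factorial(k)) for k in range(m // 2 + 1))

def calH2(m, n, x, y, w, z, beta):                   # Eq.(40)
    return sum(factorial(m) * factorial(n) * beta**k
               / (factorial(n - k) * factorial(m - k) * factorial(k))
               * calH(m - k, x, y) * calH(n - k, w, z) for k in range(min(m, n) + 1))

def sigma(m, kappa):                                 # Eqs.(38)-(39)
    xi = (1 + kappa**2 + kappa**4) * (1 + kappa**2)
    y = kappa**6 / xi
    return np.sqrt(calH2(m, m, 0.0, y, 0.0, y, 2.0 / xi)
                   / (np.sqrt(xi) * 2.0**m * factorial(m)))

kappa = 0.1                                          # kappa^2 = lambda^2 / ell^2
for m in range(5):
    print(m, round(sigma(m, kappa), 4), round(1 - (m + 0.5) * kappa**2, 4))
```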
Thus for a given \(k\), out of the available \(2q\)\(f-\)modes, only \(2m_{max}+3\) couple; recall that \(m_{max}\lesssim q/2\). The mean-field interactions for \(f-\) fermion modes in new \(\bar{f}\) basis can be given as \[V^{f,\tau=+1}=\sum_{k\in[0,1)\otimes[0,\frac{1}{q}]}V^{f,\tau=+1}_{coupled}+V ^{f,\tau=+1}_{decoupled}, \tag{46}\] \[V^{f,\tau=+1}_{coupled}=-\frac{U_{1}}{2}\left(\sum_{m=0}^{m_{max}+1}\bar{f}^{ \dagger}_{11km}\bar{f}_{11km}+\sum_{m=0}^{m_{max}}\bar{f}^{\dagger}_{21km}\bar {f}_{21km}\right) \tag{47}\] \[V^{f,\tau=+1}_{decoupled}=-\frac{U_{1}}{2}\sum_{b=1}^{2}\sum_{m^{\prime}=m_{ max}+\bar{b}}^{q-1}\bar{f}^{\dagger}_{b1km^{\prime}}\bar{f}_{b1km^{\prime}}, \tag{48}\] where \(\bar{1}(\bar{2})=2(1)\). Note that there are \(2q-(2m_{max}+3)\) decoupled \(f\) modes for each \(k\). Physically this corresponds to \(2-(2m_{max}+3)/q\) states per unit cell. The coupled modes can then be described as \[H^{\tau=+1}_{coupled}=\sum_{k}\sum_{\alpha,\alpha^{\prime}=1}^{6}\sum_{m=0} ^{m_{\alpha}}\sum_{m^{\prime}=0}^{m_{\alpha^{\prime}}}\Xi_{m\alpha,m^{\prime }\alpha^{\prime}}d^{\dagger}_{m\alpha}(k)d_{m^{\prime}\alpha^{\prime}}(k), \tag{49}\] where \(m_{\alpha=1,\ldots,4}=m_{\alpha,+1}\), \(m_{5}=m_{max}+1\) and \(m_{6}=m_{max}\), and \[d^{\dagger}_{m\alpha}(k)=\left(c^{\dagger}_{11k0m},c^{\dagger}_{21k0m},c^{ \dagger}_{31k0m},c^{\dagger}_{41k0m},f^{\dagger}_{11km},f^{\dagger}_{21km} \right). \tag{50}\] We now define an operator \[\hat{h}^{+1}_{\alpha,\alpha^{\prime}}=\left(\begin{array}{cccc}0&0&-i\sqrt{ 2}\frac{v_{\star}}{\ell}\hat{a}&0&\gamma\Sigma(\hat{a}^{\dagger}\hat{a})&i \sqrt{2}\frac{v_{\star}^{\prime}}{\ell}\hat{a}^{\dagger}\Sigma(\hat{a}^{ \dagger}\hat{a})\\ 0&0&i\sqrt{2}\frac{v_{\star}}{\ell}\hat{a}^{\dagger}&-i\sqrt{2}\frac{v_{ \star}}{\ell}\hat{a}\Sigma(\hat{a}^{\dagger}\hat{a})&\gamma\Sigma(\hat{a}^{ \dagger}\hat{a})\\ i\sqrt{2}\frac{v_{\star}}{\ell}\hat{a}^{\dagger}&0&-\frac{J}{2}&M&0&0\\ 0&-i\sqrt{2}\frac{v_{\star}}{\ell}\hat{a}&M&-\frac{J}{2}&0&0\\ \gamma\Sigma(\hat{a}^{\dagger}\hat{a})&i\sqrt{2}\frac{v_{\star}^{\prime}}{\ell }\Sigma(\hat{a}^{\dagger}\hat{a})\hat{a}^{\dagger}&0&0&-\frac{U_{1}}{2}&0\\ -i\sqrt{2}\frac{v_{\star}^{\prime}}{\ell}\Sigma(\hat{a}^{\dagger}\hat{a})\hat{ a}&\gamma\Sigma(\hat{a}^{\dagger}\hat{a})&0&0&0&-\frac{U_{1}}{2}\\ \end{array}\right)_{\alpha,\alpha^{\prime}}, \tag{51}\] where \(\hat{a}\) is a simple h.o. lowering operator in terms of which matrix \(\Xi_{m\alpha,m^{\prime}\alpha^{\prime}}\) can be expressed as \[\Xi_{m\alpha,m^{\prime}\alpha^{\prime}}=\langle m|\hat{h}^{+1}_{\alpha,\alpha^ {\prime}}|m^{\prime}\rangle. \tag{52}\] Here \(|m\rangle\) is a simple h.o. eigenstate and \(\Sigma(m)=\Sigma_{m}\). Note that a naive minimal substitution into Eq.(1), with \(\lambda=0\) would reproduce Eq.(51), except the singular values would get replaced by 1, which corresponds to the \(\mathbf{B}\to 0\) limit of our analysis. The decoupling of \(2-(2m_{max}+3)/q\) modes per moire unit cell per spin and the fairly strong \(\mathbf{B}\) and \(m\) dependence of singular values thus gets completely overlooked by the naive minimal substitution, which therefore fails to recover the correct state count for narrow bands to be 2 states per moire unit cell per spin per valley. Although we get the correct total Chern number 0, i.e. 
total 2 states per moire unit cell per spin independent of \(\mathbf{B}\) in the narrow band strong coupling window set by \(\left(\frac{J}{2},\frac{U_{1}}{2}\right)\), we can explain the state counting analytically by presenting an exact solution to the operator \(\hat{h}_{\alpha,\alpha^{\prime}}\) in the flat band limit \(M=0\). ### Analytical solution for flat band, U(4) symmetric, THFM in B For \(M=0\), the exact solutions to the eigenstates of the operator in Eq.(51) are presented below. The \(\mathbf{B}\) field independent \(-J/2\) Landau level energy shown in Fig.(1) comes from the anomalous \(c\)-mode \[\theta_{1}=\left[0,0,\left|0\right\rangle,0,0,0\right]^{T}. \tag{53}\] The rest of the problem can be solved using the following ansatze: \[\theta_{3} =\left[c_{1}^{(3)}\left|0\right\rangle,0,c_{3}^{(3)}\left|1\right\rangle,0,c_{5}^{(3)}\left|0\right\rangle,0\right]^{T}, \tag{54}\] \[\theta_{5} =\left[c_{1}^{(5)}\left|1\right\rangle,c_{2}^{(5)}\left|0\right\rangle,c_{3}^{(5)}\left|2\right\rangle,0,c_{5}^{(5)}\left|1\right\rangle,c_{6}^{(5)} \left|0\right\rangle\right]^{T},\] (55) \[\theta_{6_{m}} =\left[c_{1}^{(6_{m})}\left|m\right\rangle,c_{2}^{(6_{m})}\left|m -1\right\rangle,c_{3}^{(6_{m})}\left|m+1\right\rangle,\right.\] \[\left.c_{4}^{(6_{m})}\left|m-2\right\rangle,c_{5}^{(6_{m})}\left|m \right\rangle,c_{6}^{(6_{m})}\left|m-1\right\rangle\right]^{T}, \tag{56}\] where \(m\in\{2,\ldots,m_{max}+1\}\). \(c_{\alpha}^{(\beta)}\) denotes the coefficient of corresponding h.o state at index \(\alpha\) in the 6-component spinor in Eq.(50) and \(\beta\) labels the ansatz index \(\theta_{\beta}\). Using the above, we can set up the eigen-equation and solve for the corresponding coefficients. The ansatze \(\theta_{3}\) and \(\theta_{5}\) yield the \(3\times 3\) and \(5\times 5\) Hermitian matrices, whose eigenvectors are \(c_{\alpha}^{(3)}\) and \(c_{\alpha}^{(5)}\), respectively: \[h_{3}^{+1}=\left(\begin{array}{cccc}0&-i\frac{\sqrt{2}v_{x}}{\ell}&\gamma \Sigma_{0}\\ i\frac{\sqrt{2}v_{x}}{\ell}&-\frac{\gamma}{2}&0\\ \gamma\Sigma_{0}&0&-\frac{U_{1}}{2}\end{array}\right), \tag{57}\] \[h_{5}^{+1}=\left(\begin{array}{cccc}0&0&-i\frac{2v_{x}}{\ell}&\gamma\Sigma _{1}&i\frac{\sqrt{2}v_{x}}{\ell}\Sigma_{0}\\ 0&0&0&-i\frac{\sqrt{2}v_{x}}{\ell}\Sigma_{1}&\gamma\Sigma_{0}\\ i\frac{2v_{x}}{\ell}&0&-\frac{U_{2}}{2}&0&0\\ \gamma\Sigma_{1}&i\frac{\sqrt{2}v_{x}}{\ell}\Sigma_{1}&0&-\frac{U_{1}}{2}&0 \\ -i\frac{\sqrt{2}v_{x}}{\ell}\Sigma_{0}&\gamma\Sigma_{0}&0&0&-\frac{U_{1}}{2} \end{array}\right). \tag{58}\] Similarly, the ansatz \(\theta_{6}^{m}\) yields the following \(6\times 6\) Hermitian matrix for each \(m\), whose eigenvectors are \(c_{\alpha}^{(6,m)}\): \[h_{6}^{+1,m}=\left(\begin{array}{cccc}0&0&-i\sqrt{2m+2}\frac{v_{x}}{\ell}&0& \gamma\Sigma_{m}&i\sqrt{2m}\frac{v_{x}^{\prime}}{\ell}\Sigma_{m-1}\\ 0&0&0&i\sqrt{2m-2}\frac{v_{x}}{\ell}&-i\sqrt{2m}\frac{v_{x}^{\prime}}{\ell} \Sigma_{m}&\gamma\Sigma_{m-1}\\ +i\sqrt{2m+2}\frac{v_{x}}{\ell}&0&-\frac{J}{2}&0&0&0\\ 0&-i\sqrt{2m}-\frac{v_{x}}{\ell}&0&-\frac{J}{2}&0&0\\ \gamma\Sigma_{m}&i\sqrt{2m}\frac{v_{x}^{\prime}}{\ell}\Sigma_{m}&0&0&-\frac{U_ {1}}{2}&0\\ -i\sqrt{2m}\frac{v_{x}^{\prime}}{\ell}\Sigma_{m-1}&\gamma\Sigma_{m-1}&0&0&0&- \frac{U_{1}}{2}\end{array}\right). \tag{59}\] The strong coupling spectrum obtained using the above matrices is shown in Fig.(1), which also includes the spectrum at \(\tau=-1\). One way to obtain the spectrum at \(\tau=-1\) is by replacing \(J,U\rightarrow-J,-U\) in the above decoupled matrices(see SM-F 1). 
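The block \(h_{6}^{+1,m}\) of Eq.(59) is small enough to check directly. The sketch below builds it with illustrative parameter values (the function sig(m) standing in for \(\Sigma_{m}\)) and verifies two statements used in the state counting: at \(J=U_{1}=M=0\) the block is bipartite and carries exactly two zero modes, and in the \(\mathbf{B}\to 0\) limit (\(1/\ell\to 0\), \(\Sigma_{m}\to 1\)) its eigenvalues cluster at \(-J/2\) and \(-U_{1}/4\pm\sqrt{U_{1}^{2}/16+\gamma^{2}}\).

```python
# Sketch of the 6x6 flat-band (M = 0) block h_6^{+1,m} of Eq.(59), in the
# basis (c1, c2, c3, c4, fbar1, fbar2).  All parameter values are illustrative.
import numpy as np

def h6(m, v, vp, gamma, J, U1, ell_inv, sig):
    H = np.zeros((6, 6), complex)
    H[0, 2] = -1j * np.sqrt(2 * m + 2) * v * ell_inv
    H[1, 3] = +1j * np.sqrt(2 * m - 2) * v * ell_inv
    H[0, 4] = gamma * sig(m)
    H[0, 5] = +1j * np.sqrt(2 * m) * vp * ell_inv * sig(m - 1)
    H[1, 4] = -1j * np.sqrt(2 * m) * vp * ell_inv * sig(m)
    H[1, 5] = gamma * sig(m - 1)
    H = H + H.conj().T
    H += np.diag([0, 0, -J / 2, -J / 2, -U1 / 2, -U1 / 2])
    return H

sig = lambda m: 1.0 - (m + 0.5) * 0.05        # small-flux form of Eq.(41)

# (i) two exact zero modes when J = U1 = 0 (bipartite structure)
E = np.linalg.eigvalsh(h6(3, v=100.0, vp=30.0, gamma=-25.0,
                          J=0.0, U1=0.0, ell_inv=0.2, sig=sig))
print(np.sum(np.abs(E) < 1e-9))               # -> 2

# (ii) B -> 0 clustering of the eigenvalues
E0 = np.linalg.eigvalsh(h6(3, 100.0, 30.0, -25.0,
                           J=10.0, U1=50.0, ell_inv=0.0, sig=lambda m: 1.0))
print(np.round(E0, 2))   # -J/2 twice, -U1/4 +- sqrt(U1^2/16 + gamma^2) twice each
```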
Also note that the decoupled \(f-\)modes and the anomalous \(c-\)mode at \(\tau=-1\) then form the field independent \(\frac{U_{1}}{2}\) and \(\frac{J}{2}\) levels respectively. The state counting within the spectrum can now be easily understood from the aforementioned analysis. As shown in Fig.(6), the \(\mathbf{B}\to 0\) energies for \(J=U_{1}=0\) are \(0\) for the flat band, and \(\pm\gamma\) for the remote bands, because \(\Sigma_{m}\to 1\). Note that these are the non-interacting zero-field energies at \(\Gamma\) point in mBZ. For any non-zero \(\mathbf{B}\), the total zero mode count can be understood as follows: \(h_{6}^{\tau,m}\) being bipartite contributes two zero modes (even if we do not set \(\Sigma_{m}=1\)). Including all \(m_{max}\) such matrices gives \(2m_{max}\) zero modes. \(h_{3}^{\tau}\), \(h_{5}^{\tau}\) and the anomalous \(c\) level contribute one zero mode each, giving us a total of \(2m_{max}+3\) zero modes. Including the \(2q-(2m_{max}+3)\) of the decoupled \(f\) modes, at each \(k\) we recover sum total of \(2q\) zero modes in the non-interacting case for each valley, giving the total of \(2\) states per moire unit cell per spin independent of \(\mathbf{B}\), i.e. the total Chern number \(0\). Note that the magnetic subbands within the narrow band window must remain separated by a gap from the remote subbands for a non-zero range of \(\mathbf{B}\) because the remote bands emanate out of \(\pm\gamma\) at \(\mathbf{B}=0\). For \(J\neq 0\) and \(U_{1}\neq 0\), at valley \(\tau\), the \(2q-(2m_{max}+3)\) of the decoupled \(f\)'s have the energy \(-\tau U_{1}/2\). The \(2m_{max}+2\) modes "cluster" around the \(\mathbf{B}\to 0\) energies of the decoupled blocks: \(-\tau\frac{J}{2}\) and \(-\tau\frac{U_{1}}{4}\pm\sqrt{\frac{U_{2}}{16}+\gamma^{2}}\) (marked by \(\pm\mathcal{E}_{\mp\tau}\) in Fig.(2)), where all three are singly degenerate for \(h_{3}^{\tau}\), last two are doubly degenerate for \(h_{5}^{\tau}\) and all three are doubly degenerate for \(h_{6}^{\tau}\). Including the anomalous \(-\tau J/2\) level (Eq.(53)) and the \(2q-(2m_{max}+3)\) decoupled \(f\) modes at \(-\tau\frac{U_{1}}{2}\), there are \(2q\) modes in total at each \(k\) within the strong coupling spectrum at each valley, corresponding to \(2\) states per moire unit cell per spin and consistent with the projected strong coupling results [82; 83]. ### Integer Fillings \(\nu=\pm\mathbf{1},\pm\mathbf{2}\) The above analysis can straightforwardly be applied to other integer fillings. In this section we illustrate it by extending the formalism to nonzero fillings \(\nu=\pm 1,\pm 2\). Since the mean field interactions are spin dependent for integer fillings, we introduce the spin quantum number label, \(s=\uparrow\downarrow\), to the \(c\) and \(f\) fermion annihilation operators as \(c_{arkrms}\) and \(f_{\eta\tau k\nu^{\prime}s}\) in order to differentiate between the spin-up and spin-down sectors. For \(\nu=-1\), we discuss the sector valley \(\mathbf{K}\) spin \(\downarrow\) for the VP state [56], wherein at zero-field, the charge \(\pm 1\) excitations occupy well gapped Chern \(\mp 1\) states marked by red in Fig(8a). The other spin-valley sectors at \(\nu=-1\), namely valley \(\mathbf{K}\) spin \(\uparrow\) and valley \(\mathbf{K}^{\prime}\) spin \(\uparrow\)\(\downarrow\)(degenerate) are discussed in SM-F 2,G. The red band in the energy window \((-30,-50)\) meV has Chern number \(-1\)[56], while the dispersive red band in energy window \((-55,-100)\)meV has Chern number \(+1\)[56]. 
Our formalism straightforwardly shows that in presence of magnetic field \(\mathbf{B}\), the Chern \(\pm 1\) bands gain and lose states respectively [88], with a total state count pinned to 2 per moire unit cell. The spectrum at \(\nu=+1\) is related by particle-hole symmetry. The interactions at valley \(\mathbf{K}\) and spin \(\downarrow\) read as \[V^{\tau=+1,\downarrow}_{\nu=\pm 1}=\nu\sum_{k}\left(\sum_{a =1}^{4}\sum_{m=0}^{m_{a+1}}\sum_{r=0}^{p-1}W_{a}c^{\dagger}_{a1krm1}c_{a1krm \downarrow}\right.\] \[+\sum_{a=3,4}\sum_{m=0}^{m_{a+1}}\sum_{r=0}^{p-1}(-1)^{a+1}\frac {J}{2}c^{\dagger}_{a1krm1}c_{a1krm\downarrow}\] \[+\sum_{b=1,2}\sum_{r^{\prime}=0}^{q-1}\left(\frac{2+(-1)^{b+1}}{ 2}U_{1}+6U_{2}\right)f^{\dagger}_{b1kr^{\prime}\downarrow}f_{b1kr^{\prime} \downarrow}\right). \tag{60}\] Here \(W_{a\in\{1\ldots 4\}}\) and \(U_{2}\) are mean field coefficients with \(W_{1}=W_{2}\) and \(W_{3}=W_{4}\)[56]. In the \(\bar{f}\) basis we can re-write the interaction for \(f\)-fermion modes as \[V^{f,\tau=+1,\downarrow}_{\nu=\pm 1}=\sum_{k\in[0,1)\otimes[0, \frac{1}{2})}V^{f,\tau=+1,\downarrow,\nu=\pm 1}_{coupled}\] \[+V^{f,\tau=+1,\downarrow,\nu=\pm 1}_{decoupled} \tag{61}\] where \[V^{f,\tau=+1,\downarrow,\nu=\pm 1}_{coupled}=\] \[\nu\left(\frac{3}{2}U_{1}+6U_{2}\right)\left(\sum_{m=0}^{m_{max} +1}\bar{f}^{\dagger}_{11km\downarrow}\bar{f}_{11km\downarrow}\right) \tag{62}\] \[+\nu\left(\frac{1}{2}U_{1}+6U_{2}\right)\left(\sum_{m=0}^{m_{max} }\bar{f}^{\dagger}_{21km\downarrow}\bar{f}_{21km\downarrow}\right),\] (63) \[V^{f,\tau=+1,s,\nu=\pm 2}_{decoupled}=\nu\sum_{b=1}^{2}\sum_{m^{ \prime}=m_{max}+\bar{b}}^{q-1}\] \[\left(\frac{2+(-1)^{b+1}}{2}U_{1}+6U_{2}\right)\bar{f}^{\dagger} _{b1km^{\prime}\downarrow}\bar{f}_{b1km^{\prime}\downarrow}, \tag{64}\] where \(\bar{1}(\bar{2})=2,1\). Note that out of the available \(2q\)\(f\) modes, \(q-(m_{max}+2)\) are decoupled with energy \(\nu\left(\frac{3}{2}U_{1}+6U_{2}\right)\) and \(q-(m_{max}+1)\) are decoupled with energy \(\nu\left(\frac{1}{2}U_{1}+6U_{2}\right)\), i.e. a total of \(2-(2m_{max}+3)/q\) decoupled \(f\) modes per moire unit cell. 
The coupled modes can then be described by \[H^{\tau=+1,\downarrow,\nu=\pm 1}_{coupled}=\sum_{k}\sum_{\alpha,\alpha^{ \prime}=1}^{6}\sum_{m=0}^{m_{\alpha}}\sum_{m^{\prime}=0}^{m_{\alpha^{\prime} }}\Xi^{1,\nu=\pm 1}_{m\alpha,m^{\prime}\alpha^{\prime}}d^{\dagger}_{m\alpha\downarrow}( k)d_{m^{\prime}\alpha^{\prime}\downarrow}(k), \tag{65}\] where \(m_{\alpha=1,\ldots,4}=m_{\alpha,+1}\), \(m_{5}=m_{max}+1\) and \(m_{6}=m_{max}\), and \[d^{\dagger}_{m\alpha\downarrow}(k)=\left(c^{\dagger}_{11k0m\downarrow},c^{ \dagger}_{21k0m\downarrow},c^{\dagger}_{31k0m\downarrow},c^{\dagger}_{41k0m \downarrow},\bar{f}^{\dagger}_{11km\downarrow},\bar{f}^{\dagger}_{21km \downarrow}\right)_{\alpha}, \tag{66}\] with \[\Xi^{1,\nu=\pm 1}_{m\alpha,m^{\prime}\alpha^{\prime}}=\langle m|\hat{h}^{+1, \downarrow,\nu=\pm 1}_{\alpha,\alpha^{\prime}}|m^{\prime}\rangle, \tag{67}\] where the operators \(\hat{h}^{+1,\downarrow,\nu=\pm 1}_{\alpha,\alpha^{\prime}}\) are given as \[\hat{h}^{+1,\downarrow,\nu=\pm 1}_{\alpha,\alpha^{\prime}}=\left( \begin{array}{cccc}\nu W_{1}&0&-i\sqrt{2}\frac{v_{s}}{\ell}\hat{a}&0&\gamma \Sigma(\hat{a}^{\dagger}\hat{a})&i\sqrt{2}\frac{v^{\prime}_{s}}{\ell}\hat{a}^ {\dagger}\Sigma(\hat{a}^{\dagger}\hat{a})\\ 0&\nu W_{1}&0&i\sqrt{2}\frac{v_{s}}{\ell}\hat{a}^{\dagger}&-i\sqrt{2}\frac{v _{s}^{\prime}}{\ell}\hat{a}\Sigma(\hat{a}^{\dagger}\hat{a})&\gamma\Sigma( \hat{a}^{\dagger}\hat{a})\\ i\sqrt{2}\frac{v_{s}}{\ell}\hat{a}^{\dagger}&0&\nu(W_{3}+\frac{J}{2})&M&0&0\\ 0&-i\sqrt{2}\frac{v_{s}}{\ell}\hat{a}&M&\nu(W_{3}-\frac{J}{2})&0&0\\ \gamma\Sigma(\hat{a}^{\dagger}\hat{a})&i\sqrt{2}\frac{v^{\prime}_{s}}{\ell} \Sigma(\hat{a}^{\dagger}\hat{a})\hat{a}^{\dagger}&0&0&\nu(\frac{3}{2}U_{1}+6U_{ 2})&0\\ -i\sqrt{2}\frac{v^{\prime}_{s}}{\ell}\Sigma(\hat{a}^{\dagger}\hat{a})\hat{a}& \gamma\Sigma(\hat{a}^{\dagger}\hat{a})&0&0&0&\nu(\frac{1}{2}U_{1}+6U_{2})\\ \end{array}\right)_{\alpha,\alpha^{\prime}} \tag{68}\] Figure 7: Illustrative plot at \(m_{max}=5\), \(w_{0}/w_{1}=0.7\), valley \(\mathbf{K}\) and spin \(\downarrow\) in flat band limit, \(M=0\), for \(\nu=-1\), showing (a) \(m_{max}=5\) magnetic subbands emanating out of \(-W_{3}+\frac{J}{2}\) for Chern \(-1\) band and (b) \(m_{max}+3=8\) magnetic subbands emanating out of energy \(-W_{3}-\frac{J}{2}\) for the Chern \(+1\) band, which includes the field independent level \(-W_{3}-\frac{J}{2}\) of decoupled \(c\) mode. Including the \(q-(m_{max}+1)=q-6\) decoupled \(f\) modes with energy \(-\frac{1}{2}U_{1}-6U_{2}\) for the Chern \(-1\) band and \(q-(m_{max}+2)=q-7\) decoupled \(f\) modes with energy \(-\frac{J}{2}U_{1}-6U_{2}\) for the Chern \(+1\) band, we have in total \((q-6)+5=q-1\) and \((q-7)+8=q+1\) magnetic subbands for Chern \(\mp 1\) bands respectively as we should. The eigenstates of the above operator are exactly solvable for the flat band limit, i.e. \(M=0\) and the spectra is shown in Fig.(8b). The decoupled \(c\) fermion given in Eq.(53) forms the field independent \(\nu(W_{3}+\frac{J}{2})\) level. The remaining eigenstates can be obtained using the ansatze given in Eqs.(54)-(56). 
Setting up the eigenvalue equations for these ansatze yields the following \(3\times 3\) and \(5\times 5\) matrices, together with \(m_{max}\) \(6\times 6\) matrices: \[h_{3}^{+1,\nu=\pm 1}=\left(\begin{array}{ccc}\nu W_{1}&-i\frac{\sqrt{2}v_{s}}{\ell}&\gamma\Sigma_{0}\\ i\frac{\sqrt{2}v_{s}}{\ell}&\nu(W_{3}+\frac{J}{2})&0\\ \gamma\Sigma_{0}&0&\nu(\frac{3}{2}U_{1}+6U_{2})\end{array}\right), \tag{69}\] \[h_{5}^{+1,\nu=\pm 1}=\left(\begin{array}{ccccc}\nu W_{1}&0&-i\frac{2v_{s}}{\ell}&\gamma\Sigma_{1}&i\frac{\sqrt{2}v_{s}^{\prime}}{\ell}\Sigma_{0}\\ 0&\nu W_{1}&0&-i\frac{\sqrt{2}v_{s}^{\prime}}{\ell}\Sigma_{1}&\gamma\Sigma_{0}\\ i\frac{2v_{s}}{\ell}&0&\nu(W_{3}+\frac{J}{2})&0&0\\ \gamma\Sigma_{1}&i\frac{\sqrt{2}v_{s}^{\prime}}{\ell}\Sigma_{1}&0&\nu(\frac{3U_{1}}{2}+6U_{2})&0\\ -i\frac{\sqrt{2}v_{s}^{\prime}}{\ell}\Sigma_{0}&\gamma\Sigma_{0}&0&0&\nu(\frac{U_{1}}{2}+6U_{2})\end{array}\right), \tag{70}\] \[h_{6}^{+1,m,\nu=\pm 1}=\left(\begin{array}{cccccc}\nu W_{1}&0&-i\sqrt{2m+2}\frac{v_{s}}{\ell}&0&\gamma\Sigma_{m}&i\sqrt{2m}\frac{v_{s}^{\prime}}{\ell}\Sigma_{m-1}\\ 0&\nu W_{1}&0&i\sqrt{2m-2}\frac{v_{s}}{\ell}&-i\sqrt{2m}\frac{v_{s}^{\prime}}{\ell}\Sigma_{m}&\gamma\Sigma_{m-1}\\ +i\sqrt{2m+2}\frac{v_{s}}{\ell}&0&\nu(W_{3}+\frac{J}{2})&0&0&0\\ 0&-i\sqrt{2m-2}\frac{v_{s}}{\ell}&0&\nu(W_{3}-\frac{J}{2})&0&0\\ \gamma\Sigma_{m}&i\sqrt{2m}\frac{v_{s}^{\prime}}{\ell}\Sigma_{m}&0&0&\nu(\frac{3U_{1}}{2}+6U_{2})&0\\ -i\sqrt{2m}\frac{v_{s}^{\prime}}{\ell}\Sigma_{m-1}&\gamma\Sigma_{m-1}&0&0&0&\nu(\frac{U_{1}}{2}+6U_{2})\end{array}\right), \tag{71}\] where \(m\in\{2\ldots m_{max}+1\}\). The magnetic subbands contributed by the coupled modes for the Chern \(+1\) band emanate out of the \({\bf B}\to 0\) energy \(\nu(W_{3}+\frac{J}{2})\) of the above decoupled matrices, which is \(m_{max}\) fold degenerate for the matrix in Eq.(71) and singly degenerate for the matrices in Eq.(69) and Eq.(70). Including the field independent level \(\nu(W_{3}+\frac{J}{2})\) of the decoupled \(c\) mode, we have in total \(m_{max}+3\) magnetic subbands emanating out of \(\nu(W_{3}+\frac{J}{2})\) for the Chern \(+1\) band. Now recall that we have \(q-(m_{max}+2)\) decoupled \(f\) modes with energy \(\nu(\frac{3U_{1}}{2}+6U_{2})\) for the Chern \(+1\) band, which gives in total \(q+1\) magnetic subbands for the Chern \(+1\) band. Similarly the coupled modes contribute \(m_{max}\) magnetic subbands for the Chern \(-1\) band, which emanate out of the \(m_{max}\) fold degenerate \({\bf B}\to 0\) energy \(\nu(W_{3}-\frac{J}{2})\) of the matrix in Eq.(71). Including the \(q-(m_{max}+1)\) decoupled \(f\) modes with energy \(\nu(\frac{U_{1}}{2}+6U_{2})\) for the Chern \(-1\) band, we have in total \(q-1\) magnetic subbands for the Chern \(-1\) band. An illustration for \(m_{max}=5\) is shown in Fig.(7). Our method thus captures the fact that Landau quantisation of the Chern \(\pm 1\) bands offers in total \(q\pm 1\) magnetic subbands, or \(1\pm\frac{1}{q}=1\pm\frac{\phi}{\phi_{0}}\) states per moire unit cell, as expected. The full spectrum in the flat band limit is shown in Fig.(8). Note that the \({\bf B}\to 0\) energies of the decoupled matrices correspond to the zero field THFM energies at \(\Gamma\) in the mBZ.
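Since the only \(\mathbf{B}\) dependence of the blocks in Eqs.(69)-(71) enters through the cyclotron scale \(\sim v_{s}/\ell\propto\sqrt{B}\), the emergence of the magnetic subbands can be reproduced by a direct numerical diagonalization. The sketch below does this for the \(3\times 3\) block of Eq.(69) at \(\nu=-1\); it is only an illustration, and all numerical values (including the form factor \(\Sigma_{0}\)) are placeholders rather than the mean field parameters used in the text.

```python
# Illustrative sketch (not the paper's code): diagonalize the 3x3 block of
# Eq. (69) for nu = -1 in the flat band limit, sweeping the cyclotron energy
# x = sqrt(2) v_s / ell, which grows like sqrt(B).
import numpy as np

nu = -1.0
W1, W3, J = 45.0, 15.0, 12.0        # meV, placeholder mean field parameters
U1, U2, gamma = 60.0, 3.0, -25.0    # meV, placeholder interaction / hybridization
Sigma0 = 0.8                        # placeholder value of the form factor Sigma_0

def h3(x):
    """Hermitian matrix of Eq. (69); x = sqrt(2) v_s / ell in meV."""
    return np.array([
        [nu * W1,        -1j * x,            gamma * Sigma0],
        [1j * x,          nu * (W3 + J / 2), 0.0],
        [gamma * Sigma0,  0.0,               nu * (1.5 * U1 + 6.0 * U2)],
    ])

for x in np.linspace(0.0, 30.0, 7):
    levels = np.linalg.eigvalsh(h3(x))   # real eigenvalues in ascending order
    # as x -> 0 one level sits exactly at nu*(W3 + J/2) = -(W3 + J/2),
    # i.e. the B -> 0 energy that the corresponding magnetic subband emanates from
    print(f"x = {x:5.1f} meV -> levels {np.round(levels, 2)}")
```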
The mean field interaction at filling \(\nu=\pm 2\) for the VP state at \(\tau=+1\) and spin sector \(s\) reads [56] \[V_{\nu=\pm 2}^{\tau=+1,s}=\nu\sum_{k}\bigg{(}\sum_{a=1}^{4}\sum_{m=0}^{m_{a,+1}}\sum_{r=0}^{p-1}W_{a}c_{a1krms}^{\dagger}c_{a1krms}\] \[+\sigma_{s}\sum_{a=3,4}\sum_{m=0}^{m_{a,+1}}\sum_{r=0}^{p-1}\frac{J}{4}c_{a1krms}^{\dagger}c_{a1krms}\] \[+\sum_{b=1,2}\sum_{r^{\prime}=0}^{q-1}\bigg{(}\frac{4+\sigma_{s}}{4}U_{1}+6U_{2}\bigg{)}f_{b1kr^{\prime}s}^{\dagger}f_{b1kr^{\prime}s}\bigg{)}, \tag{72}\] where \(\sigma_{s}=\pm 1\) for \(s=\uparrow\downarrow\) respectively. In the \(\bar{f}\) basis we can re-write the interaction for the \(f\)-fermion modes as \[V_{\nu=\pm 2}^{f,\tau=+1,s}=\sum_{k\in[0,1)\otimes[0,\frac{1}{q})}V_{coupled}^{f,\tau=+1,s,\nu=\pm 2}\] \[+V_{decoupled}^{f,\tau=+1,s,\nu=\pm 2} \tag{73}\] where \[V_{coupled}^{f,\tau=+1,s,\nu=\pm 2}=\nu\left(\frac{4+\sigma_{s}}{4}U_{1}+6U_{2}\right)\] \[\times\left(\sum_{m=0}^{m_{max}+1}\bar{f}_{11kms}^{\dagger}\bar{f}_{11kms}+\sum_{m=0}^{m_{max}}\bar{f}_{21kms}^{\dagger}\bar{f}_{21kms}\right), \tag{74}\] \[V_{decoupled}^{f,\tau=+1,s,\nu=\pm 2}=\nu\left(\frac{4+\sigma_{s}}{4}U_{1}+6U_{2}\right)\] \[\times\sum_{b=1}^{2}\sum_{m^{\prime}=m_{max}+\bar{b}}^{q-1}\bar{f}_{b1km^{\prime}s}^{\dagger}\bar{f}_{b1km^{\prime}s}, \tag{75}\] where \(\bar{1}=2\) and \(\bar{2}=1\). Note that we still have \(2-(2m_{max}+3)/q\) decoupled \(f\) states per moire unit cell per spin. The coupled modes can then be described by \[H^{\tau=+1,s,\nu=\pm 2}_{coupled}=\sum_{k}\sum_{\alpha,\alpha^{\prime}=1}^{6}\sum_{m=0}^{m_{\alpha}}\sum_{m^{\prime}=0}^{m_{\alpha^{\prime}}}\Xi^{s,\nu=\pm 2}_{m\alpha,m^{\prime}\alpha^{\prime}}d^{\dagger}_{m\alpha s}(k)d_{m^{\prime}\alpha^{\prime}s}(k), \tag{76}\] where \(m_{\alpha=1,\ldots,4}=m_{\alpha,+1}\), \(m_{5}=m_{max}+1\) and \(m_{6}=m_{max}\), and \[d^{\dagger}_{m\alpha s}(k)=\left(c^{\dagger}_{11k0ms},c^{\dagger}_{21k0ms},c^{\dagger}_{31k0ms},c^{\dagger}_{41k0ms},\bar{f}^{\dagger}_{11kms},\bar{f}^{\dagger}_{21kms}\right)_{\alpha}, \tag{77}\] with \(\Xi^{s,\nu=\pm 2}_{m\alpha,m^{\prime}\alpha^{\prime}}=\langle m|\hat{h}^{+1,s,\nu=\pm 2}_{\alpha,\alpha^{\prime}}|m^{\prime}\rangle\), where the operators \(\hat{h}^{+1,s,\nu=\pm 2}_{\alpha,\alpha^{\prime}}\) are given as \[\hat{h}^{+1,s,\nu=\pm 2}_{\alpha,\alpha^{\prime}}=\left(\begin{array}{cccccc}\nu W_{1}&0&-i\sqrt{2}\frac{v_{s}}{\ell}\hat{a}&0&\gamma\Sigma(\hat{a}^{\dagger}\hat{a})&i\sqrt{2}\frac{v^{\prime}_{s}}{\ell}\hat{a}^{\dagger}\Sigma(\hat{a}^{\dagger}\hat{a})\\ 0&\nu W_{1}&0&i\sqrt{2}\frac{v_{s}}{\ell}\hat{a}^{\dagger}&-i\sqrt{2}\frac{v^{\prime}_{s}}{\ell}\hat{a}\Sigma(\hat{a}^{\dagger}\hat{a})&\gamma\Sigma(\hat{a}^{\dagger}\hat{a})\\ i\sqrt{2}\frac{v_{s}}{\ell}\hat{a}^{\dagger}&0&\nu(W_{3}+\sigma_{s}\frac{J}{4})&M&0&0\\ 0&-i\sqrt{2}\frac{v_{s}}{\ell}\hat{a}&M&\nu(W_{3}+\sigma_{s}\frac{J}{4})&0&0\\ \gamma\Sigma(\hat{a}^{\dagger}\hat{a})&i\sqrt{2}\frac{v^{\prime}_{s}}{\ell}\Sigma(\hat{a}^{\dagger}\hat{a})\hat{a}^{\dagger}&0&0&\nu(\frac{4+\sigma_{s}}{4}U_{1}+6U_{2})&0\\ -i\sqrt{2}\frac{v^{\prime}_{s}}{\ell}\Sigma(\hat{a}^{\dagger}\hat{a})\hat{a}&\gamma\Sigma(\hat{a}^{\dagger}\hat{a})&0&0&0&\nu(\frac{4+\sigma_{s}}{4}U_{1}+6U_{2})\end{array}\right)_{\alpha,\alpha^{\prime}} \tag{78}\] For \(M=0\), the exact eigenstates of the above operators in Eq.(78) can be solved using the same ansatze given in Eqs.(54)-(56) (see SM-H). The spectrum is shown in Fig.(9). The decoupled \(c\) state given by Eq.(53) forms the field independent \(\pm(2W_{3}+\frac{J}{2})\) and \(\pm(2W_{3}-\frac{J}{2})\) levels at fillings \(\nu=\pm 2\) and \(\tau=+1\) for the spin \(\uparrow\downarrow\) sectors respectively.
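As a quick consistency check of the last statement: the decoupled \(c\) mode only feels the diagonal entry \(\nu\left(W_{3}+\sigma_{s}\frac{J}{4}\right)\) of Eq.(78), so at \(\nu=\pm 2\) it sits at \[\pm 2\left(W_{3}+\frac{J}{4}\right)=\pm\left(2W_{3}+\frac{J}{2}\right)\ (s=\uparrow),\qquad\pm 2\left(W_{3}-\frac{J}{4}\right)=\pm\left(2W_{3}-\frac{J}{2}\right)\ (s=\downarrow),\] which are precisely the quoted field independent levels.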
However the level \(\pm(2W_{3}+\frac{J}{2})\) is singly degenerate as it is not part of the spectrum at \(\tau=-1\) (see SM-F 3), and the total state count is pinned to 2 per moire unit cell per valley per spin for the narrow bands (see SM-H). ## IV Discussion We have put forward a generalization of THFM in finite \({\bf B}\). Although the formalism applies to any rational value of \(\frac{\Phi}{\Phi_{0}}\), the physical nature of the hybridization between the heavy \(f\) and topological \(c\) fermions is particularly revealing for the \(\frac{1}{q}\) sequence. In the Landau gauge, the momentum imparted by the heavy hybrid Wannier states onto the \(c\)'s LLs is converted into a shift of the position of the LL wavefunctions: each unit of \(\frac{1}{q}\) of the primitive reciprocal lattice momentum \({\bf g}_{2}\) causes a shift by a moire unit cell. Because the momentum is defined only modulo \({\bf g}_{2}\), the LL wavefunction shift is also defined only modulo \(q\) unit cells, causing the "revivals" illustrated in Fig.3. Together with a change of the oscillation rate of the wavefunction in the perpendicular direction under the shift, this process controls the strength of the \(c\)-\(f\) hybridization along the \(\frac{1}{q}\) sequence. Its consequences for the spectrum, including interactions at CNP, are illustrated in Fig.(2) as the \(c\)-\(f\) hybridization is turned on from zero to its full strength. The finite \({\bf B}\) analytical solution for the flat \(U(4)\) symmetric THFM provides an intuitive picture of the mechanism for Landau quantization of the strong coupling spectra of MATBG at integer fillings in terms of the decoupled \(f\) modes and coupled \(c\)-\(f\) modes, all the way down to zero magnetic field. It also provides a deeper understanding of the nature of the \(\pm\frac{J}{2}\) level at CNP, observed in numerics before [82], as the anomalous zero-LL of a massless Dirac particle, a key ingredient of the topological heavy fermion picture of MATBG. For \(q\) dependent \(m_{max}\), we see a peeling away of levels close to \(\frac{U_{1}}{2}\) around \(q\approx 80\), because the theory starts "losing" LLs with increasing field, reminiscent of the results obtained in [82]. Although there are \(2-\frac{2m_{max}+3}{q}\) decoupled \(f-\)modes per unit cell per spin at CNP, the total number of states in the narrow band strong coupling window still remains pinned to 2 per unit cell per spin, as expected for a total Chern number 0. Even though the full \(M\neq 0\) problem requires numerical analysis, we are able to probe fluxes at least as low as 1/145, which was not possible within the framework of the strong coupling expansion. We moreover argue that the overall physical picture should stay unchanged, as \(M\) is in any case the smallest energy scale in the problem. Throughout the text we neglected the spin Zeeman effect, as it leads to a much smaller energy splitting than the orbital effect: the former is only a few kelvin at the highest fields considered here, while the latter is several meV, at least an order of magnitude larger. The effect of the renormalization of the mean field parameters in a magnetic field is yet to be incorporated in our framework. A full analysis for other integer fillings, candidate ground states [90, 91], and Hofstadter-scale fluxes where reentrant many-body and topological effects are at play [92, 93, 94, 95, 96, 97, 98, 99], is also left for future work. ###### Acknowledgements.
The authors thank Xiaoyu Wang and Zhi-da Song for valuable conversations, and Dumitru Calugaru for computational advice. J. H.-A. is supported by a Hertz Fellowship and by ONR Grant No. N00014-20-1-2303. B.A.B is supported by the DOE Grant No. DE-SC0016239 and by the EPiQS Initiative, Grant GBMF11070. A.C. was supported by Grant No. GBMF8685 towards the Princeton theory program and by the Gordon and Betty Moore Foundation through the EPiQS Initiative, Grant GBMF11070. Further sabbatical support for A.C., J.H.A and B.A.B was provided by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 101020833), the Schmidt Fund for Innovative Research, and Simons Investigator Grant No. 404513. O.V. was supported by NSF Grant No. DMR-1916958 and is partially funded by the Gordon and Betty Moore Foundation's EPiQS Initiative Grant GBMF11070, the National High Magnetic Field Laboratory through NSF Grant No. DMR-1157490, and the State of Florida.
2302.03083
Continuity of the stabilizer map and irreducible extensions
Let $G$ be a locally compact group. For every $G$-flow $X$, one can consider the stabilizer map $x \mapsto G_x$, from $X$ to the space $\mathrm{Sub}(G)$ of closed subgroups of $G$. This map is not continuous in general. We prove that if one passes from $X$ to the universal irreducible extension of $X$, the stabilizer map becomes continuous. This result provides, in particular, a common generalization of a theorem of Frol\'ik (that the set of fixed points of a homeomorphism of an extremally disconnected compact space is open) and a theorem of Veech (that the action of a locally compact group on its greatest ambit is free). It also allows one to naturally associate to every $G$-flow $X$ a stabilizer $G$-flow $\mathrm{S}_G(X)$ in the space $\mathrm{Sub}(G)$, which generalizes the notion of stabilizer uniformly recurrent subgroup associated to a minimal $G$-flow introduced by Glasner and Weiss.
Adrien Le Boudec, Todor Tsankov
2023-02-06T19:38:51Z
http://arxiv.org/abs/2302.03083v2
# Continuity of the stabilizer map on maximally highly proximal flows ###### Abstract. Let \(G\) be a locally compact group and let \(G\curvearrowright X\) be a \(G\)-flow. We prove that if the flow \(X\) is maximally highly proximal, then the stabilizer map \(x\mapsto G_{x}\), from \(X\) to the space of closed subgroups of \(G\), is continuous. This provides in particular a common generalization of a theorem of Frolik (that the set of fixed points of a homeomorphism of an extremally disconnected compact space is open) and a theorem of Veech (that the action of a locally compact group on its greatest ambit is free). Key words and phrases: Locally compact groups, stabilizer map, URS, MHP flows 2020 Mathematics Subject Classification: Primary: 37B05. Secondary: 22D12, 06E15, 54H15 ## 1. Introduction Let \(G\) be a topological group. Recall that a _\(G\)-flow_ is a continuous action \(G\curvearrowright X\) on a compact space \(X\) (all our compact spaces are Hausdorff). A \(G\)-flow is _minimal_ if every orbit is dense. A continuous, \(G\)-equivariant map \(\pi\colon Y\to X\) between \(G\)-flows is called a _\(G\)-map_. If \(\pi\) is surjective, we also say that \(Y\) is an _extension_ of \(X\), or that \(X\) is a _factor_ of \(Y\). A map \(\pi\colon Y\to X\) between compact spaces is called _irreducible_ if every non-empty open \(U\subseteq Y\) contains the fiber \(\pi^{-1}(\{x\})\) for some \(x\in X\), or, equivalently, if the image of any proper closed subset of \(Y\) is a proper subset of \(X\). Irreducible maps were studied by Gleason [G3], who proved that to every compact space \(X\), one can associate an extremally disconnected compact space \(\hat{X}\), the Stone space of the Boolean algebra \(\operatorname{RO}(X)\) of regular open subsets of \(X\), with an irreducible map \(\hat{X}\to X\) which is universal with respect to irreducible maps \(Y\to X\). Recall that a space is _extremally disconnected_ if the closure of every open subset is clopen. An extension \(\pi\colon Y\to X\) between \(G\)-flows is called _highly proximal_ if \(\pi\) is irreducible. This notion was studied by Auslander and Glasner [AG]. For minimal flows, this is equivalent to asking that the fibers of \(\pi\) can be compressed to a point by a net of elements of \(G\) [AG] (which justifies the name). Highly proximal extensions are thought of as being rather small extensions. They preserve many dynamical properties such as minimality, proximality, strong proximality, and disjointness. When the spaces \(Y,X\) are metrizable, an extension \(\pi\colon Y\to X\) is highly proximal iff it is _almost one-to-one_ (the set \(\{y\in Y:\pi^{-1}(\{\pi(y)\})=\{y\}\}\) is dense in \(Y\)). Almost one-to-one extensions are an important tool in topological dynamics (used, for example, to construct symbolic representations of continuous systems), and the notion of highly proximal extension is the appropriate generalization that allows the existence of universal objects and the development of a general theory. For every \(G\)-flow \(X\), there exists a \(G\)-flow \(\hat{X}_{G}\) and a highly proximal extension \(\pi_{X}\colon\hat{X}_{G}\to X\) with the following universal property: for every highly proximal extension \(\pi\colon Y\to X\), there exists a \(G\)-map \(p\colon\hat{X}_{G}\to Y\) such that \(\pi\circ p=\pi_{X}\). The flow \(\hat{X}_{G}\) is called the _universal highly proximal extension_ of \(X\).
For minimal flows, the existence and uniqueness of \(\hat{X}_{G}\) were established in [AG] and the general case is due to Zucker [Zz]. A \(G\)-flow \(X\) is called _maximally highly proximal (MHP)_ if \(\hat{X}_{G}=X\). Equivalently, \(X\) is MHP if \(X\) admits no non-trivial highly proximal extension. The correspondence \(X\mapsto\hat{X}_{G}\) is idempotent and its image is the class of MHP \(G\)-flows. Thus the class of \(G\)-flows is partitioned into equivalence classes, where \(X\) and \(Y\) are equivalent if they admit a common highly proximal extension; or equivalently if \(\hat{X}_{G}\) and \(\hat{Y}_{G}\) are isomorphic. Each class contains a unique representative that is MHP. For discrete groups, the construction of \(\hat{X}_{G}\) reduces to the one by Gleason, and we have that \(\hat{X}_{G}=\hat{X}\)[G3, Th. 3.2]. In that setting, a \(G\)-flow \(X\) is MHP iff it is extremally disconnected. This depends only on the topology of \(X\), and not on \(G\). This is no longer true for non-discrete groups. Examples of MHP flows that arise in the non-discrete setting are \(X=G/H\), where \(H\) is a closed, cocompact subgroup of \(G\), and \(G\) acts on \(X\) by left-translations. MHP flows of Polish groups were extensively studied by Zucker in [Zz], where many more interesting examples can be found. More general topological groups were considered by Basso and Zucker in [BZ]. The highly proximal equivalence relation and MHP flows are useful to express certain rigidity properties among \(G\)-flows. An instance of this is a theorem of Rubin that asserts that any two \(G\)-flows that are faithful and _micro-supported_ are highly proximally equivalent [R]. Combined with [CLB, Prop. 2.3], this implies that every group \(G\) that admits a faithful micro-supported \(G\)-flow admits exactly one faithful micro-supported \(G\)-flow that is MHP. For certain non-discrete totally disconnected locally compact groups, this flow is the Stone space of the centralizer lattice of \(G\), a Boolean algebra constructed from the local structure of the group [CRW]. See the references above for the definition of a "micro-supported" action and more details. ### The main result In certain contexts, MHP flows are better behaved than general flows. The main result of this paper is an illustration of such a situation. For the remainder of the introduction, we suppose that \(G\) is a locally compact group, and we denote by \(\operatorname{Sub}(G)\) the space of closed subgroups of \(G\). Endowed with the Chabauty topology, the space \(\operatorname{Sub}(G)\) is compact, and the action of \(G\) on \(\operatorname{Sub}(G)\) by conjugation is continuous. To every \(G\)-flow \(X\), we can associate the stabilizer map \(X\to\operatorname{Sub}(G)\), \(x\mapsto G_{x}\), which is \(G\)-equivariant. The stabilizer map is always upper semi-continuous (see e.g. [GW]), but fails to be continuous in general. This lack of continuity is not just a technical issue, but is inherent to the study of \(G\)-flows. For instance it witnesses the difference between free and topologically free actions (see below). We show that for MHP flows, this defect disappears. **Theorem 1.1**.: _Let \(G\) be a locally compact group and let \(X\) be an MHP \(G\)-flow. Then the stabilizer map \(X\to\operatorname{Sub}(G)\), \(x\mapsto G_{x}\), is continuous._ As mentioned above, when \(G\) is a discrete group, \(X\) is MHP if and only if \(X\) is extremally disconnected. 
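A simple example to keep in mind: let \(G=\mathbf{Z}\) (discrete) act by translations on the one-point compactification \(X=\mathbf{Z}\cup\{\infty\}\), fixing \(\infty\). Then \(G_{n}=\{0\}\) for every \(n\in\mathbf{Z}\) while \(G_{\infty}=\mathbf{Z}\), so the stabilizer map is not continuous at \(\infty\). Accordingly, \(X\) is not extremally disconnected (the closure of an infinite, co-infinite subset of \(\mathbf{Z}\) is not open in \(X\)), hence not MHP, and Theorem 1.1 applies not to \(X\) itself but to its universal highly proximal extension.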
For discrete groups, Theorem 1.1 is equivalent to saying that the set of fixed points in \(X\) of every element \(g\in G\) is an open subset of \(X\). This is a theorem of Frolik [F]. Another special case of Theorem 1.1 is a well-known theorem of Veech that the action of a locally compact group on its greatest ambit \(\operatorname{Sa}(G)\) is free. One can apply Theorem 1.1 because the greatest ambit is an MHP flow and the free left translation action \(G\curvearrowright G\) embeds into it densely (cf. Corollary 5.8). A relativized version of Veech's theorem was considered by Matte Bon and Tsankov in [MBT], where it was proved that the stabilizer map for the flow \(\operatorname{Sa}(G/H)\), where \(H\) is a closed subgroup of \(G\), is continuous. This is again a special case of Theorem 1.1 because the flow \(\operatorname{Sa}(G/H)\) is also MHP [22]. As Theorem 1.1 is a common generalization of Frolik's and Veech's theorems, it is perhaps not surprising that its proof mixes ideas from the proofs of both. We also rely on the topometric structure on MHP flows introduced by Zucker [22] (extending a construction of [BMT] for \(\operatorname{Sa}(G)\)), which, while being rather simple for locally compact groups, is still useful for us. ### Freeness vs topological freeness Recall that \(G\curvearrowright X\) is _free_ if \(G_{x}\) is trivial for every \(x\in X\), and \(G\curvearrowright X\) is called _topologically free_ if for every compact \(K\subseteq G\) with \(1_{G}\notin K\), the closed set \(\{x\in X:x\in K\cdot x\}\) has empty interior. (When \(G\) is second countable, topological freeness is equivalent to saying that there is a dense set of points \(x\in X\) such that \(G_{x}\) is trivial.) The difference between freeness and topological freeness is detected by the failure of continuity of the stabilizer map: a topologically free action is free if and only if the stabilizer map is continuous. Also the property of being topologically free is invariant under highly proximal equivalence. Hence the following is a consequence of Theorem 1.1. **Corollary 1.2**.: _Let \(G\curvearrowright X\) be a \(G\)-flow. Then the following are equivalent:_ 1. \(X\) _is topologically free;_ 2. \(\hat{X}_{G}\) _is free._ _In particular, an MHP flow is topologically free if and only if it is free._ This has the following application. Recall that a \(G\)-flow is called _strongly proximal_ if the closure of the \(G\)-orbit of every Borel probability measure on \(X\) contains a Dirac measure. The flow \(X\) is called a _boundary_ if \(X\) is minimal and strongly proximal. Every group \(G\) admits a boundary \(\partial_{F}G\), unique up to isomorphism, such that every boundary is a factor of \(\partial_{F}G\) [22, §III]. It is called the _Furstenberg boundary_ of \(G\). By [21, Lemma 5.2] and [22, Lemma 4.1] the flow \(\partial_{F}G\) is MHP. **Corollary 1.3**.: _For every locally compact group \(G\), the stabilizer map is continuous on \(\partial_{F}G\). In particular the following are equivalent:_ 1. \(G\) _admits a topologically free boundary;_ 2. \(G\) _acts freely on_ \(\partial_{F}G\)_._ Proof.: The first assertion follows from the fact that \(\partial_{F}G\) is MHP and Theorem 1.1. For the second assertion, if \(G\) admits a topologically free boundary \(G\curvearrowright X\), then the action \(G\curvearrowright\partial_{F}G\) is also topologically free since there is a factor map \(\partial_{F}G\to X\). Since \(\partial_{F}G\) is MHP, Corollary 1.2 implies that \(G\curvearrowright\partial_{F}G\) is free.
The other direction is clear. When \(G\) is a discrete group, the equivalence in Corollary 1.3 was already known as it follows from [F]. Whether this property holds true in a given group \(G\) was recently shown to be equivalent to the simplicity of the reduced C\({}^{*}\)-algebra of \(G\) [KK]. ### Stabilizer flows Theorem 1.1 has interest beyond the case of topologically free actions. Recall that a _uniformly recurrent subgroup (URS)_ of a locally compact group \(G\) is a minimal closed, \(G\)-invariant subset of \(\operatorname{Sub}(G)\) [GW]. Every minimal \(G\)-flow \(X\) gives rise to a URS of \(G\), called the _stabilizer URS_ associated to \(X\), defined as the unique minimal closed \(G\)-invariant subset of the closure of the image of the stabilizer map in \(\operatorname{Sub}(G)\) (Glasner-Weiss [GW]). Theorem 1.1 allows us to associate a _stabilizer flow_ to any \(G\)-flow \(X\), without a minimality assumption: we consider the MHP flow \(\hat{X}_{G}\), and simply take the image of \(\hat{X}_{G}\) in \(\operatorname{Sub}(G)\) by the stabilizer map (cf. Definition 5.1). In Section 5, we prove some basic properties of the stabilizer flow. We show in particular that when \(X\) is minimal, the stabilizer flow and the stabilizer URS are equal. Hence in that situation the stabilizer flow is an alternative description of the stabilizer URS. **Corollary 1.4**.: _Let \(X\) be a minimal \(G\)-flow. Then the stabilizer URS of \(X\) is equal to \(\{G_{z}:z\in\hat{X}_{G}\}\)._ **Acknowledgments**.: We thank Nicolas Matte Bon for interesting discussions about this work. ## 2. The universal highly proximal extension of a \(G\)-flow In this section, we give a new construction of the universal highly proximal extension of a \(G\)-flow \(G\curvearrowright X\), where \(G\) is an arbitrary topological group. The existence of such an extension was proved by Auslander and Glasner [1] for minimal flows using an abstract argument, and a construction without a minimality assumption, in terms of near-ultrafilters, was given by Zucker [1] for Polish groups and by Basso and Zucker [2] for arbitrary topological groups. Our construction is in some sense dual to theirs: instead of constructing the points of \(\hat{X}_{G}\) directly, we describe the lattice of continuous functions \(C(\hat{X}_{G})\) and use an appropriate duality theorem to recover the space. ### The non-archimedean case A Boolean algebra is called _complete_ if it admits suprema (and infima) of arbitrary subsets. A Boolean algebra \(\mathcal{B}\) is complete iff its Stone space \(\mathsf{S}(\mathcal{B})\) is _extremally disconnected_, i.e., for every open \(U\subseteq\mathsf{S}(\mathcal{B})\), the set \(\overline{U}\) is also open. If \(\{A_{i}\}_{i\in I}\) is a family of clopen sets in \(\mathsf{S}(\mathcal{B})\), their supremum in \(\mathcal{B}\) is the clopen set \(\overline{\bigcup_{i}A_{i}}\). An open subset \(U\) of a topological space \(X\) is called _regular_ if \(U=\operatorname{Int}\overline{U}\). The collection \(\operatorname{RO}(X)\) of regular open subsets of \(X\) forms a complete Boolean algebra with join given by \(U\vee V=\operatorname{Int}\overline{U\cup V}\) and complement given by \(\neg U=\operatorname{Int}(X\setminus U)\). If \(X\) is Baire, \(\operatorname{RO}(X)\) can also be viewed as the quotient of the Boolean algebra of Baire measurable subsets of \(X\) by the ideal of meager sets. See [1, Section 8]. We denote by \(\hat{X}\) the Stone space of the algebra \(\operatorname{RO}(X)\).
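To illustrate these operations, take \(X=[0,1]\) and \(V=[0,\frac{1}{2})\), which is regular open in \(X\). Then \(\neg V=\operatorname{Int}([\frac{1}{2},1])=(\frac{1}{2},1]\), and the join \(V\vee\neg V=\operatorname{Int}\overline{V\cup\neg V}=\operatorname{Int}([0,1])=X\), even though the set-theoretic union \(V\cup\neg V=X\setminus\{\frac{1}{2}\}\) is not regular open. This is why the join in \(\operatorname{RO}(X)\) is \(\operatorname{Int}\overline{U\cup V}\) rather than the plain union.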
If \(X\) is compact, there is a natural surjective, continuous map \(\ell_{X}\colon\hat{X}\to X\) given by \[\{\ell_{X}(p)\}=\bigcap_{U\in p}\overline{U},\] where \(p\) is viewed as an ultrafilter on \(\operatorname{RO}(X)\). The construction \(X\mapsto\hat{X}\) only depends on the topology of \(X\), so if \(G\) is a group acting on \(X\) by homeomorphisms, it also acts on \(\hat{X}\). If \(G\) is a discrete group and \(G\curvearrowright X\) is a \(G\)-flow, then \(G\curvearrowright\hat{X}\) is also a \(G\)-flow and it is the universal highly proximal extension of \(G\curvearrowright X\). This follows from the results of Gleason [1]. The problem when \(G\) has non-trivial topology is that the action \(G\curvearrowright\hat{X}\) is not necessarily continuous even if the original action of \(G\) on \(X\) is. In the case where \(G\) is non-archimedean, this is easy to fix. Recall that a topological group \(G\) is called _non-archimedean_ if it admits a basis at \(1_{G}\) consisting of open subgroups. For locally compact groups, by a well known theorem of van Dantzig, being non-archimedean is equivalent to being totally disconnected (or _tdlc_, for short). If \(\mathcal{B}\) is a Boolean algebra on which \(G\) acts and \(V\leq G\), we will denote by \(\mathcal{B}_{V}\) the subalgebra of \(\mathcal{B}\) of elements fixed by \(V\). Note that if \(\mathcal{B}\) is complete, then \(\mathcal{B}_{V}\) is complete, too. If \(X\) is a \(G\)-flow, we let \[\operatorname{RO}(G,X)\coloneqq\bigcup\{\operatorname{RO}_{V}(X):V\text{ open subgroup of }G\}\] and note that, as a direct limit of Boolean algebras, \(\operatorname{RO}(G,X)\) is also a Boolean algebra but that it is not necessarily complete. Note also that \(\operatorname{RO}(G,X)\) is invariant under the action of \(G\) and that the action \(G\curvearrowright\operatorname{RO}(G,X)\) is continuous (where \(\operatorname{RO}(G,X)\) is taken to be discrete). **Lemma 2.1**.: _Let \(G\) be a non-archimedean group and let \(G\curvearrowright X\) be a \(G\)-flow. Then the elements of \(\operatorname{RO}(G,X)\) form a basis for the topology of \(X\)._ Proof.: By regularity of \(X\), it suffices to see that for every \(x\in U\in\operatorname{RO}(X)\) there exists \(U^{\prime}\in\operatorname{RO}(G,X)\) such that \(x\in U^{\prime}\subseteq U\). By continuity of the action, there exists an open subgroup \(V\) of \(G\) and an open subset \(U_{1}\subseteq U\) with \(x\in U_{1}\) such that \(VU_{1}\subseteq U\). Then \(U^{\prime}=\operatorname{Int}\overline{VU_{1}}\) works, because \(U^{\prime}\) is \(V\)-invariant and \(U^{\prime}\subseteq\operatorname{Int}\overline{U}=U\), since \(U\) is regular. We denote by \(X_{G}^{*}\) the Stone space of \(\operatorname{RO}(G,X)\). The action of \(G\) on \(X_{G}^{*}\) is continuous. Note that being the Stone space of a Boolean algebra, \(X_{G}^{*}\) is zero-dimensional. **Proposition 2.2**.: _Let \(G\) be a non-archimedean group and let \(G\curvearrowright X\) be a \(G\)-flow. Then \(G\curvearrowright X_{G}^{*}\) is the universal highly proximal extension of \(X\)._ Proof.: We denote by \(\pi\colon\hat{X}\to X_{G}^{*}\) the dual map of the inclusion \(\operatorname{RO}(G,X)\subseteq\operatorname{RO}(X)\) and note that \(\pi\) is continuous and \(G\)-equivariant. By Lemma 2.1, if two elements of \(\hat{X}\) have the same image by \(\pi\), then they have the same image under the map \(\ell\colon\hat{X}\to X\). 
Hence there is a continuous \(G\)-equivariant map \(\ell_{G}\colon X_{G}^{*}\to X\) such that \(\ell_{G}\circ\pi=\ell\). The map \(\ell_{G}\colon X_{G}^{*}\to X\) is irreducible because \(\ell\) is. If \(Y\to X\) is a highly proximal extension of \(X\), then \(\hat{Y}=\hat{X}\). Thus \(\operatorname{RO}(X)=\operatorname{RO}(Y)\) and \(\operatorname{RO}(G,X)=\operatorname{RO}(G,Y)\). In particular, \(Y\) is a factor of \(X_{G}^{*}=Y_{G}^{*}\). By continuity of the \(G\)-action on \(X\), we have \(\operatorname{Clopen}(X)\subseteq\operatorname{RO}(G,X)\), where \(\operatorname{Clopen}(X)\) is the subalgebra of \(\operatorname{RO}(X)\) consisting of clopen subsets of \(X\). That this inclusion is an equality actually characterizes MHP flows for non-archimedean groups. **Corollary 2.3**.: _Let \(G\) be a non-archimedean group and let \(G\curvearrowright X\) be a \(G\)-flow. Then the following are equivalent:_ 1. \(X\) _is_ MHP_;_ 2. \(\operatorname{RO}(G,X)=\operatorname{Clopen}(X)\)_._ Proof.: (i) \(\Rightarrow\) (ii) follows from Proposition 2.2. Note that (ii) implies that \(X\) is zero-dimensional in view of Lemma 2.1, so the implication (ii) \(\Rightarrow\) (i) also follows from Proposition 2.2. ### The general case When \(G\) is a general topological group, one cannot hope to construct the universal highly proximal extension as the Stone space of a Boolean algebra: for example, if \(G\) is connected, then all of its minimal flows are connected and have no non-trivial clopen sets. So for the general case, we employ Riesz spaces instead of Boolean algebras. Recall that a _Riesz space_ is an ordered real vector space, which is a _lattice_ for the ordering, i.e., all pairs of elements \(a,b\) have a least upper bound \(a\lor b\) and a greatest lower bound \(a\wedge b\). A Riesz space \(\mathcal{L}\) is called _archimedean_ if there exists a _unit_\(\mathbf{1}\in\mathcal{L}\) such that for every \(a\in\mathcal{L}\), there exists \(n\in\mathbf{N}\) with \(a\leq n\mathbf{1}\). A unit also naturally defines the _uniform norm_: \[\|a\|:=\inf\{r\in\mathbf{R}:|a|\leq r\mathbf{1}\},\] where, as usual, \(|a|=a\vee(-a)\). A natural example of an archimedean Riesz space is the collection of real-valued continuous functions \(C(X)\) on a compact space \(X\) with the usual lattice operations and unit the constant function \(\mathbf{1}\). Then the uniform norm coincides with the sup norm. The Yosida representation theorem, which we recall below, states that in fact every archimedean Riesz space complete in the uniform norm is of this form. For every archimedean Riesz space \(\mathcal{L}\) with a unit \(\mathbf{1}\) (and equipped with the uniform norm), we can consider its _spectrum_: \[\mathsf{S}(\mathcal{L})=\{x\in\mathcal{L}^{*}:x(a\lor b)=x(a)\lor x(b)\text{ for all }a,b\in\mathcal{L}\text{ and }x(\mathbf{1})=1\}.\] \(\mathsf{S}(\mathcal{L})\) is a compact space if equipped with the weak\({}^{*}\) topology and we have a map \(\Gamma\colon\mathcal{L}\to C(\mathsf{S}(\mathcal{L}))\) defined by \[\Gamma(a)(x)=x(a).\] \(\Gamma\) is clearly a contractive homomorphism and in fact it is an isometric isomorphism (see [1]vR, Section 13]). The algebra \(B(X)\) has also been considered before in a dynamical context by Keynes and Robertson in [13]. Let \(G\) be a topological group, let \(E\) be a Banach space and let \(G\curvearrowright E\) be an action by isometric isomorphisms. We will say that an element \(\phi\in E\) is \(G\)-_continuous_ if the map \(G\to E,g\mapsto g\cdot\phi\) is norm-continuous. 
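For example, for the translation action of \(G\) on the Banach space of bounded continuous functions on \(G\) (with the sup norm), the \(G\)-continuous elements are exactly the bounded functions that are uniformly continuous for the appropriate (left or right, depending on conventions) uniformity on \(G\); the spectrum of this algebra is the greatest ambit \(\operatorname{Sa}(G)\) mentioned in the introduction.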
**Lemma 2.4**.: _Let \(G\) be a topological group, let \(X\) be a compact space and let \(G\curvearrowright X\) be an action by homeomorphisms. Then the following are equivalent:_ 1. \(G\curvearrowright X\) _is a_ \(G\)_-flow (that is, the action is jointly continuous);_ 2. _Every function_ \(\phi\in C(X)\) _is_ \(G\)_-continuous for the induced action_ \(G\curvearrowright C(X)\)_._ Proof.: (i) \(\Rightarrow\) (ii). This is obvious. (ii) \(\Rightarrow\) (i). Let \(U\subseteq X\) be open and let \(x_{0}\in U\). Our goal is to find an open \(V\ni 1_{G}\) and an open \(W\ni x_{0}\) such that \(V\cdot W\subseteq U\). Let \(W\ni x_{0}\) be open such that \(\overline{W}\subseteq U\). By Urysohn's lemma, there exists \(\phi\in C(X)\) with \(\phi|_{\overline{W}}=1\) and \(\phi|_{X\setminus U}=0\). As \(\phi\) is \(G\)-continuous, there exists \(V\ni 1_{G}\) such that for every \(v\in V\), \(\|v^{-1}\cdot\phi-\phi\|<1/2\). This implies that \(V\cdot W\subseteq U\). Next we will describe the universal highly proximal extension of a \(G\)-flow \(G\curvearrowright X\). Let \(\mathcal{B}(X)\) denote the Riesz space of bounded Borel functions on \(X\) with unit the constant function \(\mathbf{1}\) and let \(\mathcal{M}\) be the ideal given by: \[\mathcal{M}=\big{\{}\phi\in\mathcal{B}(X):\{x\in X:\phi(x)\neq 0\}\text{ is meager}\big{\}}.\] Denote \(B(X)\coloneqq\mathcal{B}(X)/\mathcal{M}\) and let \(\|\cdot\|_{\mathcal{M}}\) be the seminorm on \(\mathcal{B}(X)\), which is the pullback of the uniform norm on \(B(X)\). Equivalently, \(\|\cdot\|_{\mathcal{M}}\) is the essential supremum seminorm defined by \[\|\phi\|_{\mathcal{M}}=\inf\bigl{\{}r\in\mathbf{R}:\{x\in X:|\phi(x)|>r\} \text{ is meager}\big{\}}.\] Let \(\hat{X}\) be the spectrum of \(B(X)\). It can naturally be identified with the Stone space of the Boolean algebra \(\operatorname{RO}(X)\) (see [1]vR, Section 14]). We let \(B_{G}(X)\) denote the set of \(G\)-continuous elements of \(B(X)\). We note that \(B_{G}(X)\) is a closed subspace of \(B(X)\) which is also closed under the lattice operations, so we can define \(\hat{X}_{G}\coloneqq\mathsf{S}(B_{G}(X))\). It follows from Lemma 2.4 that \(G\curvearrowright\hat{X}_{G}\) is a \(G\)-flow. We have the following. **Proposition 2.5**.: _Let \(G\) be a topological group and let \(G\curvearrowright X\) be a \(G\)-flow. Then the flow \(G\curvearrowright\hat{X}_{G}\) is the universal highly proximal extension of \(G\curvearrowright X\). In particular \(X\) is MHP if and only if the natural injection \(C(X)\to B_{G}(X)\) is a bijection._ Proof.: First, as \(\hat{X}_{G}\) is a factor of \(\hat{X}\), it is clear that the extension \(\hat{X}_{G}\to X\) is highly proximal. If \(G\curvearrowright Y\) is a highly proximal extension of \(G\curvearrowright X\), then by the universal property of \(\hat{X}\), there exists an embedding \(\iota\colon C(Y)\to C(\hat{X})=B(X)\) (see above). It follows from Lemma 2.4 that every \(\phi\in C(Y)\) is \(G\)-continuous, so \(\iota(\phi)\) is also \(G\)-continuous. Therefore \(\iota(C(Y))\subseteq B_{G}(X)\) and this gives a factor map \(\hat{X}_{G}\to Y\). It is proved in [AG] that for minimal flows \(X\), the correspondence \(X\mapsto\hat{X}_{G}\) is functorial. Our description of \(\hat{X}_{G}\) suggests the correct formulation of this result for general flows. Recall that a continuous map \(\phi\colon X\to Y\) is called _category-preserving_ if \(\phi^{-1}(A)\) is nowhere dense for any nowhere dense \(A\subseteq Y\). 
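For instance, if \(X\) has no isolated points and \(\phi\colon X\to X\) is constant with value \(x_{0}\), then \(\{x_{0}\}\) is nowhere dense while \(\phi^{-1}(\{x_{0}\})=X\) is not, so \(\phi\) is not category-preserving. On the other hand, every irreducible map is category-preserving, since preimages of dense open sets under an irreducible map are dense.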
Every homomorphism between minimal flows is category-preserving. **Proposition 2.6**.: _The correspondence \(X\mapsto\hat{X}_{G}\) is a functor from the category of \(G\)-flows with morphisms category-preserving \(G\)-maps to the category of MHP flows._ Proof.: Let \(\phi\colon X\to Y\) be a category-preserving homomorphism of \(G\)-flows. The map \(\phi\) preserves the ideal of meager sets, so we obtain a dual homomorphism of Riesz spaces \(\phi^{*}\colon B(Y)\to B(X)\) given by \(\phi^{*}([f])=[f\circ\phi]\), where \(f\in\mathcal{B}(Y)\) and \([f]\) denotes its equivalence class in \(B(Y)\). The image of \(B_{G}(Y)\) is contained in \(B_{G}(X)\), so by the duality theorem, this gives us a map \(\hat{X}_{G}\to\hat{Y}_{G}\). ## 3. Characterizations of MHP flows Starting from this section, \(G\) will denote a locally compact group. A _pseudo-norm_ on \(G\) is a continuous function \(\left\|\cdot\right\|\colon G\to\mathbf{R}_{+}\) satisfying: * \(\left\|1_{G}\right\|=0\); * \(\left\|gh\right\|\leq\left\|g\right\|+\left\|h\right\|\) for all \(g,h\in G\). \(\left\|\cdot\right\|\) is a _norm_ if it moreover satisfies that the only element \(g\) with \(\left\|g\right\|=0\) is \(1_{G}\). A norm is called _compatible_ if it induces the topology of \(G\). Every pseudo-norm induces a right-invariant pseudo-metric \(d_{r}\) on \(G\) defined by \[d_{r}(g,h)=\left\|gh^{-1}\right\|. \tag{3.1}\] Let \(\left\|\cdot\right\|\) be some fixed pseudo-norm on \(G\). We denote by \(B_{r}\) the set of elements \(g\in G\) such that \(\left\|g\right\|<r\). If \(G\curvearrowright X\) is a \(G\)-flow, we can define a pseudo-metric \(\partial\) on \(X\) by \[\partial(x,y)=\inf\{\left\|g\right\|:g\in G,g\cdot x=y\}. \tag{3.2}\] If \(x\) and \(y\) are not in the same orbit, then \(\partial(x,y)=\infty\). Note that \(\partial\) is always lower semi-continuous for the compact topology \(\tau\) on \(X\). Recall that a real-valued function \(f\) is called _lower semi-continuous (lsc)_ if for every real number \(r\) the set \(\{f>r\}\) is open. It is _upper semi-continuous (usc)_ if \(\{f<r\}\) is open. When \(G\) is metrizable, we can work throughout with a fixed compatible norm on \(G\), and \(\partial\) is a metric on \(X\) that refines the topology \(\tau\), i.e., \((X,\tau,\partial)\) is a _compact topometric space_ in the sense of [B]. In general, one can work with a _topouniform spaces_ as is done in [BZ], but we will not need this here. In the case where \(G\) is Polish, locally compact, the topometric space above is the same as the one considered by Zucker [Z2]. The following characterization of MHP flows is the main theorem of this section. **Theorem 3.1**.: _Let \(G\) be locally compact and let \(G\curvearrowright X\) be a \(G\)-flow. Then the following are equivalent:_ 1. \(X\) _is MHP;_ 2. \(V\overline{U}\) _is open for every open neighborhood_ \(V\) _of_ \(1_{G}\) _and open subset_ \(U\) _of_ \(X\)_._ 3. _for every continuous pseudo-norm on_ \(G\)_, and every open subset_ \(U\) _of_ \(X\)_, the function_ \(X\to\mathbf{R}\cup\{\infty\}\)_,_ \(x\mapsto\partial(x,\overline{U})\)_, is continuous._ _When \(G\) is a tdlc group, these are also equivalent to:_ 1. \(\operatorname{RO}(G,X)=\operatorname{Clopen}(X)\)_;_ 2. \(X\) _is zero-dimensional and for every compact open subgroup_ \(V\) _of_ \(G\)_, the Boolean algebra_ \(\operatorname{Clopen}_{V}(X)\) _is complete._ Before going further, we make a few comments. 
The equivalence between (i) and (ii) is already contained in [Z2] (up to the observation that when \(G\) is locally compact, Definition 3.1 from [Z2] can be restated as in (ii)). Here we provide an alternative proof of that equivalence. The proof of (ii) \(\Rightarrow\) (i) follows arguments close to [G3], while the proof of the converse (which goes through (iii)) uses the characterization of MHP flows given in Proposition 2.5. We need some preliminaries. **Lemma 3.2**.: _Let \(G\curvearrowright X\) be a \(G\)-flow and let \(\partial\) be defined as above. Then the following hold:_ 1. _If_ \(F\subseteq X\) _is closed, the function_ \(x\mapsto\partial(x,F)\) _is lsc._ 2. _If_ \(U\subseteq X\) _is open, the function_ \(x\mapsto\partial(x,U)\) _is usc._ Proof.: (i) Let \(A=\{x:\partial(x,F)\leq r\}\). Let \(x\) be a limit point of \(A\) and let \((x_{i},\epsilon_{i})_{i}\) be a net in \(A\times\mathbf{R}^{+}\) converging to \((x,0)\). Let \(y_{i}\in F\) be such that \(\partial(x_{i},y_{i})<r+\epsilon_{i}\). By passing to a subnet, we may assume that \(y_{i}\to y\in F\). Then taking limits and using the fact that \(\partial\) is lsc, we obtain that \(\partial(x,y)\leq r\). (ii) Let \(r>0\), and let \(V\) be the open ball around \(1_{G}\) of radius \(r\). Then \[\partial(x,U)<r\iff V\cdot x\cap U\neq\emptyset,\] which is an open condition. _Remark 3.3_.: In fact, Lemma 3.2 does not need \(G\) to be locally compact (with the appropriate definition of \(\partial\) in the general case, see [Z2]). The proof of (i) works as above and (ii) is [Z2, Theorem 4.8] and it is harder. **Lemma 3.4**.: _Let \(X\) be a \(G\)-flow. Then \(X\) satisfies condition (ii) of Theorem 3.1 if and only if for every open neighborhood \(V\ni 1_{G}\) and open subset \(U\subseteq X\), there exists an open neighborhood \(V^{\prime}\ni 1_{G}\) with \(V^{\prime}\subseteq V\) such that \(V^{\prime}\overline{U}\) is open._ Proof.: We only have to prove the implication from right to left. Suppose that the property in the statement holds, and let \(U\) be an open subset of \(X\) and \(V\) an open neighborhood of \(1_{G}\). For every \(g\in V\) one can find an open neighborhood \(V^{\prime}_{g}\) of \(1_{G}\) such that \(gV^{\prime}_{g}\) is contained in \(V\) and \(V^{\prime}_{g}\overline{U}\) is open. Writing \(V=\bigcup_{g\in V}gV^{\prime}_{g}\), we then have \(V\overline{U}=\bigcup_{g\in V}gV^{\prime}_{g}\overline{U}\), which is thus open. **Lemma 3.5**.: _Let \(X\) be a \(G\)-flow that satisfies condition (ii) of Theorem 3.1. Then for all open subsets \(U_{1},U_{2}\subseteq X\), we have \(\overline{U_{1}}\cap\overline{U_{2}}\neq\emptyset\) if and only if \(VU_{1}\cap U_{2}\neq\emptyset\) for every open neighborhood \(V\ni 1_{G}\)._ Proof.: Suppose \(\overline{U_{1}}\cap\overline{U_{2}}\neq\emptyset\), and let \(V\) be an open neighborhood of \(1_{G}\). Then clearly \(V\overline{U_{1}}\cap\overline{U_{2}}\neq\emptyset\). Since \(V\overline{U_{1}}\) is open, this implies that \(V\overline{U_{1}}\cap U_{2}\neq\emptyset\). That condition is equivalent to \(\overline{U_{1}}\cap V^{-1}U_{2}\neq\emptyset\) and hence implies that \(U_{1}\cap V^{-1}U_{2}\neq\emptyset\). So \(VU_{1}\cap U_{2}\neq\emptyset\), as desired. The reverse implication is a general fact that follows from continuity of the \(G\)-action. Recall that a subalgebra \(A\) of a Boolean algebra \(B\) is _dense_ if for every non-zero element in \(B\) there is a non-zero element in \(A\) that is smaller. We recall the following (see [1, Theorem 4.19]).
**Lemma 3.6**.: _Let \(A\) be a dense subalgebra of a Boolean algebra \(B\). If \(A\) is complete, then \(A=B\)._ Before starting the proof, we also introduce some notation. If \(\pi\colon Y\to X\) is a continuous map between topological spaces and \(U\subseteq Y\) is open, we denote by \(\pi_{*}(U)\) the _fiber image_ of \(U\): \[\pi_{*}(U)\coloneqq\{x\in X:\pi^{-1}(x)\subseteq U\}.\] The set \(\pi_{*}(U)\) is always open and if \(\pi\) is irreducible, it is non-empty for any non-empty \(U\). Proof of Theorem 3.1.: (ii) \(\Rightarrow\) (i). Let \(\pi\colon Y\to X\) be a highly proximal extension. We shall prove that \(\pi\) is injective. Suppose for a contradiction that there exist distinct points \(y_{1},y_{2}\) in \(Y\) with the same image \(x\) in \(X\). Then one can find an open \(V\ni 1_{G}\) and open subsets \(O_{1},O_{2}\subseteq Y\) such that \(y_{1}\in O_{1}\), \(y_{2}\in O_{2}\) and \(VO_{1}\cap O_{2}=\emptyset\). The irreducibility of \(\pi\) implies that \(\pi(O)\subseteq\overline{\pi_{*}(O)}\) for any open \(O\subseteq Y\). Indeed, if not, there is \(y\in O\) and an open \(W\ni\pi(y)\) disjoint from \(\pi_{*}(O)\). By irreducibility, \(\pi^{-1}(W)\cap O\) contains a fiber, whose image must be in \(\pi_{*}(O)\), contradiction. Thus the sets \(\overline{\pi_{*}(O_{1})}\) and \(\overline{\pi_{*}(O_{2})}\) both contain \(x\). Hence by the assumption (ii) and Lemma 3.5, we have \(V\pi_{*}(O_{1})\cap\pi_{*}(O_{2})\neq\emptyset\). Since \(V\pi_{*}(O_{1})=\pi_{*}(VO_{1})\), we deduce that \(VO_{1}\) and \(O_{2}\) intersect each other, which is a contradiction. (iii) \(\Rightarrow\) (ii). Let \(V\) be an open neighborhood of \(1_{G}\). We can always find a continuous pseudo-norm on \(G\) such that \(B_{1/2}\) is contained in \(V\)[1, Theorem 8.2]. If \(\partial\) is the pseudo-metric on \(X\) associated to this pseudo-norm, by assumption, the function \(f(x)\coloneqq\partial(x,\overline{U})\) is continuous. So \(B_{1/2}\overline{U}=\{f<1/2\}\) is open. Since \(B_{1/2}\subseteq V\) and \(V\) was arbitrary, Lemma 3.4 ensures that (ii) holds. (i) \(\Rightarrow\) (iii). Let \(\phi_{0}(x)=\partial(x,\overline{U})\) and \(\phi_{1}(x)=\partial(x,U)\). We have that \(\phi_{0}\) is lsc and \(\phi_{1}\) is usc by Lemma 3.2. Moreover \(\phi_{0}\leq\phi_{1}\), and both \(\phi_{0}\) and \(\phi_{1}\) are \(\partial\)-contractive. First we show that the set \(\{\phi_{0}<\phi_{1}\}\) is meager. Note that \[\{\phi_{0}<\phi_{1}\}=\bigcup_{q_{1}<q_{2}\in\mathbf{Q}}\{\phi_{0}\leq q_{1}< q_{2}\leq\phi_{1}\}\] and each set in the union is closed. So if \(\{\phi_{0}<\phi_{1}\}\) is non-meager, there exist \(q_{1}<q_{2}\) such that \(\{\phi_{0}\leq q_{1}<q_{2}\leq\phi_{1}\}\) has non-empty interior \(W\). The set \(\{x:\partial(x,W)<q_{2}\}\) is open and intersects \(\overline{U}\), so it must intersect \(U\). So there exist \(x\in U,y\in W\) with \(\partial(x,y)<q_{2}\), which contradicts the definition of \(W\). Now for \(r>0\), set \(\phi_{0,r}=\min(\phi_{0},r)\) and \(\phi_{1,r}=\min(\phi_{1},r)\). The functions \(\phi_{0,r},\phi_{1,r}\) are bounded and remain \(\partial\)-contractive (hence \(G\)-continuous). As \(X\) is MHP, by Proposition 2.5, there exists a continuous function \(\theta\) on \(X\) such that \(\phi_{0,r}=\phi_{1,r}=\theta\) on a comeager set. As the sets \(\{\theta<\phi_{0,r}\}\) and \(\{\theta>\phi_{1,r}\}\) are open, they must be empty, and we must have that \(\phi_{0,r}\leq\theta\leq\phi_{1,r}\). We claim that \(\theta\) is \(\partial\)-contractive. 
If not, there exist \(x\in X\) and \(g\in G\) such that \(|\theta(x)-\theta(g\cdot x)|>\|g\|\). However, the set \(\{x:|\theta(x)-\theta(g\cdot x)|>\|g\|\}\) is open, so as \(\theta=\phi_{0,r}\) on a comeager set, there exists \(x\) such that \(|\phi_{0,r}(x)-\phi_{0,r}(g\cdot x)|>\|g\|\), contradiction. Note that \(\theta^{-1}(0)\supseteq U\), so by continuity, \(\theta^{-1}(0)\supseteq\overline{U}\). As \(\theta\) is \(\partial\)-contractive and \(\theta=0\) on \(\overline{U}\), for every \(x\in X\), we have \[\theta(x)\leq\inf_{y\in\overline{U}}\partial(x,y)=\partial(x,\overline{U})= \phi_{0}(x).\] Since \(\theta\leq r\), this shows that \(\theta\leq\phi_{0,r}\), and hence \(\theta=\phi_{0,r}\). So \(\phi_{0,r}\) is continuous. Since \(r\) is arbitrary, it follows that \(\phi_{0}\) is continuous. We now assume \(G\) is a tdlc group. The equivalence between (i) and (iv) follows from Corollary 2.3. Recall in particular that these imply that \(X\) is zero-dimensional. Hence the fact that (iv) implies (v) is clear since \(\operatorname{RO}_{V}(X)\) is always complete. It remains to see that (v) implies (iv). To that end, let \(V\) be a compact open subgroup of \(G\). We want to see that \(\operatorname{RO}_{V}(X)=\operatorname{Clopen}_{V}(X)\). We claim that \(\operatorname{Clopen}_{V}(X)\) is a dense subalgebra of \(\operatorname{RO}_{V}(X)\). Indeed, if \(U\) is a non-empty element of \(\operatorname{RO}_{V}(X)\), then we can find a non-empty clopen subset \(U_{1}\) inside \(U\) since \(X\) is zero-dimensional. Since \(V\) is compact and open, the stabilizer of \(U_{1}\) has finite index in \(V\), so that \(VU_{1}\) is a union of finitely many clopen subsets, and hence is clopen. Moreover \(VU_{1}\subseteq U\) since \(U\) is \(V\)-invariant. Hence \(\operatorname{Clopen}_{V}(X)\) is dense in \(\operatorname{RO}_{V}(G,X)\). Since we make the assumption that \(\operatorname{Clopen}_{V}(X)\) is complete, Lemma 3.6 implies that \(\operatorname{RO}_{V}(X)=\operatorname{Clopen}_{V}(X)\), as desired. Compare the next corollary with [BMT, Lemma 2.4]. **Corollary 3.7**.: _Let \(G\curvearrowright X\) be an MHP flow. Then for \(U_{1},U_{2}\subseteq X\) open,_ \[\partial(\overline{U_{1}},\overline{U_{2}})=\partial(U_{1},U_{2}).\] Proof.: Suppose that \(\partial(\overline{U_{1}},\overline{U_{2}})<r\). Consider the set \(\{x:\partial(x,\overline{U_{2}})<r\}\). By Theorem 3.1, it is open and it intersects \(\overline{U_{1}}\), so it intersects \(U_{1}\). Let \(W\subseteq\{x:\partial(x,\overline{U_{2}})<r\}\) be open, non-empty with \(\overline{W}\subseteq U_{1}\). Then by continuity of the function \(\partial(\cdot,\overline{W})\), there exists \(x\in U_{2}\) with \(\partial(x,\overline{W})<r\). So \(\partial(U_{1},U_{2})<r\). ## 4. Continuity of the stabilizer map Let \(Y\) be locally compact space and let \(2^{Y}\) denote the space of closed subsets of \(Y\). The _Chabauty topology_ on \(2^{Y}\) is given by the subbasis of sets of the form \[O_{K}=\{F\in 2^{Y}:F\cap K=\emptyset\}\quad\text{ and }\quad O^{U}=\{F\in 2^{Y}:F \cap U\neq\emptyset\}\] with \(K\subseteq Y\) compact and \(U\subseteq Y\) open. The space \(2^{Y}\) equipped with this topology is compact. A map \(\phi\colon X\to 2^{Y}\) is _upper semi-continuous_ if \(\phi^{-1}(O_{K})\) is open for every compact subset \(K\) of \(Y\) and it is _lower semi-continuous_ if \(\phi^{-1}(O^{U})\) is open for every open subset \(U\) of \(Y\). 
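To get a feeling for the Chabauty topology, note that in \(\operatorname{Sub}(\mathbf{R})\) one has \(n\mathbf{Z}\to\{0\}\) and \(\frac{1}{n}\mathbf{Z}\to\mathbf{R}\) as \(n\to\infty\): the first family eventually misses every compact set not containing \(0\), while the second eventually meets every non-empty open set.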
If \(G\) is a locally compact group, the set \(\operatorname{Sub}(G)\) of closed subgroups of \(G\) is closed in \(2^{G}\), and hence, a compact space. Moreover, the conjugation action of \(G\) on \(\operatorname{Sub}(G)\) is continuous. **Definition 4.1**.: Let \(X\) be a \(G\)-flow. For \(x\in X\), let \(G_{x}\) denote the stabilizer of \(x\). The map \(\operatorname{Stab}\colon X\to\operatorname{Sub}(G)\) defined by \(\operatorname{Stab}(x)=G_{x}\) is called the _stabilizer map_ associated to the flow \(X\). It is easy to see that for every \(G\)-flow, the stabilizer map is \(G\)-equivariant and upper semi-continuous (see, e.g., [GW]). It is also well-known that in general it is not continuous. The main theorem of the paper is the following. **Theorem 4.2**.: _Let \(G\) be a locally compact group and let \(X\) be an MHP \(G\)-flow. Then the stabilizer map \(X\to\operatorname{Sub}(G)\), \(x\mapsto G_{x}\) is continuous._ The remainder of this section is devoted to the proof of the theorem. Let \(\|\cdot\|\colon G\to\mathbf{R}_{+}\) be a pseudo-norm on \(G\). We recall that \(B_{r}\) denotes the set of elements \(g\in G\) such that \(\|g\|<r\); we also let \(\bar{B}_{r}\) be the set of elements \(g\in G\) such that \(\|g\|\leq r\). We say that \(\|\cdot\|\) is _proper_ if \(\bar{B}_{r}\) is compact for all \(r\). Let \(X\) be a \(G\)-flow. Recall that to every pseudo-norm on \(G\), we associated, by the equation (3.2), a pseudo-metric \(\partial\) on \(X\). We will say that a pseudo-norm \(\|\cdot\|\colon G\to\mathbf{R}_{+}\) is _normal_ if for every \(g\in G\), the conjugation by \(g\) is a uniformly continuous map of the pseudo-metric space \((G,d_{r})\), where \(d_{r}\) is the right-invariant metric on \(G\) associated to \(\|\cdot\|\) given by (3.1). In particular, the kernel \(\{g\in G:\|g\|=0\}\) of a normal pseudo-norm is a normal subgroup of \(G\). **Proposition 4.3**.: _Let \(G\) be a \(\sigma\)-compact locally compact group, and let \(V\) be an open neighborhood of \(1_{G}\). Then there exists a continuous, proper, normal pseudo-norm on \(G\) and \(r>0\) such that \(B_{r}\subseteq V\)._ Proof.: Choose and open neighborhood \(W\) of \(1_{G}\) such that \(W^{2}\subseteq V\). Since \(G\) is \(\sigma\)-compact, theorems of Kakutani-Kodaira ([HR, Theorem 8.7]) and Struble [S] ensure that there exists a compact normal subgroup \(K\) of \(G\) with \(K\subseteq W\) such that \(G/K\) admits a compatible and proper norm \(\|\cdot\|_{G/K}\). If we let \(\|g\|=\|gK\|_{G/K}\), then \(\|\cdot\|\) is a pseudo-norm on \(G\) that is continuous and proper. Moreover since the image of \(W\) in \(G/K\) is an open neighborhood of the identity in \(G/K\) and \(\|\cdot\|_{G/K}\) is compatible, there is \(r>0\) such that \(\|gK\|_{G/K}<r\) implies \(gK\in WK\). Hence \(\|g\|<r\) implies \(g\in V\). Normality is clear since \(\|\cdot\|_{G/K}\) induces the topology on \(G/K\). The following is the main lemma. **Lemma 4.4**.: _Let \(G\) be a locally compact and let \(\|\cdot\|\) be a continuous, proper, normal pseudo-norm on \(G\). Let \(X\) be an MHP \(G\)-flow, let \(g\in G\) and \(r>0\). Then there exist \(n\geq 1\) and a continuous function \(\phi\colon X\to\mathbb{R}^{n}\) such that for all \(x\in X\)_ \[\partial(g\cdot x,x)>r\implies\|\phi(g\cdot x)-\phi(x)\|_{\infty}\geq r/3.\] Proof.: Since \(\|\cdot\|\) is normal, \(g\) and \(g^{-1}\) are uniformly continuous as self-maps of \((X,\partial)\). 
So let \(\delta<r/3\) and \(\delta^{\prime}\) be such that \[\forall x,y\in X\quad\partial(x,y)<\delta\implies\partial(g\cdot x,g\cdot y)<r/3\] \[\forall x,y\in X\quad\partial(g\cdot x,g\cdot y)<\delta^{\prime} \implies\partial(x,y)<\delta.\] Since \(\|\cdot\|\) is continuous and proper, one can find \(g_{1},\dots,g_{\ell}\) such that \(\bar{B}_{2r}\) is contained in \(\bigcup_{i=1}^{\ell}g_{i}B_{\delta/2}\). By the pigeonhole principle, this implies that a ball of radius \(2r\) in \((X,\partial)\) cannot contain more than \(\ell\) points which are pairwise at least \(\delta\) apart. That is, for every \(x,x_{1},\dots,x_{\ell+1}\in X\) such that \(\partial(x,x_{i})\leq 2r\) for all \(i\), there are \(i\neq j\) such that \(\partial(x_{i},x_{j})<\delta\). Similarly, there is \(k\in\mathbb{N}\) such that for all \(x,x_{1},\dots,x_{k+1}\in X\) such that \(\partial(x,x_{i})\leq 2r\) for all \(i\) there are \(i\neq j\) such that \(\partial(x_{i},x_{j})<\delta^{\prime}\). Set \(n=k+\ell+1\). Set \(M_{r}=\{x:\partial(g\cdot x,x)>r\}\). We will construct open sets \(U_{1},\dots,U_{n}\subseteq X\) with the following properties: 1. the closure of \(\bigcup_{i}B_{\delta}U_{i}\) contains \(M_{r}\); 2. \(\partial(U_{i},U_{j})\geq\delta\) for \(i\neq j\); 3. \(\partial(g\cdot U_{i},U_{i})\geq r\) for all \(i\). Once the construction is completed, we finish the proof as follows. We set \[\phi_{i}(x)=\min(\partial(x,\overline{U_{i}}),r)\] and \(\phi=(\phi_{i})_{i}\). By Theorem 3.1, \(\phi\) is continuous. To see that \(\phi\) satisfies the conclusion, in view of (i) it is enough to see that \(\|\phi(g\cdot x)-\phi(x)\|_{\infty}\geq r/3\) for every \(x\) in \(\bigcup_{i}B_{\delta}U_{i}\). So let \(x\in B_{\delta}U_{i}\) and let \(y\in U_{i}\) be such that \(\partial(x,y)<\delta\). Then \(\partial(g\cdot x,g\cdot y)<r/3\) and using Corollary 3.7, we obtain \[\partial(g\cdot x,\overline{U_{i}}) \geq\partial(g\cdot y,\overline{U_{i}})-\partial(g\cdot y,g\cdot x)\] \[\geq\partial(g\cdot\overline{U_{i}},\overline{U_{i}})-r/3\] \[=\partial(g\cdot U_{i},U_{i})-r/3\geq 2r/3.\] So \[||\phi(g\cdot x)-\phi(x)||_{\infty}\geq\phi_{i}(g\cdot x)-\phi_{i}(x)\geq 2r/3- \delta\geq r/3\] and we are done. Now we proceed with the construction. Using Zorn's lemma, we find a maximal (under inclusion) tuple of open sets \((U_{i})\) satisfying (ii) and (iii) above. We will show that it must also satisfy (i). If not, there exists \(x_{0}\in M_{r}\) and an open neighborhood \(W_{0}\) of \(x_{0}\) such that \(\partial(W_{0},U_{i})\geq\delta\) for all \(i\). By lower semi-continuity of \(\partial\), there is an open neighborhood \(W_{1}\) of \(x_{0}\) such that \(\partial(W_{1},g\cdot W_{1})\geq r\). Suppose that there exists \(j\leq n\) such that * \(\partial(g\cdot x_{0},U_{j})>r\); * \(\partial(x_{0},g\cdot U_{j})>r\). Since both conditions are open, there exists an open neighborhood \(W_{2}\) of \(x_{0}\) such that \(\partial(W_{2},g\cdot U_{j})\geq r\), and \(\partial(g\cdot W_{2},U_{j})\geq r\). This implies that if we set \(W=W_{0}\cap W_{1}\cap W_{2}\), we can add \(W\) to \(U_{j}\) without violating (ii) or (iii), thus contradicting the maximality of \((U_{i})\). So our final task in order to obtain a contradiction is to find \(j\) satisfying the two conditions above. First, note that \[|\{i:\partial(g\cdot x_{0},U_{i})\leq r\}|\leq\ell.\] Indeed, suppose to the contrary that there exist \(y_{i_{0}}\in U_{i_{0}},\ldots,y_{i_{\ell}}\in U_{i_{\ell}}\) with \(\partial(y_{i_{\ell}},g\cdot x_{0})<2r\) for all \(s\leq\ell\). 
Then the \(y_{i_{s}}\) are \(\ell+1\) points in a ball of radius \(2r\) which are pairwise \(\delta\) apart by (ii), which contradicts the definition of \(\ell\). Similarly, \[|\{i:\partial(x_{0},g\cdot U_{i})\leq r\}|\leq k,\] because if there exist \(y_{i_{0}}\in U_{i_{0}},\ldots,y_{i_{k}}\in U_{i_{k}}\) with \(\partial(x_{0},g\cdot y_{i_{s}})<2r\) for all \(s\), then by the choice of \(k\), there exist \(s\neq t\) with \(\partial(g\cdot y_{i_{s}},g\cdot y_{i_{t}})<\delta^{\prime}\). Now the choice of \(\delta^{\prime}\) implies that \(\partial(y_{i_{s}},y_{i_{t}})<\delta\), contradicting (ii). Now by the choice of \(n\), there exists \(j\) as desired. Proof of Theorem 4.2.: It is a general fact that the stabilizer map is upper semi-continuous, so we only have to prove lower semi-continuity. So for every open subset \(O\) of \(G\), we have to prove that \[X_{G,O}:=\{x\in X:G_{x}\cap O\neq\emptyset\}\] is an open subset of \(X\). Clearly it is enough to do this for every relatively compact open subset \(O\). Let \(L\) be the subgroup of \(G\) generated by \(O\). The subgroup \(L\) is open, so the \(L\)-flow \(X\) is also MHP (by Theorem 3.1 (ii)). Moreover \(L\) is compactly generated, so in particular \(L\) is \(\sigma\)-compact. Since \(X_{G,O}=X_{L,O}\), it follows that it is enough to prove the desired conclusion under the assumption that the group is \(\sigma\)-compact. From now on, we make this assumption. We fix \(x_{0}\in X_{G,O}\). Let \(g\in O\) be such that \(g\cdot x_{0}=x_{0}\) and let \(V\ni 1_{G}\) be open such that \(Vg\subseteq O\). Now we find a neighborhood \(U_{0}\) of \(x_{0}\) that is contained in \(X_{G,Vg}\subseteq X_{G,O}\). Since \(G\) is \(\sigma\)-compact, by Proposition 4.3, there are \(r>0\) and a continuous, proper, normal pseudo-norm \(||\cdot||\) on \(G\) such that \(B_{2r}\subseteq V\). If \(\phi\colon X\to\mathbf{R}^{n}\) is a continuous function as given by Lemma 4.4, then we have \[U_{0}:=\{x\in X:||\phi(g\cdot x)-\phi(x)||_{\infty}<r/3\} \subseteq\{x\in X:\partial(g\cdot x,x)\leq r\}\] \[\subseteq\{x\in X:x\in B_{2r}g\cdot x\}\] \[\subseteq X_{G,Vg}.\] So \(U_{0}\) is an open neighborhood of \(x_{0}\) that has the desired property. ## 5. Stabilizer flows Throughout this section, let \(G\) be a locally compact group. The continuity of the stabilizer map allows us to associate to any MHP flow \(X\) a subflow of \(\operatorname{Sub}(G)\), namely, the image of the stabilizer map. As every flow has a unique universal highly proximal extension, this leads us to the following definition. **Definition 5.1**.: Let \(G\) be locally compact and let \(G\curvearrowright X\) be a \(G\)-flow. The _stabilizer flow_\(\operatorname{S}_{G}(X)\) of \(X\) is the subflow of \(\operatorname{Sub}(G)\) given by \[\operatorname{S}_{G}(X)\coloneqq\operatorname{Stab}(\hat{X}_{G})=\{G_{z}:z\in\hat{X}_{G}\}.\] We have the following general facts about the stabilizer flow. **Proposition 5.2**.: _Let \(G\curvearrowright X\) be a \(G\)-flow and let \(\pi\colon\hat{X}_{G}\to X\) be the universal highly proximal extension of \(X\). Then the following hold:_ 1. _For any compact_ \(K\subseteq G\)_, the set_ \[D_{K}\coloneqq\{z\in\hat{X}_{G}:z\not\in K\cdot z\text{ and }\pi(z)\in K\cdot\pi(z)\}\] _is nowhere dense in_ \(\hat{X}_{G}\)_._ 2. _For any dense subset_ \(X^{\prime}\subseteq X\)_, we have that_ \(\operatorname{S}_{G}(X)\subseteq\overline{\operatorname{Stab}(X^{\prime})}\)_._ 3. 
_If_ \(x\in X\) _is a point of continuity of_ \(\operatorname{Stab}\)_, then_ \(\operatorname{G}_{x}\in\operatorname{S}_{G}(X)\)_._ 4. _If the set_ \(X_{0}\subseteq X\) _of continuity points of_ \(\operatorname{Stab}\) _is dense in_ \(X\)_, then_ \(\operatorname{S}_{G}(X)=\overline{\operatorname{Stab}(X_{0})}\)_._ Proof.: (i) Let \(U\subseteq\hat{X}_{G}\) be non-empty, open. We will find a non-empty, open subset of \(U\) disjoint from \(D_{K}\). Let \(z_{0}\in U\cap D_{K}\) (if there is no such \(z_{0}\), we are done). The set \(\{(z,z^{\prime})\in\hat{X}_{G}^{2}:z\not\in K\cdot z^{\prime}\}\) is open and \(z_{0}\) belongs to it, so there exists a neighborhood \(U^{\prime}\) of \(z_{0}\), \(U^{\prime}\subseteq U\) such that \(K\cdot U^{\prime}\cap U^{\prime}=\emptyset\). By irreducibility of \(\pi\), the set \(\pi_{*}(U^{\prime})\) is non-empty and for any \(x\in\pi_{*}(U^{\prime})\), we have that \(x\not\in K\cdot x\). Thus the open set \(\pi^{-1}(\pi_{*}(U^{\prime}))\subseteq U^{\prime}\) is disjoint from \(D_{K}\). (ii) Let \(z_{0}\in\hat{X}_{G}\) and let \[\mathcal{U}=\{H\in\operatorname{Sub}(G):H\cap O_{1}\neq\emptyset,\ldots,H \cap O_{n}\neq\emptyset,H\cap K=\emptyset\},\] where \(O_{1},\ldots,O_{n}\subseteq G\) are open and \(K\subseteq G\) is compact, be a neighborhood of \(\operatorname{G}_{z_{0}}\) in \(\operatorname{Sub}(G)\). Our goal is to find \(x\in X^{\prime}\) with \(G_{x}\in\mathcal{U}\). Let \(U=\{z\in Z:G_{z}\in\mathcal{U}\}\) and note that by continuity of the stabilizer map, \(U\) is open. By (i), the open set \(U\setminus\overline{D_{K}}\) is non-empty. We claim that any \(x\in\pi_{*}(U\setminus\overline{D_{K}})\cap X^{\prime}\) works. Indeed, fix such an \(x\) and let \(z\in\hat{X}_{G}\) be such that \(\pi(z)=x\). As \(z\not\in D_{K}\), we have that \(G_{x}\cap K=\emptyset\) and as \(G_{z}\leq G_{x}\), we also have that \(G_{x}\cap O_{i}\neq\emptyset\) for all \(i\), so \(G_{x}\in\mathcal{U}\). (iii) Let \(x\) be a point of continuity of \(\operatorname{Stab}\) and let \[\mathcal{U}\coloneqq\{H\in\operatorname{Sub}(G):H\cap O_{1}\neq\emptyset, \ldots,H\cap O_{n}\neq\emptyset,H\cap K=\emptyset\}\] be a neighborhood of \(G_{x}\), where each \(O_{i}\subseteq G\) is open and \(K\subseteq G\) is compact. Let \(O^{\prime}_{i}\subseteq G\) be open, relatively compact with \(\overline{O^{\prime}_{i}}\subseteq O_{i}\) such that \(G_{x}\in\mathcal{U}^{\prime}\), where \[\mathcal{U}^{\prime}\coloneqq\{H\in\operatorname{Sub}(G):H\cap O^{\prime}_{1} \neq\emptyset,\ldots,H\cap O^{\prime}_{n}\neq\emptyset,H\cap K=\emptyset\}.\] By the continuity of \(\operatorname{Stab}\) at \(x\), there is an open \(W\ni x\) with \(\operatorname{Stab}(W)\subseteq\mathcal{U}^{\prime}\). By (i), the set \(\bigcup_{i}D_{\overline{O^{\prime}_{i}}}\) is nowhere dense, so there exists \(z\in\pi^{-1}(W)\setminus\bigcup_{i}D_{\overline{O^{\prime}_{i}}}\). Then \(z\in\overline{O^{\prime}_{i}}\cdot z\subseteq O_{i}\cdot z\) for every \(i\) and \(G_{z}\cap K=\emptyset\) (because \(\pi(z)\in W\) and \(G_{z}\leq G_{\pi(z)}\)). Thus \(G_{z}\in\mathcal{U}\). As \(\mathcal{U}\) was arbitrary and \(\operatorname{S}_{G}(X)\) is closed, this implies that \(G_{x}\in\operatorname{S}_{G}(X)\). (iv) follows from (ii) and (iii). The following is well-known and follows from [12, Theorem VII]. We include a short proof for completeness. **Lemma 5.3**.: _Let \(X\) be a compact space, \(Y\) a locally compact space, and let \(\varphi\colon X\to 2^{Y}\) be upper semi-continuous. 
Let \((U_{i})_{i\in I}\) be a basis for the topology on \(Y\) such that each \(U_{i}\) is relatively compact. For \(i\in I\), we let_ \[X_{i}=\left\{x\in X:\phi(x)\cap\overline{U_{i}}\neq\emptyset\right\}.\] _Then \(\phi\) is continuous at each point of the set \(\bigcap_{i}(X\setminus\partial X_{i})\)._ _In particular, if \(Y\) is second countable, then the set of continuity points of \(\phi\) is comeager._ Proof.: Let \(x\in\bigcap_{i}(X\setminus\partial X_{i})\), and let \((x_{a})\) be a net in \(X\) converging to \(x\) and such that \((\phi(x_{a}))\) converges to \(F\). By upper semi-continuity, we know that \(F\subseteq\phi(x)\), and we want to prove equality. Let \(i\) such that \(\phi(x)\cap\overline{U_{i}}\neq\emptyset\), i.e., \(x\in X_{i}\). Since \(x\) is in \(X\setminus\partial X_{i}\) by assumption, \(x\) must be in the interior of \(X_{i}\). Since \((x_{a})\) converges to \(x\), eventually \(x_{a}\in X_{i}\), that is, \(\phi(x_{a})\cap\overline{U_{i}}\neq\emptyset\). Since \(\overline{U_{i}}\) is compact, this implies \(F\cap\overline{U_{i}}\neq\emptyset\). So whenever \(\phi(x)\) intersects \(\overline{U_{i}}\), so does \(F\). Since \((U_{i})_{i\in I}\) is a basis for the topology on \(Y\), this shows that \(\phi(x)\subseteq F\), as desired. Note that \(X_{i}\) is always closed by upper semi-continuity, so \(X\setminus\partial X_{i}\) is a dense open subset. In case \(Y\) is second countable, \((U_{i})_{i\in I}\) can be chosen to be countable, and hence the domain of continuity of \(\phi\) is comeager. **Corollary 5.4**.: _Let \(G\) be second countable and let \(G\curvearrowright X\) be a \(G\)-flow. Then the set \(X_{0}\subseteq X\) of continuity points of \(\operatorname{Stab}\) is dense \(G_{\delta}\) in \(X\) and we have_ \[\operatorname{S}_{G}(X)=\overline{\operatorname{Stab}(X_{0})}.\] Proof.: The first claim follows from the upper semi-continuity of \(\operatorname{Stab}\) and Lemma 5.3, and the second claim follows from \((\operatorname{iv})\) of Proposition 5.2. _Remark 5.5_.: When \(G\) is not second countable, it is no longer true that there exists \(x\in X\) such that \(G_{x}\in\operatorname{S}_{G}(X)\). Indeed, consider the group \(G=\operatorname{SO}(3,\mathbf{R})\), equipped with the discrete topology, acting on the 2-dimensional sphere \(X=\mathbf{S}^{2}\). Then \(G_{x}\neq\{1_{G}\}\) for all \(x\in X\). On the other hand, every non-identity element has only two fixed points in \(X\), so the action is topologically free, which means that \(\operatorname{S}_{G}(X)=\left\{\{1_{G}\}\right\}\) (see Corollary 5.7). Here the set of continuity points of \(\operatorname{Stab}\) is empty. In the case where \(X\) is minimal, stabilizer flows have already been considered in the literature under the name of stabilizer URSs. Recall that a _uniformly recurrent subgroup (URS)_ of \(G\) is a minimal subflow of \(\operatorname{Sub}(G)\). Glasner and Weiss [12] associated to every minimal \(G\)-flow its _stabilizer URS_ as follows. Upper semi-continuity of the stabilizer map implies that \(\overline{\operatorname{Stab}(X)}\) has a unique minimal subflow (see [1, Lemma 1.1] or [12, Proposition 1.2]). Then the _stabilizer URS of \(X\)_ is simply defined to be this minimal subflow. Proposition 5.2 implies that for minimal flows, our definition and theirs coincide. **Corollary 5.6**.: _Let \(X\) be a minimal \(G\)-flow. 
Then its stabilizer URS is equal to \(\operatorname{S}_{G}(X)\)._ Proof.: Proposition 5.2 (ii) tells us that \(\operatorname{S}_{G}(X)\subseteq\overline{\operatorname{Stab}(X)}\). As \(X\) is minimal, \(\hat{X}_{G}\) is also minimal and so is its factor \(\operatorname{S}_{G}(X)\). Now the conclusion follows from the fact that \(\overline{\operatorname{Stab}(X)}\) has a unique minimal subflow. Corollary 5.4 was also known for minimal \(X\): see [12, Proposition 1.2]. Recall that a flow \(G\curvearrowright X\) is called _topologically free_ if for every compact \(K\subseteq G\) that does not contain \(1_{G}\), the closed set \(\{x\in X:x\in K\cdot x\}\) has empty interior. A point \(x\in X\) is called _free_ if the orbit map \(G\to G\cdot x\), \(g\mapsto g\cdot x\) is injective. A flow is called _free_ if all points are free. It is clear that a flow for which the free points are dense is topologically free, and a simple Baire category argument shows that the converse is also true if \(G\) is second countable. **Corollary 5.7**.: _Let \(G\curvearrowright X\) be a \(G\)-flow. Then the following are equivalent:_ * \(X\) _is topologically free;_ * \(\hat{X}_{G}\) _is free;_ * \(\mathrm{S}_{G}(X)=\big{\{}\{1_{G}\}\big{\}}\)_._ _In particular, topologically free MHP flows are free._ Proof.: The equivalence of (ii) and (iii) follows from the definition of \(\mathrm{S}_{G}(X)\). (i) \(\Rightarrow\) (ii) Let \(g\in G\), \(g\neq 1_{G}\). Let \(V\subseteq G\) be an open, relatively compact subset with \(g\in V\) and \(1_{G}\notin\overline{V}\). Then the set \(\{x:x\in V\cdot x\}\) is open by Theorem 4.2 and has empty interior by topological freeness, so it must be empty. So we conclude that \(g\cdot x\neq x\) for all \(x\). (ii) \(\Rightarrow\) (i) Suppose, towards a contradiction, that there is a compact \(K\subseteq G\) with \(1_{G}\notin K\) such that the set \(\{x\in X:x\in K\cdot x\}\) has non-empty interior \(W\). By Proposition 5.2 (i), the set \(\pi^{-1}(W)\setminus D_{K}\) is non-empty and for any \(z\) in this set, we have that \(z\in K\cdot z\), contradicting the freeness of \(\hat{X}_{G}\). From this, it is not hard to deduce a well-known theorem of Veech. **Corollary 5.8** (Veech).: _Every locally compact group admits a free flow._ Proof.: Let \(G\) be a locally compact group and let \(\mathrm{Sa}(G)\) denote its _Samuel compactification_, i.e., the spectrum of the Riesz space of right uniformly continuous bounded functions on \(G\). Then \(G\curvearrowright\mathrm{Sa}(G)\) is a \(G\)-flow and \(G\) embeds densely in \(\mathrm{Sa}(G)\) as point evaluations. Also, the flow \(\mathrm{Sa}(G)\) is MHP by [Z2, 3.2.1] (alternatively, it is not difficult to verify condition (iii) of Theorem 3.1). As the left translation action \(G\curvearrowright G\) is free, Corollary 5.7 tells us that the flow \(\mathrm{Sa}(G)\) is also free.
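As a small illustration of ours (not taken from the paper) of why some hypothesis such as MHP is needed in Theorem 4.2, consider \(G=\mathbf{Z}\) with the discrete topology acting on its one-point compactification \(X=\mathbf{Z}\cup\{\infty\}\) by translation, with \(\infty\) fixed. Then \[G_{m}=\{0\}\ \text{for all }m\in\mathbf{Z},\qquad G_{\infty}=\mathbf{Z},\] so along any sequence \(m\to\infty\) in \(X\) the stabilizers stay equal to \(\{0\}\neq G_{\infty}\): the stabilizer map is upper semi-continuous but not continuous at \(\infty\). By Theorem 4.2, this flow is therefore not MHP.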
2304.08258
Quantum Estimation of the Stokes Vector Rotation for a General Polarimetric Transformation
Classical polarimetry is a well-established discipline with diverse applications across different branches of science. The burgeoning interest in leveraging quantum resources to achieve highly sensitive measurements has spurred researchers to elucidate the behavior of polarized light within a quantum mechanical framework, thereby fostering the development of a quantum theory of polarimetry. In this work, drawing inspiration from polarimetric investigations in biological tissues, we investigate the precision limits of polarization rotation angle estimation about a known rotation axis, in a quantum polarimetric process, comprising three distinct quantum channels. The rotation angle to be estimated is induced by the retarder channel on the Stokes vector of the probe state. The diattenuator and depolarizer channels, acting on the probe state, can be thought of as effective noise processes. We explore the precision constraints inherent in quantum polarimetry by evaluating the quantum Fisher information (QFI) for probe states of significance in quantum metrology, namely NOON, Kings of Quantumness, and Coherent states. The effects of the noise channels as well as their ordering is analyzed on the estimation error of the rotation angle to characterize practical and optimal quantum probe states for quantum polarimetry. Furthermore, we propose an experimental framework tailored for NOON state quantum polarimetry, aiming to bridge theoretical insights with empirical validation.
Ali Pedram, Vira R. Besaga, Lea Gassab, Frank Setzpfandt, Özgür E. Müstecaplıoğlu
2023-04-17T13:18:08Z
http://arxiv.org/abs/2304.08258v2
# Quantum Estimation of the Stokes Vector Rotation for a General Polarimetric Transformation

###### Abstract
Classical polarimetry is a rich and well-established discipline within classical optics with many applications in different branches of science. Ever-growing interest in utilizing quantum resources to make highly sensitive measurements has prompted researchers to describe polarized light in a quantum mechanical framework and build a quantum theory of polarimetry within this framework. In this work, inspired by the polarimetric studies in biological tissues, we study the ultimate limit of rotation angle estimation with a known rotation axis in a quantum polarimetric process, which consists of three quantum channels. The rotation angle to be estimated is induced by the retarder channel on the Stokes vector of the probe state. However, the diattenuator and depolarizer channels act on the probe state, which can effectively be thought of as a noise process. Finally, the quantum Fisher information (QFI) is calculated, and the effect of these noise channels and their ordering on the estimation error of the rotation angle is studied. quantum metrology; quantum optics; polarization of light

## I Introduction
Polarization is a property of a propagating electromagnetic wave which quantifies the direction of the oscillations of the electric field. This property of electromagnetic waves is exploited in a diverse set of technological applications in materials science [1; 2; 3; 4; 5], astronomy [6; 7], medical sciences [8; 9; 10; 11; 12], and quantum information [13]. Interaction with a medium can alter the polarization state of light. A simple framework for calculating the transformation of polarized light under different optical elements is Jones matrix calculus [14]. In this framework, the polarized light is modeled using a \(2\times 1\) vector and the optical elements causing the polarized light to undergo linear transformations are modeled by \(2\times 2\) matrices. However, Jones calculus is only applicable to fully polarized states. A more general framework is the Mueller matrix calculus, which can be used to model partially polarized states and depolarizing transformations as well [14]. In Mueller calculus the polarization states are represented by Stokes vectors, which are \(4\times 1\) vectors, and optical elements are modeled using \(4\times 4\) Mueller matrices. Optical polarimetry is a field of study and a set of techniques concerning the measurement and interpretation of the polarization information of light. By studying the polarization of light before and after it interacts (through transmission, reflection, or scattering) with a medium, one can infer several optical and geometric properties of the sample. Based on the optical properties of the material under study, different polarimetry techniques might be used, e.g., transmission polarimetry, ellipsometry, etc. Mueller polarimeters measure the complete polarization state of light using the 16 elements of the Mueller matrix [15]. The non-invasive nature and high precision of polarimetry make it suitable for studying sensitive samples. Ellipsometry is widely used for many applications, including measurement of the refractive index and thickness of thin films [3]. Another application of polarimetry is the measurement of the polarization parameters of the human eye and the thickness of the retinal nerve fiber layer (RNFL), which can be used to diagnose glaucoma [16; 17; 18; 19; 12]. 
The basic principle is that the RNFL is birefringent; therefore, it can be modeled by a linear retarder that rotates the Stokes vector of the incoming probe state. Estimation of this rotation angle therefore yields an estimate of the thickness of the RNFL [17; 18; 19]. With growing interest in quantum metrology, there have been attempts to utilize quantum resources for accurate measurement of physical parameters. Estimation of rotation angles is also of great interest in quantum communication for studies on the alignment of the reference frames of the communicating parties [20; 21; 22; 23; 24; 25; 26; 27; 28]. Other than rotation sensing, there are also attempts in the literature to utilize quantum resources for specific polarimetric tasks [29; 30; 31; 32]. Goldberg et al. [33; 34; 35; 36] have studied changes in quantum polarization and have established a quantum mechanical framework for polarimetry. In this framework, the Stokes parameters are promoted to operators, and by imposing the requirement that the vector of expectation values of these operators must transform according to the relevant Mueller matrices, the corresponding quantum polarization channels are introduced and investigated. Based on the framework developed in [33] and inspired by the studies in vision and applications of metrology in biological systems [8; 9; 10; 11; 12; 16; 17; 18; 19; 37; 38], we aim to assess the feasibility of using quantum polarimetry for the estimation of the rotation angle of the Stokes vector due to a birefringent medium, modeled by a linear retarder, in the presence of diattenuation and depolarization. Studies on biological tissues have shown that, although the polar decomposition of the Mueller matrix [39] in tissue polarimetry can yield reliable polarization properties in the classical regime, this decomposition does not necessarily correspond to the underlying physical reality [9; 40; 41]. Therefore, it is crucial to study the precision limits in quantum polarimetry considering the non-commutativity of the elementary components of the Mueller matrix, which implies considering different composition orders of the quantum polarization channels. This manuscript is organized as follows. In Section II we introduce concepts in quantum polarimetry such as Stokes operators and their transformations due to polarization channels. In Section III, basic concepts in quantum metrology, i.e., the QFI and the quantum Cramér-Rao bound (QCRB), are introduced. In Section IV the results of our calculations of the QFI using N00N and coherent states as probe states are given, and finally, in Section V we present our conclusions.

## II Quantum description of polarized light and polarization transformations
In a quantum optical setting, each polarization mode can be thought of as a harmonic oscillator. One can write a general pure state of light by acting with the creation operators of the horizontal and vertical polarization modes on the vacuum state: \[\begin{split}\left|\psi\right\rangle=\sum_{m,n}c_{m,n}\left|m,n\right\rangle,\\ \left|m,n\right\rangle\equiv\hat{a}^{\dagger m}\hat{b}^{\dagger n}\left|\mathrm{vac}\right\rangle/\sqrt{m!n!}.\end{split} \tag{1}\] We take \(\hat{a}\) and \(\hat{b}\) to be the annihilation operators of the horizontal and vertical polarization modes, respectively. These operators satisfy the bosonic commutation relations. 
Using the field operators of the horizontal and vertical polarization modes, we can define the Stokes operators as \[\begin{split}\hat{S}_{0}=\left(\hat{a}^{\dagger}\hat{a}+\hat{b}^{\dagger}\hat{b}\right)/2&\hat{S}_{x}=\left(\hat{a}^{\dagger}\hat{b}+\hat{b}^{\dagger}\hat{a}\right)/2,\\ \hat{S}_{y}=-\mathrm{i}\left(\hat{a}^{\dagger}\hat{b}-\hat{b}^{\dagger}\hat{a}\right)/2&\hat{S}_{z}=\left(\hat{a}^{\dagger}\hat{a}-\hat{b}^{\dagger}\hat{b}\right)/2.\end{split} \tag{2}\] The Stokes operators are quantum generalizations of the Stokes parameters, which are promoted to operator status. These operators obey the following \(\mathfrak{su}(2)\) algebraic relations \[\begin{split}\left[\hat{S}_{i},\hat{S}_{j}\right]=\mathrm{i}\sum_{k=1}^{3}\epsilon_{ijk}\hat{S}_{k},\\ \hat{S}_{x}^{2}+\hat{S}_{y}^{2}+\hat{S}_{z}^{2}=\hat{S}_{0}\left(\hat{S}_{0}+1\right).\end{split} \tag{3}\] Similar to the classical case, one can use the Stokes operators to define a semiclassical degree of polarization (DOP) [34]: \[\mathbb{P}_{s}=\frac{|\langle\hat{\mathbf{S}}\rangle|}{\langle\hat{S}_{0}\rangle}=\frac{\sqrt{\langle\hat{S}_{x}\rangle^{2}+\langle\hat{S}_{y}\rangle^{2}+\langle\hat{S}_{z}\rangle^{2}}}{\langle\hat{S}_{0}\rangle}\,. \tag{4}\] Here, \(\langle\hat{S}_{i}\rangle=\mathrm{Tr}[\hat{\rho}\hat{S}_{i}]\) denotes the expected value of the operator \(\hat{S}_{i}\) and \(\hat{\mathbf{S}}=(\hat{S}_{x},\hat{S}_{y},\hat{S}_{z})\) is the vector of Stokes operators. In quantum mechanics, a general transformation of an operator is described by a completely positive trace-preserving (CPTP) dynamical map, also dubbed a quantum channel. Kraus' theorem states that any quantum channel can be described by a set of Kraus operators \(\{K_{l}\}\) such that \[\hat{S}_{\mu}\rightarrow\sum_{l}\hat{K}_{l}^{\dagger}\hat{S}_{\mu}\hat{K}_{l}. \tag{5}\] In order for the expectation values of the Stokes operators to transform according to the classical Mueller description, the following condition must be satisfied: \[\langle S_{\mu}\rangle\rightarrow\sum_{\nu=0}^{3}M_{\mu\nu}\langle S_{\nu}\rangle. \tag{6}\] For both Eq. (5) and Eq. (6) to hold, we can impose the condition that the Stokes operators should transform in the same way as the Stokes parameters [33], \[\sum_{l}\hat{K}_{l}^{\dagger}\hat{S}_{\mu}\hat{K}_{l}=\sum_{\nu}M_{\mu\nu}\hat{S}_{\nu}. \tag{7}\] It has been shown that if the Mueller matrix is not singular, one can decompose it into a product of three elementary matrix factors with well-defined polarimetric properties: a retarder, a diattenuator and a depolarizer [39]. In Eq. (8) the standard Lu-Chipman decomposition is shown: \[M=M_{d}M_{R}M_{D}. \tag{8}\] Here, \(M_{D}\), \(M_{R}\) and \(M_{d}\) are the elementary Mueller matrices for the diattenuator, retarder and depolarizer components, respectively. Firstly, it must be noted that this decomposition is made solely for interpreting the polarimetric data and extracting physical parameters more conveniently, by capturing the effective polarization transformation, and does not necessarily describe the actual underlying physical process. This is especially the case for biological tissues, for which the depolarization, diattenuation and retardation effects are likely to occur simultaneously [9; 40; 41]. Secondly, since the matrix product is in general non-commutative, the decomposition order given in Eq. (8) is not the only decomposition that one can make out of the original Mueller matrix. 
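Before continuing with the decomposition, the following is a minimal numerical sketch (ours, not the authors' code) of the Stokes operators of Eq. (2) and the semiclassical DOP of Eq. (4), evaluated for a two-photon N00N probe in a truncated two-mode Fock space; the cutoff and the probe state are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (ours, not from the paper): Stokes operators of Eq. (2) and the
# semiclassical degree of polarization of Eq. (4) for an N = 2 N00N state.

dim = 4  # per-mode Fock cutoff (illustrative assumption)

def annihilation(d):
    """Single-mode annihilation operator truncated to d Fock levels."""
    return np.diag(np.sqrt(np.arange(1, d)), k=1)

I = np.eye(dim)
a = np.kron(annihilation(dim), I)   # horizontal mode
b = np.kron(I, annihilation(dim))   # vertical mode
ad, bd = a.conj().T, b.conj().T

S0 = (ad @ a + bd @ b) / 2
Sx = (ad @ b + bd @ a) / 2
Sy = -1j * (ad @ b - bd @ a) / 2
Sz = (ad @ a - bd @ b) / 2

def fock2(m, n):
    """Two-mode Fock state |m, n> as a column vector."""
    v = np.zeros(dim * dim, dtype=complex)
    v[m * dim + n] = 1.0
    return v

psi = (fock2(2, 0) + fock2(0, 2)) / np.sqrt(2)   # N = 2 N00N state
rho = np.outer(psi, psi.conj())

def expval(op):
    return np.real(np.trace(rho @ op))

dop = np.sqrt(expval(Sx)**2 + expval(Sy)**2 + expval(Sz)**2) / expval(S0)
# prints 0.000 -- a N00N state has a vanishing first-moment Stokes vector
print(f"semiclassical DOP of the N=2 N00N state: {dop:.3f}")
```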
Based on the optical characteristics of the sample, there exists a multitude of ways to break down the total Mueller matrix into a combination of the elementary ones, of which Eq. (8) is only a special case [9; 39; 40; 41; 42; 43; 44; 45; 46; 47]. Based on Eq. (7), we can describe the quantum mechanical polarization channels as [33] \[\varepsilon(\hat{\rho})=\varepsilon_{d}\circ\varepsilon_{R}\circ\varepsilon_{D}(\hat{\rho}), \tag{9}\] in which \(\varepsilon_{D}\), \(\varepsilon_{R}\) and \(\varepsilon_{d}\) are the diattenuator, retarder and depolarizer channels, respectively. A schematic representation of this channel is given in Fig. 1. The retarder channel rotates the Stokes vector and acts as a unitary rotation on the density matrix [27; 33; 35; 36]: \[\varepsilon_{R}(\hat{\rho})=\hat{R}(\theta,\mathbf{n})\hat{\rho}\hat{R}^{\dagger}(\theta,\mathbf{n}). \tag{10}\] Here, the operator \(\hat{R}\) is given by \[\hat{R}(\theta,\mathbf{n})=\exp(i\theta\hat{\mathbf{S}}\cdot\mathbf{n}), \tag{11}\] in which \(\mathbf{n}\) is the direction of rotation and \(\theta\) is the angle of rotation. The diattenuator channel can be modeled by a sequential application of rotation operators in an enlarged Hilbert space containing two ancillary modes, followed by tracing out the ancillary modes. It is given by [33; 35; 36] \[\varepsilon_{D}(\hat{\rho})=\mathrm{Tr}_{v_{1},v_{2}}[\hat{U}_{D}(\hat{\rho}\otimes|\mathrm{vac}\rangle_{v_{1},v_{2}}\langle\mathrm{vac}|)\hat{U}_{D}^{\dagger}], \tag{12}\] in which \(v_{1}\) and \(v_{2}\) are the ancillary modes and the operator \(\hat{U}_{D}\) is \[\begin{split}\hat{U}_{D}=\hat{R}_{a,b}^{\dagger}(0,\beta,\gamma)\hat{R}_{b,v_{2}}(0,-2\cos^{-1}(\sqrt{r}),0)\\ \hat{R}_{a,v_{1}}(0,-2\cos^{-1}(\sqrt{q}),0)\hat{R}_{a,b}(0,\beta,\gamma).\end{split} \tag{13}\] Here, \(\hat{R}_{a,b}\) denotes a rotation operator between modes \(a\) and \(b\) parametrized by the Euler angles \((0,\beta,\gamma)\). \(\hat{R}_{a,v_{1}}\) and \(\hat{R}_{b,v_{2}}\) are rotations between modes \(a\) and \(v_{1}\) and modes \(b\) and \(v_{2}\), respectively, with their respective attenuation parameters \(q\) and \(r\). Finally, the depolarizer is given by [33] \[\varepsilon_{d}(\hat{\rho})=p\hat{\rho}+(1-p)\sum_{N}p_{N}\frac{\hat{\mathds{I}}_{N}}{N+1}, \tag{14}\] in which \(p\) is the depolarization parameter and \(p_{N}\) is the weight of the density matrix in each photon-number subspace. For our purposes, we can express depolarization as a convex sum of rotation operators [35; 36]. For the sake of simplicity, we take two rotation operators: \[\varepsilon_{d}(\hat{\rho})=\frac{\hat{R}_{1}\hat{\rho}\hat{R}_{1}^{\dagger}+\hat{R}_{2}\hat{\rho}\hat{R}_{2}^{\dagger}}{2}. \tag{15}\]

## III Quantum parameter estimation and QCRB
In this section we introduce some of the fundamental results for quantum single-parameter estimation. Assume that, through a process, the parameter \(\theta\) gets encoded on a quantum state. For a parametrized state, the lower bound for the variance of any unbiased estimator \(\hat{\theta}\) of the parameter is given by the QCRB [48; 49; 50; 51; 52; 53]: \[(\Delta\hat{\theta})^{2}\geq\frac{1}{\nu\mathcal{F}_{Q}(\theta)}. \tag{16}\] Here, \(\mathcal{F}_{Q}\) is the QFI and \(\nu\) is the number of measurement repetitions. The QFI is a generalization of the classical Fisher information (CFI), which is defined as the expectation value of the squared derivative of the logarithm of the probability distribution. 
For a pure state, the QFI is defined as \[\mathcal{F}_{Q}(\theta)=4[\langle\partial_{\theta}\psi|\partial_{\theta}\psi\rangle-|\langle\partial_{\theta}\psi|\psi\rangle|^{2}]. \tag{17}\] For a mixed state, one can define the QFI using the symmetric logarithmic derivative (SLD) operator as \[\mathcal{F}_{Q}(\theta)=\mathrm{Tr}[\hat{\rho}\hat{L}_{\theta}^{2}]. \tag{18}\] The SLD is implicitly defined by the following equation: \[\partial_{\theta}\hat{\rho}=\frac{\hat{L}_{\theta}\hat{\rho}+\hat{\rho}\hat{L}_{\theta}}{2}. \tag{19}\] By expressing \(\hat{L}_{\theta}\) in the eigenbasis of \(\hat{\rho}\), one can write the QFI as \[\mathcal{F}_{Q}(\theta)=2\sum_{k,l}\frac{|\langle k|\partial_{\theta}\hat{\rho}|l\rangle|^{2}}{\lambda_{k}+\lambda_{l}}. \tag{20}\] Here, \(|k\rangle\), \(|l\rangle\) and \(\lambda_{k}\), \(\lambda_{l}\) are the eigenvectors and eigenvalues of \(\hat{\rho}\), and the sum runs over pairs with \(\lambda_{k}+\lambda_{l}\neq 0\).

Figure 1: A schematic representation of the general polarization channel. In this figure the order of the action of the diattenuator, retarder and depolarizer channels follows the standard Lu-Chipman form.

## IV Results
In this section we numerically calculate the QFI in order to determine the rotation angle measurement precision. In all of our calculations, the rotation angle to be estimated is \(\theta=0.01\) with the rotation direction \((\Theta,\Phi)=(0,0)\) in polar coordinates. The probe states considered for this task are the coherent and N00N states, which are known to saturate the standard quantum limit (SQL) and the Heisenberg limit (HL), respectively, for this task for small rotation angles [27]. As a reference, our numerical results for the QFI in the case of no noise are presented in Fig. 2. It is instructive to assess the performance of these states by considering the effect of each of the noise channels on the estimation precision of the rotation angle of the Stokes vector. Therefore, in Section IV.1 we study the effect of the depolarization channel on the QFI for the rotation angle, and in Section IV.2 the joint effect of depolarization and diattenuation is considered. In both of these cases, the ordering of the implementation of the channels is also considered. Using the terminology common in classical polarimetry, we will call \(\varepsilon_{for}(\hat{\rho})=\varepsilon_{d}\circ\varepsilon_{R}\circ\varepsilon_{D}(\hat{\rho})\) the "forward" decomposition and \(\varepsilon_{rev}(\hat{\rho})=\varepsilon_{D}\circ\varepsilon_{R}\circ\varepsilon_{d}(\hat{\rho})\) the "reverse" (backward) decomposition.

### Noisy Rotation Sensing with Depolarization
In this section we calculate the QFI for the rotation angle in both the forward and reverse processes without considering the effect of diattenuation, focusing only on depolarization. The results of our calculations are shown in Fig. 3. The rotation axes for \(\hat{R}_{1}\) and \(\hat{R}_{2}\) are \((\Theta,\Phi)=(\pi/2,\pi/2)\) and \((\Theta,\Phi)=(0,0)\), respectively, and the rotation angle for both of them is set to \(0.01\). It is evident that for the N00N state, in both the forward and reverse processes, the overall performance has decreased, and the scaling with the average photon number is not strictly monotonic in the case of the reverse process. For the coherent state, in the forward process there is no appreciable change in the QFI compared with the noiseless case. However, in the reverse process, we witness a noise-assisted increase in the QFI. The results show that the N00N state is advantageous for the task of rotation sensing, compared with the coherent state. 
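As a numerical cross-check of the noiseless reference behaviour discussed above (cf. Fig. 2), the following is a minimal sketch of ours, not the authors' code. It assumes a pure probe state and a rotation about the z axis generated by \(\hat{S}_{z}\); for the unitary family \(\exp(i\theta\hat{S}_{z})\), the pure-state QFI of Eq. (17) reduces to \(4\,\mathrm{Var}(\hat{S}_{z})\), which reproduces the \(N^{2}\) (HL) scaling for N00N probes and the \(\bar{n}\) (SQL) scaling for coherent probes.

```python
import numpy as np
from math import factorial

# Minimal sketch (ours, not the authors' code): noiseless QFI for a Stokes-vector
# rotation about the z axis. For a pure probe under exp(i*theta*Sz), the QFI of
# Eq. (17) equals 4*Var(Sz): N^2 for N00N probes, nbar for coherent probes.

dim = 30  # per-mode Fock cutoff (illustrative; must exceed the photon numbers used)

n_op = np.diag(np.arange(dim, dtype=float))
I = np.eye(dim)
n_a = np.kron(n_op, I)      # photon number, horizontal mode
n_b = np.kron(I, n_op)      # photon number, vertical mode
Sz = (n_a - n_b) / 2        # rotation generator, cf. Eq. (2)

def qfi_pure(psi, H):
    """4 * (<H^2> - <H>^2) for a pure probe state under exp(i*theta*H)."""
    mean = np.vdot(psi, H @ psi).real
    mean_sq = np.vdot(psi, H @ (H @ psi)).real
    return 4.0 * (mean_sq - mean**2)

def fock2(m, n):
    v = np.zeros(dim * dim, dtype=complex)
    v[m * dim + n] = 1.0
    return v

def noon(N):
    return (fock2(N, 0) + fock2(0, N)) / np.sqrt(2)

def coherent_h(alpha):
    """|alpha>_a |0>_b: horizontally polarized coherent light (truncated)."""
    c = np.array([np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / np.sqrt(factorial(n))
                  for n in range(dim)], dtype=complex)
    v = np.zeros(dim * dim, dtype=complex)
    v[np.arange(dim) * dim] = c
    return v / np.linalg.norm(v)  # renormalize against truncation error

for N in (1, 2, 4, 8):
    print(f"N00N, N={N}: QFI = {qfi_pure(noon(N), Sz):6.2f}  (N^2 = {N**2})")
for nbar in (1, 2, 4, 8):
    psi = coherent_h(np.sqrt(nbar))
    print(f"coherent, nbar={nbar}: QFI = {qfi_pure(psi, Sz):6.2f}  (nbar = {nbar})")
```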
### Noisy Rotation Sensing with Depolarization and Diattenuation
Finally, we calculate the QFI for the rotation angle vs the average photon number of the probe states, considering both the depolarization and diattenuation channels, in the forward and reverse decompositions. The results are shown in Fig. 4. The rotation parameters are identical to the ones chosen in the previous subsection, and the diattenuation parameters are \(q=0.9\) and \(r=1\). The results show that, even though the scaling is diminished, the N00N state still results in larger values of the QFI.

Figure 2: QFI for the rotation angle vs average photon number calculated for the coherent state (solid line with triangular data points in red) and the N00N state (dash-dotted line with circular data points in blue). The coherent state and the N00N state saturate the SQL and the HL, respectively.

Figure 3: QFI for the rotation angle in the presence of depolarization noise vs average photon number. Calculations are for the coherent state in the forward (solid line with triangular data points in red) and reverse (dashed line with square data points in magenta) processes and for the N00N state in the forward (dash-dotted line with circular data points in blue) and reverse (dotted line with triangular data points in cyan) processes.

## V Conclusion
In conclusion, we studied the precision limits for single-parameter estimation of the rotation angle of the Stokes vector in a polarimetric setup, within the framework of recently developed quantum polarimetry. We have incorporated the effects of the depolarization and diattenuation channels in our numerical calculations and considered different orders of implementation of these channels. The results show that the QFI generally deteriorates upon implementation of the polarimetric depolarization channel. However, this statement is not valid in every case, as we have witnessed an increase in the QFI using the coherent state as a probe in the reverse decomposition of our channel when diattenuation was disregarded. As expected, implementing diattenuation decreases the QFI in all cases. However, our results demonstrate that the advantage of the N00N state over the coherent state still persists upon implementation of diattenuation and depolarization in both the forward and reverse processes.

###### Acknowledgements.
We gratefully acknowledge financial support from the Scientific and Technological Research Council of Turkiye (TUBITAK), grant No. 120F200.
2310.03292
SoK: Access Control Policy Generation from High-level Natural Language Requirements
Administrator-centered access control failures can cause data breaches, putting organizations at risk of financial loss and reputation damage. Existing graphical policy configuration tools and automated policy generation frameworks attempt to help administrators configure and generate access control policies by avoiding such failures. However, graphical policy configuration tools are prone to human errors, making them unusable. On the other hand, automated policy generation frameworks are prone to erroneous predictions, making them unreliable. Therefore, to find ways to improve their usability and reliability, we conducted a Systematic Literature Review analyzing 49 publications, to identify those tools, frameworks, and their limitations. Identifying those limitations will help develop effective access control policy generation solutions while avoiding access control failures.
Sakuna Harinda Jayasundara, Nalin Asanka Gamagedara Arachchilage, Giovanni Russello
2023-10-05T03:45:20Z
http://arxiv.org/abs/2310.03292v1
# SoK: Access Control Policy Generation from High-level Natural Language Requirements ###### Abstract. Administrator-centered access control failures can cause data breaches, putting organizations at risk of financial loss and reputation damage. Existing graphical policy configuration tools and automated policy generation frameworks attempt to help administrators configure and generate access control policies by avoiding such failures. However, graphical policy configuration tools are prone to human errors, making them unusable. On the other hand, automated policy generation frameworks are prone to erroneous predictions, making them unreliable. Therefore, to find ways to improve their usability and reliability, we conducted a Systematic Literature Review analyzing 49 publications, to identify those tools, frameworks, and their limitations. Identifying those limitations will help develop effective access control policy generation solutions while avoiding access control failures. access control, policy engineering, system administrator, user interfaces, frameworks, usability, reliability
its employees victims of a data leak. This incident shows how severe the mistakes of an administrator can be when it comes to access control. 
Therefore, to avoid such administrator-centered access control failures, previous literature proposed graphical policy configuration (i.e., policy authoring and visualization) tools that guide administrators to write and visualize policies manually from high-level access control requirements without worrying about complex access control languages and their syntax [9; 16; 17; 34; 35; 36; 44; 47; 48; 66; 67; 68; 80; 81; 84; 100]. However, manual policy authoring is a repetitive, laborious, and error-prone task [49; 57]. The administrator has to repetitively write policies one by one so that the appropriate access is provided to the correct resources [44; 66; 67]. For example, when administrators have several natural language access requirements to apply to the authorization system, they have to go through the requirements one by one manually: first, to identify the underlying rules; second, to identify the policy components (e.g., users, actions, and resources) of the rules; and finally, to use those components to build the policy using the graphical policy authoring tool. This manual process becomes even harder when the configuration interface has usability issues that induce human errors [68]. Therefore, many administrators consider manual access control policy configuration an overhead that leaves them burned out and stressed, which leads to accidental human errors [11; 62]. As a solution, researchers then developed fully automated policy generation frameworks to remove the system administrator almost entirely from policy generation [1; 2; 4; 5; 6; 7; 13; 14; 47; 30; 33; 41; 49; 50; 51; 52; 53; 64; 69; 72; 73; 75; 76; 77; 78; 83; 85; 93; 94; 96; 97; 99]. Those frameworks translate the natural language access control policies (NLACPs) in high-level requirement specification documents into machine-executable policies automatically, using natural language processing (NLP) and machine learning (ML) techniques [49]. Therefore, the stress and fatigue due to the policy engineering overhead will be alleviated. However, the existing automated solutions are not reliable enough to generate access control policies without being verified by a human expert [22; 37], as the ML/NLP techniques used to develop those frameworks do not always produce accurate results [37] and are often prone to a significant number of false positives and false negatives [22]. In summary, even though graphical policy authoring and visualization tools attempt to guide the administrator in writing and visualizing policies even without knowledge of access control languages, their own limitations induce human errors, making them **unusable** for accurate policy authoring and visualization. On the other hand, NLP-based automated policy generation frameworks are **not reliable enough** to generate machine-executable access control policies accurately without human supervision. Therefore, to help the administrator avoid access control failures due to such challenges, it is crucial to identify ways to improve the usability and reliability (i.e., address the usability-security trade-off) of those tools and frameworks. To do that, we first have to identify the existing policy configuration (i.e., policy authoring and visualization) and generation tools and frameworks and their limitations. With that in mind, we conduct this Systematic Literature Review (SLR) to answer the following key research questions:

**RQ1**: What are the tools and frameworks proposed to generate and configure access control policies? 
**RQ2**: What are the limitations of the existing tools and frameworks developed to generate and configure access control policies? The rest of the article is organized as follows. In Section 2, we briefly report work related to this SLR and highlight the research gap and the contribution to knowledge. Then, in Section 3, we discuss the methodology we followed to plan and conduct the SLR, followed by the results in Section 4. After reporting the results, we discuss them and provide guidelines to further improve the identified tools and frameworks in Section 5. Finally, we discuss some of the limitations of this SLR in Section 6, followed by conclusions and future works in Section 7.

## 2. Related Work

Previous research initially attempted to solve access control failures by guiding the system administrator to write policies via graphical user interfaces (GUIs) [66; 48]. Consider a template-based policy authoring interface as shown in Fig. 1. It provides pre-defined templates such as {Subject} can {Action} {Target} to write policies by choosing suitable policy components for the placeholders denoted within curly braces [34; 35]. This interface points out what the necessary policy components are (i.e., subject, action, and target) and in what way those components should be organized in the policy [35]. Therefore, by following the provided template, administrators can avoid incorrect access control policies (i.e., policies containing wrong policy components in wrong placeholders) and incomplete access control policies (i.e., policies without necessary policy components). However, the usability issues of the existing tools make policy authoring and visualization difficult for administrators [33]. For example, sometimes their intended policies cannot be easily written using the provided policy template. Assume that the administrator has to write a policy, _"Bob is allowed to access the computer "A" if the time is between 9 a.m. and 5 p.m."_, using the mentioned template-based authoring interface. If the interface does not support conditions (_"if the time is between 9 a.m. and 5 p.m."_), the administrator might neglect the condition entirely and generate a policy that allows Bob to access the computer at all times, without any restrictions. These situations may result in access control failures leading to data breaches [68; 16; 44]. To avoid such situations, it is imperative to understand where these limitations lie in graphical policy authoring and visualization tools and provide solutions. With that motivation, previous literature that proposed such tools evaluated their own tools individually and pointed out their unique usability issues via user studies [66; 67; 35; 68]. Nevertheless, it is important to have a holistic idea about the common problems of those graphical policy authoring and visualization interfaces as a whole to develop more effective and usable policy authoring tools. Therefore, we conducted this SLR by considering all such interfaces we identified in the extracted literature to highlight their limitations. However, even if graphical policy authoring interfaces guide the administrator to write correct and complete policies, those tools still fail to provide a complete solution to access control failures, because failures can still occur due to human errors, as policy configuration using those tools is a manual and repetitive process [49; 11; 37].
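To make the template-based authoring model and its blind spot for unsupported components concrete, here is a toy sketch of a {Subject} can {Action} {Target} template; it is our own illustration (not taken from any of the reviewed tools), and the requirement fields are hypothetical:

```python
from string import Template

# A toy version of a {Subject} can {Action} {Target} authoring template.
# It has no slot for conditions, so the time restriction in the requirement
# below is silently lost -- the failure mode discussed above.
policy_template = Template("$subject can $action $target")

requirement = {
    "subject": "Bob",
    "action": "access",
    "target": 'computer "A"',
    "condition": "the time is between 9 a.m. and 5 p.m.",  # unsupported by the template
}

policy = policy_template.substitute(
    subject=requirement["subject"],
    action=requirement["action"],
    target=requirement["target"],
)
print(policy)  # -> Bob can access computer "A"   (condition dropped)
```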
As a solution, previous research also suggested removing the human factor (i.e., the administrator) entirely from policy configuration by utilizing automated policy generation frameworks consisting of ML/NLP techniques [93; 73]. However, those ML/NLP techniques employed in automated policy generation frameworks are not reliable enough to generate machine-executable policies without human supervision [37]. For example, ML models are often prone to false positives and false negatives. Therefore, if a generated policy with falsely identified policy components is applied to the access control system without being verified by an administrator, it causes security holes in the system. Those holes can open up a back door to hackers, resulting in data breaches.

Figure 1. A template-based policy authoring interface (left) and its template designer (right) that defines templates [34; 35].

These kinds of limitations were also pointed out by other studies [22]. As revealed in [22], neural network-based policy analysis techniques sometimes fail to identify privacy policies (i.e., low recall). At the same time, the traditional ML-based approaches tend to falsely identify privacy policies (i.e., low precision) [22]. These findings support our claim that even if the automated solutions alleviate the administrator's overhead, they still are not reliable enough to operate without human involvement [22; 30; 37; 93]. Despite these challenges in both the graphical policy authoring and visualization tools and the automated policy generation frameworks, all the existing related surveys/systematic literature reviews have only focused on improving the usability of graphical access control policy configuration tools [24; 59]. However, improving graphical policy configuration tools [9; 16; 34; 35; 36; 44; 47; 48; 66; 67; 68; 80; 81; 84; 100] alone will not mitigate access control failures, as their manual and repetitive nature induces accidental human errors [37; 49; 50; 57]. At the same time, improving fully automated policy generation frameworks [1; 2; 4; 5; 6; 7; 13; 14; 41; 49; 50; 51; 52; 53; 64; 69; 72; 73; 75; 76; 77; 78; 79; 83; 85; 93; 94; 96; 97; 99] alone will also not effectively reduce access control failures, as they are not accurate enough to operate without human supervision [37]. Therefore, while we agree that both of those approaches should be improved by addressing their existing limitations, we further argue that the correct balance between manual and automated policy generation techniques will help develop more usable as well as reliable policy generation tools. To do that, first, the limitations of graphical policy configuration tools (i.e., the manual approach) and the limitations of automated policy generation frameworks (i.e., the automated approach) should be identified. Nevertheless, to date, there is no publication that primarily analyzes the literature on both of the above approaches from the administrator's perspective and discusses their limitations, as shown in Table 1. For example, according to Table 1, even though Delaet et al. have focused on the administrator's perspective, they only briefly mentioned access control policy configuration tools in their survey [24]. Furthermore, even though Paci et al. have analyzed graphical access control policy configuration as a secondary focus in their SLR, they analyzed its usability from the end-user's perspective [59].
Therefore, to address this gap, we conducted this SLR by focusing on both the graphical policy authoring and visualization tools and the NLP-based policy generation frameworks from the administrator's perspective, to point out their limitations. Addressing those limitations will help improve those tools and frameworks further and develop more effective policy generation frameworks that leverage the automation capabilities of reliable NLP techniques as well as the administrator's expertise via a usable interface.

\begin{table} \begin{tabular}{l c c c} \hline \hline & **Ours** & **Delaet et al.**[24] & **Paci et al.**[59] \\ \hline \hline **Considered aspect** & & & \\ \hline Graphical access control policy configuration & & & \\ Automated access control policy generation & & & \\ \hline **Perspective** & & & \\ \hline System administrator & & & \\ End user & & & \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of our SLR to other related survey/SLR articles. Glyphs indicate whether the respective aspect is a primary focus, a secondary focus, or only briefly mentioned in the survey.

## 3. Planning and conducting the SLR

To systematically identify the previous attempts in developing tools/frameworks to configure and generate access control policies and their limitations, we conducted a Systematic Literature Review (SLR) via a scientific and reproducible approach according to two main stages: planning the review and conducting the review. In RQ1, we aim to identify the existing tools and frameworks developed to help administrators generate and configure access control policies from requirements specified in natural language [49]. Then, in RQ2, we aim to ascertain the limitations of those tools and frameworks in order to develop effective frameworks to generate access control policies in the future.

#### 3.1.3. Development of the search strings

We developed a search string that is likely to retrieve the literature relevant to the area of interest. We used two techniques to decide which keywords should be included in the search string. 1. Extract keywords from the literature used for an initial assessment in the area of access control/privacy policy generation. 2. Decide keywords based on the population in the defined scope as underlined in Table 2. Based on the above two techniques, we decided that our main keywords would be "administrator", "access control", "privacy policy", "natural language", "generation", and "configuration". Even though we mainly focused on the access control domain, we considered "privacy policy" as one of the main keywords because, in our initial literature assessment, we found that potentially relevant publications can also be found in the privacy policy domain. Apart from the above main keywords, we added several other keywords, such as "tool" and "interface", since part of our objective is to find tools or interfaces that are designed to help administrators generate or configure access control policies, even without knowing about complex access control languages or their syntax. We used the wild card notation (*) to include the different forms of the same word in the search process. For example, instead of using "administrator" we used "admin*" to represent different forms of the word, such as "administrators", "admin", "administration", and "admins". According to the aforementioned criteria, the developed search string is as follows. admin* AND ("access control" OR "privacy policy" OR "privacy policies") AND ("natural language" OR ("natural language" AND "generat*") OR ("configur*" AND ("interface" OR "tool"))) #### 3.1.4.
Selection of data sources

Publications were extracted from two main sources: (1) scientific digital libraries and (2) selected top-tier conferences and journals. As scientific digital libraries, we mainly considered the ACM Digital Library1, IEEE Xplore Digital Library2, Springer Link3, and Elsevier Science Direct4. Google Scholar5 was used only to extract publications in Phase 3 of Section 3.2.1. We found that the search string had to be defined differently for each search engine. For instance, ACM provides means to enter keywords in the search string separately and to filter the literature based on the venue, whereas digital libraries such as Elsevier Science Direct only allow the user to enter the complete search string in the search box. Furthermore, as we later found out, the Elsevier Science Direct digital library does not accept search strings that contain wild card notation or more than eight boolean operators. Therefore, on that occasion we used "administrator" AND "access control" AND ("natural language" OR ("natural language" AND "generation") OR ("configuration" AND "tool")) as the search string to search for publications. To search for exact matches of the search string keywords, quotation marks (") were used (e.g., "access control"). In addition, to ensure the keywords are searched together, we combined the keywords with boolean operators such as AND. Even though we only used the AND and OR operators, the other basic operator (NOT) is also available on the above platforms.

Footnote 1: [https://dl.acm.org](https://dl.acm.org)

Footnote 2: [https://ieeexplore.ieee.org/Xplore/home.jsp](https://ieeexplore.ieee.org/Xplore/home.jsp)

Footnote 3: [https://link.springer.com](https://link.springer.com)

Footnote 4: [https://www.sciencedirect.com](https://www.sciencedirect.com)

Footnote 5: [https://scholar.google.com](https://scholar.google.com)

Apart from the literature we extracted from the scientific digital libraries, we searched through selected top-tier cybersecurity, human-computer interaction (HCI), and natural language processing conferences and journals relevant to the topics in this SLR. Even though this SLR is not directly related to Human-Computer Interaction (HCI), we considered such conferences because the policy generation component involves natural language processing techniques to translate human intentions into machine-executable policies. We selected nine conferences and journals, namely, IEEE Symposium on Security and Privacy (IEEE S&P), USENIX Security, ACM Conference on Computer and Communications Security (CCS), ACM Transactions on Privacy and Security (ACM TOPS), Transactions of the Association for Computational Linguistics (TACL), Network and Distributed System Security Symposium (NDSS), ACM Conference on Human Factors in Computing Systems (CHI), Symposium on Usable Privacy and Security (SOUPS), and Conference on Empirical Methods in Natural Language Processing (EMNLP). The breakdown of the selected venues is shown in Table 3.

#### 3.1.5. Definition of study selection criteria

The study selection criteria help to determine which studies should be included in and excluded from the SLR [28; 38]. Therefore, we defined the inclusion (IN) and exclusion (EX) criteria shown in Table 4 to be relevant according to the scope in Table 2.

### Conducting the review

After the planning stage, the SLR was conducted.
As suggested by [38], we performed three main activities, namely _research identification and study selection, data extraction, and data analysis_.

#### 3.2.1. Research identification and Study Selection

This activity was performed between November 2022 and August 2023 under three main phases, according to the PRISMA framework [61], as shown in Fig. 2.

* **Phase 1: Digital Library Search -** We searched each library mentioned in Section 3.1.4 by applying the search strings developed under Section 3.1.3.
* **Phase 2: Conference and Journal Search -** In order to avoid publication bias [38], we scanned through the selected journals and conferences mentioned in Section 3.1.4 by applying the search strings developed under Section 3.1.3.
* **Phase 3: Backward Snowballing Search -** To ensure that relevant publications were not overlooked, we searched through the references and citations of the publications retrieved in Phase 1 and Phase 2 [91].

In Phase 1 (search across the digital libraries), the developed search string returned a total of 3071 publications from 2013 to 2023. Even though we limited our search to 2013-2023, this does not mean we did not consider relevant publications published before 2013; Phase 3 allowed us to retrieve such publications using the backward snowballing technique. We screened the returned publications in two steps. First, after reading the title and abstract of the identified publications, we added 178 publications that matched the inclusion and exclusion criteria to the Zotero Reference Management Software6, removing 2893 articles from consideration at this first stage.

\begin{table} \begin{tabular}{c c c c} \hline \hline **Type** & **Cybersecurity** & **HCI** & **NLP** \\ \hline Conferences & IEEE S\&P, CCS, NDSS, SOUPS & CHI & EMNLP \\ Journals & USENIX, ACM TOPS & - & TACL \\ \hline \hline \end{tabular} \end{table} Table 3. Selected venues.

\begin{table} \begin{tabular}{l l} \hline \hline **ID** & **Criteria** \\ \hline IN1 & The publication presents either an automated access control policy generation framework from natural language or a tool to write and/or visualize access control policies. \\ IN2 & The publication should be related to access control or privacy/security policy authoring and visualization done by the system administrator. \\ IN3 & The publication has been peer-reviewed. \\ IN4 & The publication was written in English. \\ \hline EX1 & The publication is a secondary study. \\ EX2 & The publication presents a concept that has not been implemented and tested yet. \\ EX3 & The publication only presents an NLP technique for information extraction without applying it to the access control or privacy policy domain. \\ EX4 & The publication mainly focuses on an access control model or a language but not on a policy. \\ EX5 & The publication focuses on avoiding access control failures either from the end user’s perspective or the software developer’s perspective. \\ EX6 & The publication presents a bottom-up policy mining/text mining approach. \\ EX7 & The publication focuses on information extraction from privacy notices. \\ EX8 & The publication presents a tool designed to write policies in a standard policy language by following its strict syntax (e.g., XML editors to write XACML policies [54]). \\ \hline \hline \end{tabular} \end{table} Table 4: Inclusion and Exclusion Criteria

Figure 2: PRISMA [61] flow diagram summarizing the research identification and study selection process.
Secondly, the retained 178 publications were further reviewed by reading the abstract, introduction, and sometimes the entire publication to decide whether or not to include them in the SLR. After removing the publications that did not match the inclusion and exclusion criteria, 20 publications were left as the result of Phase 1. Footnote 6: [https://www.zotero.org](https://www.zotero.org) Searching through the selected conferences and journals mentioned in Table 3 was done under Phase 2. Our search was limited to publications from 2013 onwards. By using the same search strings, we extracted 23 potentially relevant publications. It is worth mentioning that we were careful not to extract publications that had already been extracted in Phase 1. Once the publications were reviewed, we obtained ten publications after applying the inclusion and exclusion criteria. Additional papers were identified by Backward Snowballing Search [91] in Phase 3. This method allowed us to go through the references and citations mentioned in the publications obtained in Phase 1 and Phase 2. We extracted 19 relevant papers in this phase without any time limitations. At the end of Phase 3, we had a total of 49 publications retrieved by the SLR. The summary of the research identification and study selection process is depicted in Fig. 2, as suggested by [25].

#### 3.2.2. Data Extraction

Because we included the terms "privacy policies" and "privacy policy" in the search string, multiple publications that extract information, such as data practices, from "privacy notices" were returned. In almost all of those publications, the authors' main focus was to develop techniques to extract information from lengthy privacy notices and present it to the **end user** in a more usable way. Therefore, since those returned publications were neither related to the access control policy configuration domain nor focused on the system administrator, we decided to exclude them. However, privacy notices also contain natural language access control policies that can be used to extract information for policy generation [13; 14]. Therefore, after careful consideration, we retained several publications related to the "privacy policy" domain that focus on extracting access control policies and their rules. In order to extract the necessary data from the included publications, we first examined them. From each publication, we collected general quantitative data (e.g., title, author, year, published venue, etc.) as well as the qualitative information that aligns with the research questions formulated in Section 3.1.

### Data Analysis

Since we aim to answer the research questions qualitatively by identifying recurring patterns of meaning and concepts within the extracted data, we next conducted **Thematic Analysis** [12] according to the following steps. 1. _Familiarisation_: Extracted data were read and summarised using the Mendeley Reference Management Software7 to obtain an overview of the data. We used Mendeley instead of Zotero for this step, as it offers more advanced annotation and note-taking features. Footnote 7: [https://www.mendeley.com](https://www.mendeley.com) 2.
_Coding_: As we were coding with specific research questions (i.e., RQ1 and RQ2) in mind, we followed the integrated coding procedure [19], which allowed us to start coding with an initial list of codes derived from the research questions and the authors' knowledge of access control policy generation (deductive), and to expand the code list by adding new codes based on the extracted data (inductive). Following that approach, the first author assigned codes that reflect the features relevant to answering the research questions. For instance, the first author assigned the code _"parsing"_ to the text _"In our approach, after identifying the different sentence types, we parse each line (sentence) using the Stanford Natural Language Parser..."_ [79]. Furthermore, the extracted data were read multiple times to refine the codes and ensure they were assigned correctly. 3. _Generating initial themes_: Upon coding, all the codes were compiled into logical groups (e.g., text editors, templates, etc.) to identify themes that help to answer the research questions (RQ1 and RQ2). 4. _Reviewing the themes_: Initial themes were checked against the extracted data segments, with the involvement of all the authors, to ensure that they told a compelling story about policy generation tools, frameworks, and their limitations. To fine-tune the story, we refined the initial themes and sometimes split existing themes. For example, we split the theme graphical policy configuration into policy authoring and policy visualization to highlight the different approaches that help administrators write and understand policies. 5. _Defining and naming higher-order themes_: Finally, we defined two main themes, graphical policy authoring and visualization and NLP-based automated policy generation, with their detailed descriptions, and assigned each initial theme to one of those two categories. The co-authors validated the process by reviewing the consistency of codes and themes against the associated data and examining whether the generated themes responded to the research questions RQ1 and RQ2. Several meetings were conducted involving the three authors to discuss the disagreements and issues regarding the generated codes and themes. As a result, we minimized possible inconsistencies in the coding process. Once the themes and categories were generated, the main author filled out a spreadsheet to classify the articles based on the detailed descriptions of the themes/categories. Later, the agreements and disagreements of the other coders regarding the classification decisions were expressed and recorded in a meeting to calculate the metrics for inter-rater reliability (i.e., Cohen's kappa (\(\kappa\)) [88] and percentage of agreement). After the calculation, we noticed that the co-authors showed "substantial agreement" (\(\kappa\geq 0.76\)) in classifying articles into the defined categories. However, we came across several disagreements on classifying articles based on the graphical policy configuration tool categories (authoring vs. visualization tools). We discussed the disagreements in a meeting and resolved them by examining the descriptions of the categories. Even though we measured inter-rater reliability here, it is worth noting that we were more focused on incorporating the different perspectives of the co-authors when developing themes and assigning relevant publications to them than on the reliability measurement, as advised by Braun and Clarke [12].
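As a minimal illustration of how the reported agreement figures can be computed, the following sketch uses scikit-learn's `cohen_kappa_score` on hypothetical coder decisions; the labels and values below are illustrative only and are not the actual SLR data:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical classification decisions of two coders over ten publications
# (category labels are illustrative, not the actual ratings from this SLR).
coder_1 = ["authoring", "visualization", "nlp", "nlp", "authoring",
           "visualization", "nlp", "authoring", "nlp", "visualization"]
coder_2 = ["authoring", "visualization", "nlp", "authoring", "authoring",
           "visualization", "nlp", "authoring", "nlp", "nlp"]

kappa = cohen_kappa_score(coder_1, coder_2)
agreement = sum(a == b for a, b in zip(coder_1, coder_2)) / len(coder_1)
print(f"kappa = {kappa:.2f}, raw agreement = {agreement:.0%}")
```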
## 4. Results

Access control failures due to policy configuration mistakes by system administrators can result in drastic data breaches, putting an entire organization at risk of financial loss and reputation damage [8]. As we identified, such access control failures occur mainly due to the lack of usability and the lack of reliability of the existing access control policy configuration and generation solutions [34; 47; 66; 79; 80; 93; 94]. Therefore, to avoid such failures, it is essential to improve their usability and their security in terms of reliability (i.e., address the usability-security trade-off). To achieve that, first, we have to identify where those usability and reliability issues lie in the existing tools and frameworks and provide solutions for them. With that motivation, we conducted this SLR identifying two main types of tools and frameworks developed to configure and generate access control policies: **Graphical policy configuration (i.e., authoring and visualization) tools** [9; 16; 17; 34; 35; 36; 44; 47; 48; 66; 67; 68; 80; 81; 84; 100] (e.g., text editor-based tools, template-based tools, access matrix-based tools, and graph-based tools), and **NLP-based automated policy generation frameworks** [4; …] (which we discuss in terms of four steps: (1) pre-processing, (2) text classification, (3) information extraction, and (4) information transformation), and their limitations, as shown in Fig. 3. Identifying those limitations will help discover ways to improve the usability and reliability of access control policy generation and, in turn, help develop more effective access control policy generation frameworks in the future.

Figure 3: Thematic map showing the identified access control policy configuration and generation approaches. Under RQ1, we discuss graphical policy configuration (i.e., manual) in terms of policy authoring and visualization tools and NLP-based policy generation (i.e., automated) in terms of 4 steps. Under RQ2, we discuss the limitations of identified tools and frameworks related to the above approaches.

### Graphical policy authoring and visualization tools

According to our thematic analysis (Han et al., 2017), 16 of the included publications proposed graphical user interfaces (GUIs) to guide administrators in access control policy authoring and visualization. Among those publications, some have proposed **text editors** […]. If a policy should contain a purpose that has to be checked before granting/denying access, administrators might neglect it when building policies using text editor-based policy authoring tools. In that case, according to the aforementioned example, doctors will be able to edit the patient's records for any reason, causing access control failures. Therefore, in order to avoid such scenarios, template-based policy authoring tools were developed, providing pre-defined templates that support the policy components necessary for a particular organization to write policies using pre-defined policy elements (Sutton et al., 2016; Sutton et al., 2017; Sutton et al., 2018).

#### 4.1.2. **Template-based policy authoring tools**

A "template" is a specification of the structure of a list of natural language policies (Sutton et al., 2016). Template-based access control policy authoring tools provide one or more such templates with placeholders to input policy components to build a complete access control policy (Sutton et al., 2016; Sutton et al., 2017; Sutton et al., 2018). For example, to write a policy such as _"Database administrators can read database record fields."_, Johnson et al.
(2016) provided a template of {Internal Users} can {Action} {Resource}, as shown in Fig. 4(d). However, a single template would not suffice to support all the policy requirements of an organization. Therefore, Johnson et al. also allow administrators, in their policy authoring interface shown in Fig. 4(d), to create and modify templates depending on the access requirements (Sutton et al., 2016; Sutton et al., 2017). Nevertheless, user study participants have raised concerns that the template-based policy authoring tool of Johnson et al. is overly flexible, as it might allow administrators to make "general" templates (Sutton et al., 2016). For example, if the template is {Subject} can {Read} {Database record fields}, the administrator can input any user (e.g., internal users and external users) as the Subject, allowing even external users to read database records, even though this should only be done by internal users, as shown in Fig. 4(d).

Figure 4. Examples for graphical policy authoring and visualization tools: (a) “easyXACML” text-editor-based tool (Sutton et al., 2016), (b) Policy authoring interface developed for e-scientists (Sutton et al., 2016) (c) “Expandable Grids” access matrix-based visualization tool (Sutton et al., 2016; Sutton et al., 2017), (d) Template-based tool by Johnson (2016), (e) “VisABAC” visualization tool (Sutton et al., 2018).

If external users are able to read databases containing the organization's confidential information, such as customers' personal details, it will result in data breaches, harming the organization and its customers (Krishnan et al., 2017). On the other hand, some other template-based policy authoring interfaces (Krishnan et al., 2017) lack the flexibility to support the unique access requirements of different organizations, as they are limited to one pre-defined template specifically designed for one particular type of rule, such as business rules (Krishnan et al., 2017). Therefore, if the policy requirements are different in other domains, such as healthcare (Krishnan et al., 2017), the same template-based tool cannot be easily adapted to support those different access control policies. However, neither text editor-based tools nor template-based policy authoring tools help administrators to graphically visualize the existing policies so that administrators can easily understand the relationships between policies (Krishnan et al., 2017; Krishnan et al., 2017). That understanding is important, especially when making changes to existing policies (Krishnan et al., 2017; Krishnan et al., 2017). For instance, consider a scenario where the administrator has to add a user who is restricted from reading the financial information of an organization to a group that has permission to read that information. In that case, if the administrator cannot clearly see that the user is going to be added to a group with conflicting permissions and that the allow rules take precedence (Krishnan et al., 2017), the administrator might accidentally allow that user to access the financial information of the organization, leading to access control failures (Krishnan et al., 2017; Krishnan et al., 2017).
Therefore, to avoid such failures due to the lack of holistic awareness of how the policies affect each other, previous literature also proposed access matrix-based policy authoring and visualization tools (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017) and graph-based policy visualization tools (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017) to visualize access control policies.

#### 4.1.3. **Access Matrix-based policy authoring and visualization tools**

The "access control matrix" is the most common type of interface we identified that guides the administrator to both configure access control policies and visualize them in a matrix. Lampson first introduced it (Lampson, 1979) as a two-dimensional table, where each row represents a user, each column represents a resource, and each cell contains the operations that the subject is allowed to perform on the resource. After that, access control matrix-based tools were proposed as a method of configuring and visualizing access control policies (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017). For example, Reeder et al. proposed "Expandable Grids", shown in Fig. 4(c) (Krishnan et al., 2017; Krishnan et al., 2017). It represents all the users and their associated groups in columns and all the resources and associated resource groups in rows. In contrast to the other access matrix-based interfaces (Krishnan et al., 2017; Krishnan et al., 2017), Expandable Grids uses a sub-grid in each cell to authorize each user to read, write, delete, execute, and administrate resources. Furthermore, to represent whether each of the above five actions is allowed, denied, or only partially allowed when a user inherits access from a group (i.e., effective permission), green, red, and yellow colors were used, respectively, as shown in Fig. 4(c). Nevertheless, since access matrices are two-dimensional, they can only represent users and resources along their two axes. Therefore, access matrix-based policy authoring and visualization tools often cannot represent access control policies that contain policy components such as conditions and purposes, even though those significantly affect the authorization decision (Krishnan et al., 2017; Krishnan et al., 2017). For instance, consider a similar example to the one we used earlier: _"The doctor can read the patient's records to prescribe medicine only if the patient agrees."_ Suppose the condition (i.e., if the patient agrees) and the purpose (i.e., to prescribe medicine) are neglected because access matrices do not support them. In that case, the configured policy implies that doctors neither need the patient's consent nor a reason to access and write the patient's medical records. As a result, not only does it cause a privacy violation, but anyone with a doctor's credentials can also easily gain access to someone's medical history. Furthermore, the access matrix-based visualization approach can be cumbersome when dealing with a large number of users and resources in the organization, resulting in policy misconfigurations [47; 48]. For example, consider an access matrix containing many users (i.e., columns) and many resources (i.e., rows). To give a user permission to access a resource, the administrator might need to navigate through many columns to find the correct user and follow the column through many rows using the mouse until the correct resource is found.
When navigating, if the administrator's mouse accidentally slips from the desired row/column to an adjacent row/column, the administrator might identify the wrong resource and/or the wrong user and give permission, causing an access control misconfiguration (i.e., an "off-by-one error" [66]). Therefore, to avoid such scenarios caused by the navigation difficulties of the conventional access matrix, graph-based policy visualization tools were developed [47; 48].

#### 4.1.4. **Graph-based policy visualization tools**

Graphs/trees can take different forms, such as layered graphs [9] and treemaps [47]. Each type of graph has nodes and edges that connect the nodes. In some policy visualization tools, nodes represent policy components such as users, and edges represent relationships between them [9]. In several other cases, nodes represent rules in a policy, and edges represent the relationships between those rules and how they are combined to form the policy [47; 48]. For example, Morisset et al. introduced a treemap-based visualization tool, "VisABAC", to visualize Attribute-based Access Control (ABAC) policies [47; 48], as shown in Fig. 4(e). They utilized a special form of treemaps named "Circular Treemap", which represents tree nodes as circles. Therefore, a parent node with two children nodes will be represented as a circle containing two sub-circles in the circular treemap. In the access control domain, the parent circle would be a policy, and the children circles would be either policies or rules that are combined to form the policy [47; 48]. In contrast to access matrices, VisABAC focuses on rules, policies, and their relationships instead of policy components such as subjects, actions, and resources. Therefore, no matter how high the number of users and resources in the organization is, it does not affect the visualization, making the visualization easy to navigate compared to access matrices [47]. However, the existing graph-based policy visualizations may not always be easily interpretable by all administrators. Different administrators might interpret the same graphical policy representation differently, leading to different conclusions about the policies based on their level of expertise [16]. The colors, line styles, and symbols used in the interface to represent rules and policies can be subject to such misinterpretations, as their meanings may not be saliently described within the policy visualization interface or may be hidden inside separate windows [9; 47; 48]. For example, to get to the point where the line styles and colors used are explained in the VisABAC interface, administrators have to navigate through multiple windows each time they attempt to interpret a policy, while memorizing the visualization diagrams [47; 48]. That would negatively affect the efficiency of policy configuration [44]. Furthermore, even if they found that information, the line styles used to denote the operations "Deny Unless Permit (DUP)" and "Permit Unless Deny (PUD)" look almost identical in the interface, as shown in Fig. 5, even though they have completely opposite meanings. If administrators cannot correctly identify such subtle differences in the line styles, they might misinterpret such visualization features, resulting in an incorrect understanding of the policies. Figure 5. Line conventions used by VisABAC to represent operations (a) Deny overrides (b) Permit overrides (c) Deny unless permit (d) Permit unless deny (e) First applicable (f) Only one applicable [47; 48].
The line styles used to represent Permit Unless Deny and Deny Unless Permit look almost identical, increasing the chance of misidentifying them.

Even though graphical policy authoring and visualization tools attempt to guide the administrator to write and visualize access control policies, doing so remains a manual, repetitive, and laborious task that increases the administrator's overhead (Stenberg et al., 2017). This overhead can lead to fatigue and stress for the system administrator, increasing the likelihood of mistakes when configuring policies (Stenberg et al., 2018). The consequences of such mistakes could become even more severe, as none of the identified policy authoring and visualization tools provide adequate feedback when a mistake happens (Stenberg et al., 2019), such as the configuration mistake itself (e.g., a policy conflict due to an incorrectly written policy), its location (e.g., the conflicting policies), the severity of the mistake (e.g., how permissions will change if the conflicting policy is applied to the system), and possible solutions (e.g., how to resolve the conflict). As a result, administrators might resort to trial and error to find and resolve such mistakes, ending up adding more misconfigurations to the authorization system and causing access control failures leading to data breaches (Stenberg et al., 2019). Therefore, previous literature proposed removing the human factor entirely from policy generation through NLP-based automated policy generation frameworks.

### NLP-based automated policy generation frameworks

#### 4.2.1. **Step 1: Pre-processing**

Pre-processing is cleaning, transforming, and preparing textual data before it can be used to train or run inference with ML/NLP models [79; 14; 94]. Some of the widely used pre-processing techniques in the extracted literature are:

* _Sentence Tokenization_: Sentence tokenization is the process of splitting the sentences in NL documents using punctuation marks indicating the sentence boundaries [49; 76; 77; 78; 79; 83; 94]. This pre-processing step is necessary when processing documents containing NLACPs, as followed in [79; 94].
* _Word and subword Tokenization_: Word tokenization is the process of breaking down a sentence into word tokens based on the white spaces [75; 76; 77; 78] (both tokenization steps are illustrated in the short sketch following this list).
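As a minimal illustration of these two tokenization steps (using NLTK purely as an example toolkit; the sentences and the choice of library are ours, not taken from the reviewed frameworks):

```python
import nltk

# Requires the Punkt sentence tokenizer data, e.g. nltk.download("punkt")
# (or "punkt_tab" on newer NLTK releases).
document = ("The doctor can read the patient's records. "
            "A nurse is able to update the care plan.")

sentences = nltk.sent_tokenize(document)              # sentence tokenization
words = [nltk.word_tokenize(s) for s in sentences]    # word tokenization
print(sentences)
print(words)
```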
#### 4.2.2. **Step 2: Text Classification**

Rule-based text classification techniques rely on pre-defined grammatical patterns; if an NLACP is ambiguous and, in turn, does not agree with those patterns, it will not be considered a legitimate NLACP by such techniques. For example, as Slankas et al. found out, only 34.4% of the sentences in the dataset they used agree with the four patterns identified by Xiao et al. shown in Table 5, leaving the rest of the NLACPs undetected [79]. These situations are particularly problematic, especially if the authorization system operates under the default-allow principle [34]. For example, in default-allow systems, only those actions explicitly denied will be restricted. Suppose a policy that restricts nurses from accessing patients' personal medical records was neglected by the text classification algorithm because the policy was ambiguous or not written according to any grammatical pattern in the pattern database. In that case, since the nurses' access to personal medical records will not explicitly be restricted in the default-allow system, they will automatically gain access to those records, causing access control failures. Therefore, without being limited by the number of pre-defined hardcoded patterns, machine learning (ML) based algorithms were also utilized to classify NLACPs. In contrast to rule-based techniques, machine learning-based techniques learn common patterns in non-NLACP and NLACP sentences from a given training set and classify an unseen sentence based on the learned patterns [76, 77, 78, 79]. For instance, the most prevalent ML-based classification algorithm used to identify NLACPs is the k-NN (k-Nearest Neighbours) algorithm (7 articles) [51, 52, 53, 76, 77, 78, 79], as shown in Table 6. k-NN classifies a given data point based on the majority vote of the existing classifications of the k nearest neighbors to the data point (i.e., the most frequent label of the k data points of the training dataset closest to the given data point) [76]. However, finding the closest k sentences (NLACP or non-NLACP) to a given NL sentence can be tricky compared to finding the closest numerical values, as the number of attributes of each sentence differs depending on the number of words [76]. Therefore, Slankas et al. used a modified version of the Levenshtein distance to calculate the distances between the query sentence and the sentences in the training datasets [76, 79]. Instead of using the number of edits needed to transform one string into another, as the traditional Levenshtein distance metric does, Slankas et al. used the number of word transformations needed to convert the query sentence into a training sentence as the distance metric [76, 79].
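The following sketch illustrates the general idea of k-NN classification with a word-level edit distance; it is a simplified, self-contained illustration of the approach described above (with made-up training sentences), not Slankas et al.'s actual implementation:

```python
from collections import Counter

def word_edit_distance(a, b):
    """Levenshtein distance computed over words instead of characters."""
    a, b = a.lower().split(), b.lower().split()
    dp = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, wb in enumerate(b, 1):
            cur = min(dp[j] + 1, dp[j - 1] + 1, prev + (wa != wb))
            prev, dp[j] = dp[j], cur
    return dp[-1]

def knn_classify(query, training, k=3):
    """Label a sentence as ACP / non-ACP by majority vote of its k nearest neighbours."""
    neighbours = sorted(training, key=lambda ex: word_edit_distance(query, ex[0]))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

# Toy training set (illustrative sentences only).
training = [
    ("A doctor can read the patient's record", "ACP"),
    ("A nurse is able to update the care plan", "ACP"),
    ("The hospital was founded in 1990", "non-ACP"),
    ("Appointments are listed on the notice board", "non-ACP"),
]
print(knn_classify("The doctor can update the patient's record", training))
```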
Apart from traditional machine learning-based classification techniques such as Support Vector Machines (SVM) and Decision Trees, deep learning-based classification was also employed in the included literature [3, 4, 5, 30, 50, 93]. For instance, with the new developments in the NLP domain, transformer-based language models [87] were used on several occasions to identify NLACPs [93, 30], as shown in Table 6. For example, Heaps et al. [30] utilized (fine-tuned) Bidirectional Encoder Representations from Transformers (BERT) [26] to classify user stories as NLACP, non-NLACP, and ambiguous, as well as to classify their access type (i.e., read, write, etc.). However, the reliability of the ML/NLP-based text classification techniques used in existing policy generation frameworks is affected by the lack of domain-related datasets [5, 49]. Domain-related datasets help the ML/NLP algorithms to be trained and adapted to the access control domain so that the model understands the patterns that are unique to access control policies. However, since there are not enough high-quality, annotated data for access control policy classification, the ML/NLP models used to classify policies were not properly trained (Cheng et al., 2017; Chen et al., 2018; Chen et al., 2019; Chen et al., 2020; Chen et al., 2021). As a result, the existing policy generation frameworks sometimes might fail to identify necessary NLACPs in the first place, resulting in missing machine-executable access control policies and, in turn, security holes in the authorization system.

\begin{table} \begin{tabular}{c l} \hline \hline **Semantic pattern** & **Example** \\ \hline Modal Verb in Main Verb Group & An \(\text{HCP}_{\text{[subject]}}\)**can view \({}_{\text{[action]}}\)** the patient’s account\({}_{\text{[resource]}}\). \\ Passive Voice followed by To-infinitive Phrase & An \(\text{HCP}_{\text{[subject]}}\)**is disallowed to update \({}_{\text{[action]}}\)** patient’s account\({}_{\text{[resource]}}\). \\ Access Expression & An \(\text{HCP}_{\text{[subject]}}\)**has read\({}_{\text{[action]}}\) access to patient’s account\({}_{\text{[resource]}}\). \\ Ability Expression & An \(\text{HCP}_{\text{[subject]}}\)**is able to read \({}_{\text{[action]}}\)** patient’s account \({}_{\text{[resource]}}\). \\ \hline \hline \end{tabular} \end{table} Table 5. Semantic patterns in Access Control Sentences identified by Xiao et al. [94].

After identifying the NLACP sentences from the NL documents, the necessary components and rules of the NL policy should be extracted to generate the machine-executable policy. Therefore, as the next step, information extraction was carried out.

#### 4.2.3. **Step 3: Information Extraction**

Information extraction is the process of extracting structured information from unstructured or semi-structured data sources such as NL sentences and NL documents (Kang et al., 2019). Table 7 shows the information extraction techniques used in the previous literature with the associated references. According to Table 7, the most prevalent technique used to extract information from NLACPs (23 publications) is syntactic parsing. Syntactic parsing is a method of analyzing the grammatical structure of a sentence (Sutton et al., 2017). It identifies the syntactical relationships between the words of a sentence and ultimately creates a structured representation of the sentence named a "parse tree".
There are two main syntactic parsing techniques that were widely utilized to extract access control policy components and access control rules: shallow parsing and dependency parsing. For example, to extract policy components from the NLACP _"The doctor can read the patient's record"_ using shallow parsing, the following three simple grammar rules can be used. Rule 1: NP \(\Rightarrow\) {<JJ>?(<NNS> | <NN>)*}, Rule 2: VP \(\Rightarrow\) {<VB>}, and Rule 3: NP \(\Rightarrow\) {<NP><POS><NP>}. The above grammar rules instruct the shallow parser to create chunks in three steps. First, the shallow parser will create chunks by combining adjectives (JJ) and singular nouns (NN) or plural nouns (NNS), and tag the chunks as noun phrases (NP) according to Rule 1. The affected chunks are highlighted inside green boxes in Fig. 7. Second, the shallow parser will tag verbs (VB) as verb phrases (VP) according to Rule 2, as shown inside an orange colored box in Fig. 7. Finally, it will create chunks by combining a noun phrase (NP), a genitive marker (e.g., "'s") (POS), and another noun phrase (NP), and tag them as a noun phrase (NP) as instructed by Rule 3. The chunks generated using Rule 3 are highlighted inside a blue colored box in Fig. 7. Then the policy components such as subjects, actions, and resources can easily be extracted by identifying noun phrases that contain adjectives and nouns as the "Subjects", verb phrases as the "Actions", and noun phrases built from a noun phrase, a genitive marker, and another noun phrase as the "Resources"8. Following a similar procedure, Brodie et al. designed a set of grammar rules for the "SPARCLE policy workbench" that operate in order (sequentially) to add tags to the NLACPs indicating the places where policy components start and end. They used the cascaded structure to first extract the hard-to-extract components such as conditions, obligations, and purposes, and then to extract user categories, data categories, and finally actions [14]. However, for shallow parsing to extract components, NLACPs must be written according to the specific structure that was used to define the grammar rules; otherwise, the desired components cannot be extracted. For example, if the mentioned NLACP is written in a different structure, such as _"Patient's record can only be read and written by the doctor"_, the aforementioned grammar rules may not be able to parse the NLACP and identify all the necessary components. That will result in incorrect access control policies [73]. Footnote 8: Modal auxiliaries (MD) and determiners (DT) will not play a role in identifying policy components.
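To make these chunking rules concrete, the following minimal sketch applies them with NLTK's RegexpParser to a hand-tagged version of the example sentence; it is our own illustration, the tag patterns are adapted slightly to NLTK's syntax (e.g., "+" instead of "*" so that chunks are never empty), and in practice the POS tags would come from a tagger rather than being hard-coded:

```python
import nltk

# Hand-tagged running example; a POS tagger (e.g., nltk.pos_tag) would normally supply the tags.
tagged = [("The", "DT"), ("doctor", "NN"), ("can", "MD"), ("read", "VB"),
          ("the", "DT"), ("patient", "NN"), ("'s", "POS"), ("record", "NN")]

# Rules 1 and 2 from the text.
stage1 = nltk.RegexpParser(r"""
    NP: {<JJ>?<NN|NNS>+}   # Rule 1: optional adjective + nouns -> noun phrase
    VP: {<VB>}             # Rule 2: base-form verb -> verb phrase
""")
# Rule 3 operates on the chunks produced above: NP + genitive marker + NP -> NP.
stage2 = nltk.RegexpParser(r"NP: {<NP><POS><NP>}")

tree = stage2.parse(stage1.parse(tagged))
print(tree)
# Subjects, actions, and resources can then be read off the NP/VP chunks.
```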
On the other hand, 11 of the extracted publications utilized dependency parsing to extract policy components together with their relations (i.e., access control rule extraction) [1, 4, 5, 49, 77, 79, 83, 93, 96, 97, 98]. In contrast to shallow parsing, dependency parsing identifies the relationships between the words of a sentence and generates a directed graph containing the tokens as the nodes and the relationships as the edges.

\begin{table} \begin{tabular}{p{42.7pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline & **Information** & **Information extraction task** & **References** & **Highest reported performance** \\ \hline & & & [1, 7, 13, 14, 27, 33, 41, 49, 69, 72, 73, 77, 78, 85, 93, 94, 96, 97] & F1: 0.96 (Shallow parsing - CNL) [14], F1: 0.57 (Dependency parsing) [79] \\ \hline & & & \\ & & NLACP attribute extraction & [4, 5, 93] & Not reported \\ \hline & & & \\ & & & \\ & & & \\ & & & \\ & & & \\ & & & \\ & & & \\ & & & \\ & & & \\ \hline \hline \end{tabular} \end{table} Table 7. Information extraction techniques. Performances of each information extraction technique are shown in the fourth column in terms of F1 score [79] and Accuracy (Acc.) [94] with the used algorithm within brackets. The full breakdown of information extraction can be found in [https://cutt.ly/gwkrEbks](https://cutt.ly/gwkrEbks).

For instance, the dependency parse tree of the NLACP _"The doctor can read the patient's records"_ is shown in Fig. 8. The relationships between word tokens are shown next to the arrows; these relationships cannot be extracted with shallow parsing. According to the figure, the subject, "doctor", and the action (i.e., VERB in Fig. 8), "read", are connected by the nominal subject (nsubj) relationship. Furthermore, the direct object (dobj) relationship can be seen between the action and the resource, "records". This nsubj - VERB - dobj relationship is also the most common pattern identified by Slankas et al., according to Table 8. Therefore, by searching for patterns that contain the nsubj and dobj relationships, access control rules were extracted in previous literature [1, 49, 77, 78, 79, 93, 96, 97]. To identify similar dependency patterns, Slankas et al. used a "bootstrapping" mechanism to identify different dependency relationships in NLACPs, starting from ten known seed patterns with three vertices (subject, operation, and resource) and expanding the pattern database with new patterns along the way [77, 79]. All the known seed patterns were the same except for the verb [79]. The subject and the resource nodes were kept as wildcards to match and extract any nouns associated with the verb. After extracting subjects and resources using the seed patterns, they expanded their pattern database in two ways: (1) by extracting additional dependency patterns that contain the known subjects and resources, and (2) by applying a series of transformations to the existing patterns (e.g., transforming patterns in the active voice into the passive voice). Some of the patterns identified in this process are listed in the first column of Table 8. In contrast to Slankas et al., Alohaly et al. manually identified the five most common relations that encode subject-attributes and object-attributes of NLACPs in [5], as shown in columns 2 and 3 of Table 8. Nevertheless, it is important to note that the effectiveness of using a dependency parser for policy component extraction depends on the quality and quantity of the identified patterns in the pattern database [79]. An access control rule/attribute relation in an NLACP cannot be extracted if it does not match a dependency pattern in the database [77].

Fig. 8: Dependency parse tree of the sentence _"The doctor can read the patient's records"_. det: relationship between a determiner (DT) and a noun (NOUN), nsubj: relationship between the noun and the verb (VERB), aux: relationship between an auxiliary verb (AUX) and the main verb, dobj: relationship between the verb and its object, poss: relationship between a noun and its possessive modifier, case: relationship between a noun and its case marker (PART).
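To illustrate how such nsubj/dobj patterns can be matched programmatically, here is a minimal sketch of our own (not taken from any of the reviewed frameworks); it assumes spaCy is installed together with its small English pipeline (en_core_web_sm):

```python
import spacy

# Assumes the small English pipeline has been installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def extract_rules(sentence):
    """Return (subject, action, resource) triples found via nsubj/dobj dependencies."""
    rules = []
    for token in nlp(sentence):
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ == "dobj"]
            for s in subjects:
                for o in objects:
                    rules.append((s.text, token.lemma_, o.text))
    return rules

print(extract_rules("The doctor can read the patient's records."))
# e.g. [('doctor', 'read', 'records')]
```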
Fig. 7: Shallow parse tree generated using the NLTK library for the sentence _"The doctor can read the patient's record"_ according to the grammar rules NP \(\Rightarrow\) {<JJ>?(<NNS> | <NN>)*}, VP \(\Rightarrow\) {<VB>}, NP \(\Rightarrow\) {<NP><POS><NP>}. NP: Noun phrase, NN: Noun, NNS: Plural noun, JJ: Adjective, VP: Verb phrase, VB: Verb in its base form, POS: Genitive marker (i.e., "'s"), MD: Modal auxiliary.

However, NLACPs are often ambiguous and complex [33; 52; 94], making it harder to parse them and extract policy components correctly, especially using rule-based parsing techniques such as shallow parsing or dependency parsing [14; 73], because such ambiguous and complex NLACPs might not agree with the parsing rules used to build the parsers, and it is difficult to create rules that cover all possible ambiguous and non-ambiguous NLACP sentence structures [43]. Therefore, to avoid the ambiguities of unconstrained NLACPs, some publications have used a CNL (Controlled Natural Language) to write policies in NL, including [13; 14; 27; 33; 41; 72; 73; 85]. Even though CNLs are not as strict as policy languages such as XACML, they are designed by restricting the grammar and vocabulary allowed to write policies, in order to prevent ambiguities and to ensure that policies are interpreted consistently [92]. For example, Brodie et al. restrict the user to only two semantic structures to write the policy [14], while Shi et al. provide five semantic structures [73] for writing policies, to avoid parsing failures. Nevertheless, CNLs limit the ability to express complex policies because of their limited syntax [33], making the existing rule-based policy generation frameworks less flexible. Therefore, to improve the flexibility of policy generation frameworks by allowing the administrator to generate machine-executable policies from unconstrained NLACPs, deep learning-based information extraction techniques were then utilized in the previous literature [30; 50; 52; 53; 93; 96]. The two most common deep learning-based techniques utilized in the extracted articles to extract information are Named Entity Recognition (NER) [30; 50; 52; 53; 99] and Semantic Role Labeling (SRL) [50; 52; 53; 93; 96], as shown in Table 7. NER is an NLP task that identifies the named entities in a given sentence. Therefore, several extracted publications used NER to extract entities related to the access control domain, such as users, actions, resources, etc., from NLACPs [30; 99]. For example, Heaps et al. fine-tuned the BERT language model [26] using a dataset containing user stories [21] to extract such named entities to build access control policies [30]. However, there is one major problem with using NER to identify policy components. Consider an NLACP, _"The doctor can write the patient's record, and the nurse can only read the patient's records"_. A properly fine-tuned NER model can identify the named entities doctor and nurse as users, read and write as actions, and patient's record as a resource. However, since NER only focuses on extracting entities, it does not indicate which action is associated with which subject and which resource (i.e., the access control rule).
In the above example, the action "_write_" belongs to the subject "_doctor_" and the resource "_patient's records_". Since NER does not identify such relationships, it cannot extract access control rules from complex access control policies that contain multiple rules representing different users performing different actions on different resources [30]. As a result, it will cause access control failures, leading to data breaches.

\begin{table} \begin{tabular}{c l l} \hline \hline **Patterns between policy** & **Patterns between** & **Patterns between** \\ **components [79]** & **subject-attributes [5]** & **object-attributes [5]** \\ \hline (VB root (NN nsubj) (NN dobj)) & nsubj, amod & dobj, amod \\ (VB root (NN nsubjpass)) & nsubj, prep & pobj, amod \\ (VB root (NN nsubj) (NN prep)) & nsubjpass, amod & dobj, prep \\ (VB root (NN dobj)) & nsubj, compound & dobj, compound \\ (VB root (NN prep\(\_\)\%)) & nsubj, ROOT, amod & nsubjpass, amod \\ \hline \hline \end{tabular} \end{table} Table 8: Dependency patterns that encode relationships between the policy components (column 1), subject-attributes (column 2), and object-attributes (column 3) of Access Control Sentences. VB: verb, NN: noun, nsubj: nominal subject, dobj: direct object, nsubjpass: passive nominal subject, amod: adjectival modifier, prep: prepositional modifier, pobj: object of preposition

For instance, someone could generate a policy using the extracted entities from the NLACP mentioned earlier by allowing the nurse to write the patient's records, since the NER output does not mention that the nurse can only read them. In such cases, NER cannot be used to generate access control policies accurately [30]. As a solution, previous literature then utilized SRL algorithms to extract policy components that can handle multiple rule scenarios [50, 51, 52, 53, 93, 96]. SRL is used to analyze the meaning of the sentence by extracting its predicate-argument structure, determining "_who did what to whom_", "_when_", "_where_", etc. [74]. Since SRL explicitly detects the subject (_who_), action (_what_), resource (_whom_), as well as other environment attributes such as location (_where_) and time (_when_), previous literature extensively employed different semantic role labeling tools to extract access control rules and attributes. Some of them are SENNA (Semantic/Syntactic Extraction using a Neural Network Architecture) [18], a neural network-based semantic role labeler used by Narouei et al. [50, 51, 52, 53], SwiRL [82] used by Narouei et al. [53] and Yang et al. [96], EasySRL [40] used by Narouei et al. [53], the Mate-tools Semantic Role Labeler [10] used by Narouei et al. [53], and BERT-based SRL [74] used by Xia et al. [93]. Nevertheless, almost all the publications that used SRL for access control rule extraction did not properly adapt the used SRL algorithms to the access control domain using a domain-related dataset [50, 51, 52, 93, 96], even though adapting them to the access control domain increases the access control rule extraction accuracy [53]. For example, Narouei et al. showed that adapting the SwiRL SRL model to the access control domain with even a small amount of labeled domain-related data increased the rule extraction F1-score by 2% [53]. Instead, most of the existing works used general-purpose SRL models mentioned earlier to extract components without domain adaptation [50, 51, 52, 93, 96]. This has raised two main problems.
While SRL extracts most of the required policy components, it only extracts one user and one resource for a given predicate/action of the NLACP [74]. For instance, BERT-based SRL by Shi et al. [74] identifies _"The doctor and the nurse_" as a single user in the NLACP _"The doctor and the nurse can read patient's records_.", despite having 2 users, _"The doctor"_ and _"The nurse"_ that belong to two rules. Therefore, to extract components with more granularity, another technique such as NER [50, 52, 53] or dependency parsing should be used on the users and resources extracted by SRL [93]. Secondly, general-purpose SRL models were often trained to generate multiple labels associated with each predicate/action in the input sentence [93]. As a result, the SRL model will extract subjects and resources associated with predicates that are not related to access control policies, such as "is", "are", etc. In the above example BERT-based SRL model [74] outputs two sequences of labels related to the two predicates, _can_ and _read_. These additional subjects and resources related to unwanted predicates will generate redundant and incorrect access control policies, bringing the overall rule extraction accuracy down and making the maintainability of policies difficult [93]. Therefore, a pruning technique should be employed to filter the unwanted predicate-based label sequences to extract access control rules from the correct predicate-based output [93]. Up to this point, we discussed the techniques used in previous literature to pre-process NL documents, identify ACP sentences using text classification, and extract required policy components/rules from the identified NLACPs. As the last step, several publications then utilized information transformation formats to represent those extracted components as machine-executable codes. #### 4.2.4. **Step 4: Information Transformation** The most common transformation format among those articles was XACML (eXtensible Access Control Markup Language), which was used in 7 of the identified articles [13, 14, 27, 73, 83, 85, 94]. Apart from XACML representation of the policy, other XML-based representations such as PERMIS (PrivilEge and Role Management Infrastructure Standards) [73, 33], and EPAL (Enterprise Privacy Authorization Language) [13, 14, 85] were also employed in the extracted publications. However, if the policy generation pipeline outputs the generated policies in a specific policy language, its compatibility is reduced. For example, if the policy generation pipeline generates policies in XACML, even though the administrators need them in PERMIS language, they have to put in extra effort to translate XACML policy into a PERMIS policy. Therefore, to make the policy generation more compatible, several publications propose intermediate representations to transform generated access control policies into ontologies (Friedman et al., 2017; Goyal et al., 2018; Goyal et al., 2019; Goyal et al., 2019) or JSON (JavaScript Object Notation) format (Kumar et al., 2019; Goyal et al., 2019). ## 5. Discussion In this systematic literature review (SLR), we analyzed 49 publications by following the guidelines proposed by Kitchenham (Kitchenham, 2017) and reported according to the PRISMA framework to identify the tools and frameworks used for access control policy configuration and generation. 
We reported the unique features and limitations of the previous attempts to generate access control policies from high-level natural language requirements using graphical policy authoring and visualization tools and NLP-based automated policy generation frameworks to answer the research questions RQ1 and RQ2. ### Graphical policy authoring and visualization Through the SLR, we revealed that the graphical policy authoring and visualization tools provide graphical interfaces that allow administrators to write and visualize policies with less cognitive load. As we reported in Section 4.1, previous literature proposed graphical tools such as text editor-based tools (Kumar et al., 2019; Goyal et al., 2019; Goyal et al., 2019; Goyal et al., 2019), template-based tools (Kumar et al., 2019; Goyal et al., 2019; Goyal et al., 2019; Goyal et al., 2019), access matrix-based tools (Friedman et al., 2017; Goyal et al., 2019; Goyal et al., 2019) and graph-based visualization tools (Goyal et al., 2019; Goyal et al., 2019; Goyal et al., 2019). These tools provide a higher level of abstraction, allowing administrators to focus on the high-level access control requirements of the organization rather than low-level technical details of the access control model, language, or syntax. However, despite having those advantages, we identified and discussed several limitations of the graphical policy authoring and visualization tools in Section 4.1. As our SLR revealed, all those discussed limitations make the existing policy authoring and visualization tools less usable for effective access control configuration, causing access control failures (Kumar et al., 2019; Goyal et al., 2019; Goyal et al., 2019). Therefore, to improve the usability of graphical policy authoring and visualization, by following Nielsen's usability guidelines (Nielsen, 2017), we suggest improving the learnability, memorability, and user satisfaction of those tools while improving the accuracy and efficiency of the policy configuration process. _Improving the learnability_ - According to Nielsen's usability components, "Learnability" measures how easy it is to perform a given policy configuration task for the first time using a policy authoring or a visualization tool (Nielsen, 2017). In order to improve the learnability of those tools, they can be designed in a way that the tools are easily explainable to administrators using words, phrases, and concepts familiar to the user (Nielsen, 2017), so that administrators will be able to easily interpret the functionalities of the interface and successfully configure access control policies(Kumar et al., 2019; Goyal et al., 2019). Brostoff et al. utilized the mentioned learnability improvement technique to improve their policy authoring interface by simplifying its label names used to define its policy configuration features to make the interface easily explainable to administrators (Kumar et al., 2019). In their user study, they found that the study participants were able to successfully understand the access control mechanism by referring to the labels alone, as the label names were more explainable to the participants compared to the previous versions of the interface (Kumar et al., 2019). On the other hand, to make the policy visualization more explainable, the visualization features (e.g., colors, shapes, line styles, etc.) that were used to visualize policies, and their meanings can be described clearly in the visualization interface (Kumar et al., 2019; Goyal et al., 2019; Goyal et al., 2019). 
To clearly describe that information, Reeder et al. (Reeder et al., 2019; Goyal et al., 2019) have used a legend displayed at the top of their access matrix-based policy authoring and visualization interface "Expandable Grids". As a result, their interface users were able to quickly learn the interface and complete the given policy configuration tasks easily, compared to Windows XPFP (Windows XP File Permission) interface users [66]. Therefore, based on the mentioned empirical evidence, we suggest making the policy authoring and visualization tools easily explainable to administrators to improve their learnability. _Improving the memorability_ - To improve the memorability of policy authoring and visualization tools, they can be designed in a way that the administrator can easily remember them and the functionality of their features [55]. Making the interface simple and utilizing visual cues such as icons and colors was one method followed by previous literature to make policy authoring and visualization tools memorable [56; 70; 73; 80; 81]. Stepien et al. used a simple structure to develop their text editor-based policy authoring tool containing only four text boxes to input subject, action, resource, and condition and two radio buttons to select whether the policy is an allow policy or deny policy [80; 81]. Therefore, since the interface is simple, even if administrators stop using the interface for some time, they will be able to gain the same level of proficiency in the interface quickly when they return to the interface. Furthermore, previous research found that visualization techniques, as well as colors and visual cues such as icons, help improve the memorability of the user interface significantly [70]. For example, instead of having to read many lines of code to understand and memorize relationships between access control rules, displaying all the rules in an easily explainable visual representation such as access matrices [44; 66] would help to improve the memorability of the interface. Therefore, embedding policy visualization techniques with colors and visual cues into policy authoring tools would be another way of improving the memorability of policy configuration tools. _Improving the efficiency_ - "Efficiency" measures how quickly the administrators can perform configuration tasks once they learn the tool [55]. As we revealed in this SLR, one of the main reasons that prevent the administrator from efficiently configuring policies is poor representation of task-relevant information [44]. Suppose the information relevant to configuring and understanding access control policies, such as the user's stated permissions and the user's effective permissions (i.e., permissions derived based on the user's individual permissions and permissions of the groups that the user belongs to), is either not displayed at all or hidden inside different windows of the interface. In that case, the administrator might not have a holistic idea about how the access control mechanism works and how access control rules affect one another to derive the final access decision of the policy [44; 66; 67].

Fig. 9: Different policy authoring approaches of SPARCLE Policy Workbench [36] that reported high user satisfaction. (a) NL with a guide approach: that provides a guide (highlighted in blue) to write access control policies in controlled natural language (CNL). (b) The Structured list approach: that allows the administrator to select policy components to build the policy as a sentence.
Therefore, if the administrator tries to write policies by searching for that information each time, the administrator's policy authoring efficiency will be decreased [44; 66]. To avoid such situations, Reeder et al. [66] designed their interface "Expandable Grids" by displaying all the information relevant to configuring and understanding access control policies (e.g., stated permissions, effective permissions, etc.) in a single access matrix. Consequently, in the user study, Reeder et al. found that the average policy configuration task completion time of their interface users (i.e., 53.0s) is lower (by 35.3s) than the average task completion time of the Windows XPFP interface users (i.e., 88.3s) [66]. Therefore, we suggest displaying all the task-relevant information clearly and saliently within the interface to improve the policy configuration efficiency. _Reducing errors_ - When configuring a policy, human errors can occur during four stages [44]. Stage 1: Identify and interpret information relevant to policy configuration and decide whether or not the policy is properly configured. Stage 2: If not, formulate a sub-goal based on the interpreted information to configure the policy step by step; if the entire policy is properly configured, exit the loop. Stage 3: Formulate the plan to achieve the sub-goal. Stage 4: Execute the plan [44]. However, if the information relevant to identifying whether the policy is correctly configured (in Stage 1) or to creating sub-goals (in Stage 2) is unavailable, incorrect, or misinterpreted, "goal errors" can occur, resulting in access control failures [44]. The solution for those errors is to make the relevant information available to administrators in a correct, easily understandable form [44; 66]. Therefore, by displaying all the information relevant to creating sub-goals in an easily understandable access matrix, Reeder et al. designed their access matrix-based policy authoring and visualization tool, "Expandable Grids" [66; 67]. As a result, in their user study, Reeder et al. found that the overall policy configuration accuracy of their interface users is 83.6%, which is 27.1% higher than the Windows XPFP users who did not have proper task-relevant information displayed in their interface [66]. On the other hand, even if the relevant information is available, if the interface does not support complex and unique access control policies, administrators might not be able to generate sub-goals (in Stage 2) correctly to configure complex and unique policies, leading to access control failures, as we discussed in Section 4.1. As a solution, by following Johnson et al., we can allow administrators to write policies with different structures and policy components through the graphical policy authoring tool [35]. Nonetheless, errors can still occur when applying those policies to the authorization system, such as a policy conflict in Stage 4 [67] (e.g., writing a policy by allowing a user to access a resource that is already restricted by another policy.). In such cases, administrators have to know about the exact location where the error occurred, how severe the error is, and what the possible solutions are [56; 95]. 
To do that, the policy authoring interfaces can be improved to provide feedback in a timely manner as usable error messages and warnings by emphasizing the consequences if the incorrectly written policy is applied to the system (e.g., if the conflicting policy is applied to the system, the user might gain access to confidential information of the organization.) [56; 95]. As a result, administrators will become more cautious when writing access control policies, leading to reduced error rates [56]. _Improving the subjective satisfaction_ - To make the administrator satisfied with the policy authoring experience, one technique used in previous literature is to improve the "naturalness" of the language used to write policies [33; 36; 56]. By doing so, administrators were able to easily translate their mental plans into a machine-executable policy without doubting the quality of their work, leading to higher satisfaction [33]. Shi et al. [73] confirmed that theory by evaluating the satisfaction of their policy authoring interface against the traditional PERMIS policy authoring GUI through the Post-Study System Usability Questionnaire (PSSUQ) from IBM [29]. PSSUQ scale ranges from 1 (no effort to use the tools) to 7 (the tool is unusable) [73]. As Shi et al. found out, since their interface improves the "naturalness" of policy authoring compared to the traditional PERMIS GUI, their interface received the overall satisfaction score of 3.01, while the traditional GUI received the satisfaction score of 3.87 [73]. Nevertheless, subjective satisfaction can further be improved by providing guidelines to write access control policies (Karate et al., 2017). To test that hypothesis, Karat et al. conducted a user study by evaluating the satisfaction of 36 policy authors when they used different policy authoring approaches (Karat et al., 2017). The satisfaction was evaluated according to a questionnaire using a 7-point Likert scale (7 being the highest satisfaction) (Karat et al., 2017). In that study, Karat et al. found that when the administrators were provided with either a guide to write a complete access control policy as shown in Fig. 9(a) or a template to fill its blanks with the provided policy components in lists as shown in Fig. 9(b), the interface achieved a higher user satisfaction (satisfaction scores of 4.9 and 4.6 respectively) compared to unguided policy authoring (satisfaction score of 3.8), which did not provide either a set of guidelines or lists of policy components (Karat et al., 2017). Therefore, we suggest utilizing natural language to write policies with a clear set of guidelines on writing complete and correct access control policies to improve subjective satisfaction while improving the quality of written policies. ### NLP-based automated policy generation Our SLR revealed that NLP-based automated policy generation frameworks possess the potential to generate accurate access control policies with minimum human involvement. Among many NLP techniques, previous literature employed rule-based (Karate et al., 2017; Karat et al., 2018; Karat et al., 2018), machine learning-based techniques (Karate et al., 2017; Karat et al., 2018; Karat et al., 2018), and deep learning-based techniques (Karat et al., 2017; Karat et al., 2018) to generate access control policies from high-level NL requirements. However, we revealed that the existing automated policy generation frameworks are not reliable enough to generate accurate access control policies without human supervision. 
As we reported in Section 4.2, it is mainly because most of those frameworks inherit the limitations of the NLP techniques utilized to build those frameworks and lack domain adaptation due to the scarcity of domain-related datasets. Therefore, to develop a more reliable policy generation framework, first, it is necessary to identify what are the best (in terms of accuracy) and most prevalent techniques used by the existing automated policy generation frameworks in each step of the policy generation process: (1) pre-processing, (2) text classification, (3) information extraction, and (4) information transformation. Identifying those best techniques would help researchers to improve and combine them to develop more reliable and secure policy generation frameworks in the future. _Pre-processing_ - Pre-processing was carried out in the existing policy generation pipelines using several techniques such as sentence tokenization (Karate et al., 2017; Karat et al., 2018; Karat et al., 2018), word tokenization (Karate et al., 2017; Karat et al., 2018; Karat et al., 2018), stop-word removal (Karate et al., 2017; Karat et al., 2018), and filtering, as we identified in Section 4.2.1. Among those techniques, filtering, sentence tokenization, and subword tokenization can be considered the most important and necessary steps to perform when generating access control policies (Karate et al., 2018). After a high-level requirement specification document is obtained, first, it is necessary to filter and remove unnecessary parts of the document, such as titles, headers, footers, etc. (Karate et al., 2017), as we can safely assume that NLACPs will not appear within those sections. Then, the document should be tokenized to separate paragraphs, lists, etc., into sentences (i.e., sentence tokenization) and sentences into meaningful subwords (i.e., subword tokenization) (Karate et al., 2017; Karat et al., 2018; Karat et al., 2018) to convert the sentences in the document to a set of integers based on a vocabulary (i.e., a lookup table). We recommend subword tokenization over word tokenization, as it reduces the vocabulary size, making the lookup operation faster compared to word tokenization (Karate et al., 2018). The aforementioned pre-processing steps ensure that the data used to train and run inference with an ML/NLP model is properly cleaned and meaningful. Therefore, we suggest performing the above three pre-processing steps, namely (1) filtering, (2) sentence tokenization, and (3) subword tokenization, before feeding the NL documents into an ML/NLP algorithm, as shown in Fig. 10. _Text classification_ - After pre-processing, the pre-processed data were often fed to a text classification algorithm first to identify NLACPs (Karate et al., 2017; Karat et al., 2018; Karat et al., 2018). Among the many text classification techniques reported in previous access control policy generation research, a transformer-based language model (LM) named BERT (Zhu et al., 2019) has achieved the highest NLACP classification performance (i.e., F1 score of 0.92 (Zhu et al., 2019) for the dataset shown in Table 9), according to Table 6. These LMs were pre-trained on gigabytes of data, enriching them with a significant understanding of NL compared to other techniques, such as rule-based parsing techniques (Zhu et al., 2019; Zhu et al., 2019). Therefore, those LMs are inherently better at understanding nuances of English sentences and, in turn, at handling ambiguous and complex NLACP structures.
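A minimal, illustrative sketch of this suggested pipeline (sentence tokenization, subword tokenization, and transformer-based NLACP classification) is given below; the toy document, the labels, and the choice of `bert-base-uncased` are assumptions made for demonstration and do not reproduce the exact setup of the cited studies.

```python
# Sketch: sentence tokenization -> subword tokenization -> NLACP classification.
# Assumes nltk (with the 'punkt' tokenizer models), transformers, and torch.
import torch
from nltk.tokenize import sent_tokenize
from transformers import AutoTokenizer, AutoModelForSequenceClassification

document = ("Introduction. The doctor can read the patient's record. "
            "The system was deployed in 2010.")
sentences = sent_tokenize(document)       # split the filtered document into sentences
labels = torch.tensor([0, 1, 0])          # illustrative labels: 1 = ACP sentence

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")   # subword tokenizer
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):                        # a few illustrative fine-tuning steps
    out = model(**batch, labels=labels)   # cross-entropy loss on ACP vs non-ACP
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In practice such a classifier would, of course, be trained on a much larger annotated corpus, which is exactly the data scarcity issue discussed next.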
Thus, upon carefully adapting them to the access control domain by training them with the dataset introduced by Slankas et al. (2019), shown in Table 9, Xia et al. were able to produce state-of-the-art results in policy identification (Zhu et al., 2019). However, adapting an LM to the access control domain requires a relatively large annotated dataset, since an LM often contains millions, if not billions, of trainable parameters to update during training (Zhu et al., 2019; Zhu et al., 2019). Therefore, if a sufficient dataset is available, utilizing transformer-based LMs would be a promising approach for more reliable access control policy identification. On the other hand, if it is difficult to collect such sufficient real-world datasets due to privacy implications, data augmentation techniques such as back translation (Zhu et al., 2019) could be used to generate more data, which can then be annotated manually (Zhu et al., 2019) or automatically (Zhu et al., 2019), as we discuss later in the section. Once the dataset is expanded with more annotated data, it can be used to fine-tune transformer-based LMs to extract access control rules, as shown in Fig. 10. _Information extraction_ - Upon identifying NLACPs, their policy components (i.e., subject, action, resource, etc.) were extracted next. As we revealed in this SLR, the overall performance of techniques used to extract policy components depends on the language used to write NLACPs, as depicted in Fig. 10. For example, if the NLACP is written in a controlled natural language (CNL) (i.e., written according to a specific template), shallow parsing (Beng et al., 2019) was the most promising approach (F1-score of 0.96) to extract policy components according to Table 7, as it is easy to design grammar rules for known sentence structures to achieve a higher parsing accuracy (Beng et al., 2019; Zhu et al., 2019). However, high-level requirement specification documents are often written in unconstrained natural language (Zhu et al., 2019), which makes it difficult for shallow parsers to correctly identify policy components in them using pre-defined grammar rules, as we discussed in Section 4.2.3. Therefore, in that case, according to previous literature, we suggest utilizing transformer-based LMs to extract access control rules when developing a policy generation framework in the future, because, according to Table 7, they were able to achieve F1 scores of 0.87 in extracting policy components using NER when there is only one rule in the policy and 0.72 in extracting policy components as meaningful rules via SRL when there are multiple rules in a policy. _Information transformation_ - Choosing the policy language that can represent the extracted access control rules depends on the type of policies that the organization uses (Zhu et al., 2019). For example, according to previous literature, if the organization is using ABAC (Attribute-based Access Control) policies, the recommended language would be XACML, as it is specifically designed to represent ABAC policies (Beng et al., 2019; Zhu et al., 2019; Zhu et al., 2019; Zhu et al., 2019). On the other hand, previous literature recommended the PERMIS language when dealing with RBAC (Role-based Access Control) policies (Beng et al., 2019; Zhu et al., 2019). However, if the generated policies are in a specific policy language such as XACML, they might not be compatible with an organization that uses PERMIS, and vice versa.
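One way to sidestep this compatibility problem, previewing the suggestion that follows, is to keep extracted rules in a policy-language-agnostic intermediate form; the JSON schema and the XACML-like rendering below are our own illustrative assumptions, not a format prescribed by the cited works.

```python
# Illustrative (non-standardized) intermediate representation of one extracted rule;
# a separate back-end could translate it into XACML, PERMIS, or any other language.
import json

rule = {
    "decision": "allow",
    "subject": {"role": "doctor"},
    "action": "read",
    "resource": {"type": "patient_record"},
    "condition": None,            # e.g. time or location constraints, if extracted
    "source_sentence": "The doctor can read the patient's record.",
}

print(json.dumps(rule, indent=2))

def to_pseudo_xacml(r: dict) -> str:
    """Very rough sketch of rendering the intermediate rule as an XACML-like
    <Rule> element; real XACML generation would use a proper policy library."""
    effect = "Permit" if r["decision"] == "allow" else "Deny"
    return (f'<Rule Effect="{effect}">'
            f'<Subject>{r["subject"]["role"]}</Subject>'
            f'<Action>{r["action"]}</Action>'
            f'<Resource>{r["resource"]["type"]}</Resource></Rule>')

print(to_pseudo_xacml(rule))
```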
Therefore, to maintain compatibility, we suggest generating the final access control policies in an intermediate representation such as an ontology (Beng et al., 2019; Zhu et al., 2019; Zhu et al., 2019; Zhu et al., 2019) or JSON format (Zhu et al., 2019), which can be easily processed to extract rules and generate machine-executable policies in any policy language.

Figure 10. Framework to design reliable access control policy generation frameworks according to provided suggestions.

#### 5.2.1. **Datasets**

According to the above discussion, one of the key factors that decide what technique to use to identify NLACPs and extract their policy components/rules accurately is the availability of datasets. However, as we revealed in this SLR, many previous studies have highlighted that the access control policy engineering domain suffers from a scarcity of domain-related data (Zhu et al., 2019; Zhu et al., 2019). Nevertheless, we came across one dataset which is widely used among the extracted literature, including [3, 4, 6, 50, 51, 52], introduced by Slankas et al. in [79]. The dataset consists of five data sources containing 2477 sentences from multiple real-world systems, such as iTrust [45], the IBM course registration system, CyberChair, and the Collected ACP data from [94]. Detailed information about the dataset is shown in Table 9, with the highest F1 scores achieved for each data source in the Text Classification (TC F1) and Information Extraction (IE F1) steps of the access control policy generation process.

\begin{table} \begin{tabular}{c c c c|c c} \hline \hline \multirow{2}{*}{**Data source**} & **Total** & **ACP** & **non-ACP** & \multirow{2}{*}{**TC F1**} & **IE F1** \\ & **sentences** & **sentences** & **sentences** & & \\ \hline iTrust for Text2Policy [94] & 471 & 418 & 53 & 0.98 [49, 79] & 0.8 [53] \\ iTrust for ACRE [79] & 1160 & 549 & 611 & 0.9 [49] & 0.72 [93] \\ IBM Course Management & 401 & 168 & 233 & 0.97 [94] & 0.72 [93] \\ CyberChair & 303 & 140 & 163 & 0.79 [93] & 0.71 [93] \\ CollectedACP [94] & 142 & 114 & 28 & 0.92 [49] & 0.82 [53] \\ \hline **Total sentences** & 2477 & 1389 & 1088 & - & - \\ **Proportions** & - & 56\% & 44\% & - & - \\ \hline \hline \end{tabular} \end{table} Table 9. Statistics of the dataset compiled by Slankas et al. [79] and highest F1 scores achieved for each dataset in Text Classification (TC F1) and Information Extraction (IE F1).

However, the above dataset is not large and diverse enough to train a transformer-based LM [93]. Therefore, previous literature used data augmentation techniques such as back translation, which translates a sentence from the dataset into a different language and translates it back to the original language, to generate more synthetic data points from the existing data (Savav et al., 2018). Once the dataset is expanded with synthetic data, it should be annotated to train a model in a supervised manner (Savav et al., 2018). The annotation process can be done mainly in two ways: manual (Savav et al., 2018; Savav et al., 2018) and automated (Savav et al., 2018). In the manual annotation process, experienced human annotators were used to generate labels for the dataset manually (Savav et al., 2018; Savav et al., 2018). However, manual labeling is laborious, expensive, and time-consuming (Savav et al., 2018). Therefore, Narouei et al. used a semi-supervised learning technique named "pseudo labeling" to automatically generate pseudo labels for the unlabeled data using a pre-trained SRL model, SwiRL (Savav et al., 2018).
Then, the pseudo-labeled small in-domain dataset was mixed with a large out-of-domain dataset, and the model was re-trained, achieving a 2% increase in F1 score for access control rule extraction (Savav et al., 2018). By using the aforementioned techniques, sufficiently large and annotated datasets can be created to adapt NLP models to generate access control policies with higher reliability.

## 6. Limitations

This SLR was conducted thoroughly to provide an extensive overview of the topic while preserving the reproducibility of the reported results in the literature. Nevertheless, while conducting the SLR, as we first filter the returned articles from the search query based on their titles and abstracts alone using our inclusion and exclusion criteria, a relevant article may be excluded during the selection phase. Therefore, to avoid such situations as much as possible, we performed an additional manual search, namely a backward snowballing search, to include publications cited by the publications extracted in the digital library and conference/journal search phases (Savav et al., 2018). Then, we thematically analyzed the publications (Savav et al., 2018) to answer the research questions mentioned in Section 3.1.2. The first author performed the analysis in a systematic way to generate codes and identify themes (patterns) that help answer the research questions. However, the generated codes and themes can be biased depending on the experience, knowledge, and point of view of the coder. To reduce this bias, as Braun and Clarke advised (Sav et al., 2018), we considered the perspectives of all the authors when developing themes, as we mentioned in Section 3.3.

## 7. Conclusion and Future Works

Access control failures due to usability and reliability issues of the existing policy configuration tools and generation frameworks could lead to data breaches (Savav et al., 2018; Savav et al., 2018). Therefore, to improve their usability and reliability, we conducted an SLR analyzing (1) graphical policy authoring and visualization tools and (2) NLP-based automated policy generation frameworks to reveal their limitations. Based on our findings, we have provided several design guidelines that would help improve the usability of policy authoring and visualization tools according to Nielsen's usability components (Nielsen, 1999): learnability, memorability, efficiency, errors, and subjective satisfaction. On the other hand, we further provided guidelines to improve the reliability of automated policy generation frameworks by selecting the best techniques for each step of the policy generation process: (1) pre-processing, (2) text classification, (3) information extraction, and (4) information transformation, as well as developing and annotating datasets. Next, based on the research gaps revealed through the SLR, we will highlight several future works that would help address the usability-security trade-off of access control policy generation approaches in the future. According to our SLR, incorporating the administrator's perspective via user studies to develop policy authoring and visualization tools may improve their usability (Savav et al., 2018; Savav et al., 2018). For instance, experienced administrators might prefer textual interfaces such as Command Line Interfaces (CLIs), as they like the control that textual interfaces provide compared to graphical interfaces (Sav et al., 2018).
On the other hand, less experienced administrators might prefer GUIs, as they allow the administrator to write (i.e., policy authoring) and visualize policies without worrying about access control languages and syntax [11; 80]. Therefore, due to those differences, including administrators with different levels of expertise in the tool design process via user studies will help create more usable tools for experienced and inexperienced administrators alike in the future. After designing and developing graphical policy authoring and visualization tools, they should be evaluated to verify their usability [55] with the involvement of human subjects/participants via usability evaluation instruments such as the PSSUQ (Post-Study System Usability Questionnaire) [29] or the System Usability Scale (SUS) [15]. However, existing studies rarely used those standard usability evaluation instruments to evaluate policy authoring and visualization tools and refine them accordingly [13; 80; 81]. Those instruments provide standard questionnaires to evaluate the usability of user interfaces in terms of the usability components discussed in Section 5.1 [15; 29]. Therefore, if access control policy authoring and visualization tools were not evaluated using those questionnaires, there is a chance that those tools are unusable, leading administrators to make mistakes when writing and interpreting policies, as we revealed in Section 5.1. For example, if the tool developers did not collect feedback from user study participants on how easy it is to learn the tool (which is one of the main items of those usability questionnaires) and refine the tool accordingly, the tool might be harder to learn, resulting in mistakes when configuring access control policies. As a result, administrators might tend to misinterpret the functionalities of the interface, producing incorrect access control policies that lead to data breaches [8; 16; 66]. Thus, utilizing standard usability evaluation instruments [29] such as the PSSUQ and SUS to evaluate the usability of access control policy configuration tools can be done as future work. However, those existing instruments might not correctly evaluate the usability of access control configuration tools, because, as we identified in this SLR, their usability also depends on factors such as support for complex access requirements, avoiding misinterpretations of policy visualizations, and the ability to identify and resolve policy conflicts [68], which are not explicitly covered in general usability evaluation instruments such as the PSSUQ and SUS [68]. Therefore, we encourage researchers to develop standard usability evaluation instruments to specifically evaluate access control policy authoring and visualization tools in the future. An important item in the PSSUQ is "The system gave error messages that clearly told me how to fix problems.", which evaluates feedback from the system [29]. However, as we discussed in Section 4.1, existing policy authoring tools do not provide sufficient feedback by pointing out policy authoring mistakes, their locations, their severities, and how to resolve the mistakes [56; 95]. If such feedback is not provided clearly, as Xu et al. found out, administrators often resort to trial and error to find and correct the mistakes, sometimes introducing more errors to the authorization system [95].
Therefore, to provide such feedback, first, that feedback should be carefully designed in a precise and concise way that is easily understandable by highlighting the severity of mistakes [56], irrespective of the administrator's expertise. To do that, Explainable Security (XSec) concepts [89] can be used to decide what information should be presented in the feedback, where and when to display the feedback, and how the feedback should be displayed while emphasizing its severity. Therefore, we suggest conducting research on designing feedback (i.e., error messages, warnings [56]) with the help of XSec specifically for access control policy configuration systems as future works. Once the feedback is designed, it should be generated based on the written policy automatically. Therefore, future research can then focus on improving the existing automated policy generation frameworks to automatically generate feedback/insights on the poorly written (i.e., ambiguous, incomplete, etc.) NLACP, by taking the administrator's expert level into account. By doing so, administrators will become more cautious when writing policies, and they do not need to use trial and error to find solutions, leading to fewer access control failures. However, to adapt ML/NLP techniques to generate insights on poorly written NLACPs or even to generate machine-executable policies, diverse, correctly annotated datasets are required, as we discussed in Section 5.2. Hence, developing such datasets should also be done as a part of future research, which will help accurately generate machine-executable policies and feedback on poorly written NLACPs automatically. Once datasets are developed, NLP/ML models such as transformer-based LMs can be trained using techniques such as transfer learning [90] or Parameter Efficient Fine Tuning (PEFT) [32] to generate feedback on the poorly written NLACPs. Furthermore, as we revealed in this SLR, automated policy generation frameworks might not be 100% reliable even if the most advanced NLP techniques that are proven to provide more accurate results in general text classification and information extraction tasks such as transformer-based LMs [26, 42, 58, 65] are used to build them [30, 93]. As a solution, while the policy generation framework provides feedback to the administrator, the administrator's expertise can also be utilized to provide feedback on incorrectly generated policies by the policy generation frameworks via a usable interface. That feedback can be used to re-train the underlying policy generation framework with Reinforcement Learning with Human Feedback (RLHF) [58] to improve the automated access control policy generation framework further. Adapting these techniques to optimize the automated policy generation framework (especially text classification and information extraction steps) and combining it with a usable interface that supports the aforementioned feedback mechanisms can be another important future research direction. That combined framework will help avoid data breaches due to access control failures in two ways. First, since it improves the administrator's policy authoring experience via a usable interface and a usable feedback mechanism that provides insights on poorly written policies, human mistakes that lead to data breaches will be alleviated. 
Secondly, since it uses advanced NLP techniques to improve the reliability of the underlying policy generation process with human (i.e., administrator) feedback, errors of automated policy generation will also be reduced, leading to fewer access control failures in the future.
2306.01363
Quantifying Sample Anonymity in Score-Based Generative Models with Adversarial Fingerprinting
Recent advances in score-based generative models have led to a huge spike in the development of downstream applications using generative models ranging from data augmentation over image and video generation to anomaly detection. Despite publicly available trained models, their potential to be used for privacy preserving data sharing has not been fully explored yet. Training diffusion models on private data and disseminating the models and weights rather than the raw dataset paves the way for innovative large-scale data-sharing strategies, particularly in healthcare, where safeguarding patients' personal health information is paramount. However, publishing such models without individual consent of, e.g., the patients from whom the data was acquired, necessitates guarantees that identifiable training samples will never be reproduced, thus protecting personal health data and satisfying the requirements of policymakers and regulatory bodies. This paper introduces a method for estimating the upper bound of the probability of reproducing identifiable training images during the sampling process. This is achieved by designing an adversarial approach that searches for anatomic fingerprints, such as medical devices or dermal art, which could potentially be employed to re-identify training images. Our method harnesses the learned score-based model to estimate the probability of the entire subspace of the score function that may be utilized for one-to-one reproduction of training samples. To validate our estimates, we generate anomalies containing a fingerprint and investigate whether generated samples from trained generative models can be uniquely mapped to the original training samples. Overall our results show that privacy-breaching images are reproduced at sampling time if the models were trained without care.
Mischa Dombrowski, Bernhard Kainz
2023-06-02T08:37:38Z
http://arxiv.org/abs/2306.01363v1
# Quantifying Sample Anonymity in Score-Based Generative Models with Adversarial Fingerprinting

###### Abstract

Recent advances in score-based generative models have led to a huge spike in the development of downstream applications using generative models ranging from data augmentation over image and video generation to anomaly detection. Despite publicly available trained models, their potential to be used for privacy preserving data sharing has not been fully explored yet. Training diffusion models on private data and disseminating the models and weights rather than the raw dataset paves the way for innovative large-scale data-sharing strategies, particularly in healthcare, where safeguarding patients' personal health information is paramount. However, publishing such models without individual consent of, e.g., the patients from whom the data was acquired, necessitates guarantees that identifiable training samples will never be reproduced, thus protecting personal health data and satisfying the requirements of policymakers and regulatory bodies. This paper introduces a method for estimating the upper bound of the probability of reproducing identifiable training images during the sampling process. This is achieved by designing an adversarial approach that searches for anatomic fingerprints, such as medical devices or dermal art, which could potentially be employed to re-identify training images. Our method harnesses the learned score-based model to estimate the probability of the entire subspace of the score function that may be utilized for one-to-one reproduction of training samples. To validate our estimates, we generate anomalies containing a fingerprint and investigate whether generated samples from trained generative models can be uniquely mapped to the original training samples. Overall our results show that privacy-breaching images are reproduced at sampling time if the models were trained without care.

## 1 Introduction

Maintaining privacy and anonymity is of utmost importance when working with personally identifiable information, especially if data sharing has not been individually consented to and the data thus cannot be shared with other institutions Jin et al. (2019). The potential of privacy-preserving consolidation of private datasets would be significant and could potentially solve many problems, including racial bias Larrazabal et al. (2020) and the difficulty of applying techniques such as robust domain adaptation Wang et al. (2022). Recent advances in generative modeling, _e.g._, effective diffusion models Song et al. (2020); Dhariwal and Nichol (2021); Rombach et al. (2022); Ruiz et al. (2022), enabled the possibility of model sharing Pinaya et al. (2022). However, it remains unclear to what extent a shared model reproduces training samples and whether or not this raises privacy concerns. In general, the idea of our research is to take a dataset \(D\) of samples from the image distribution \(p_{data}(\mathbf{x})\). Then the goal is to train a generative model \(s\) that learns only from private data. Direct privacy breaches would occur if the generative model exhibits a non-zero probability of memorizing and reproducing samples from the training set. Guarantees that such privacy breaches will not occur would ultimately allow training models on proprietary data and sharing the models instead of the underlying datasets.
Healthcare providers would be able to share complex patient information like medical images on a population basis instead of needing to obtain individual consent from patients, which is often infeasible. Guarantees that no personal identifiable information is shared would furthermore pave the way to population studies on a significantly larger scale than currently possible and allow to investigate bias and fairness of downstream applications on anonymous distribution models of sub-populations. However, currently trained and published models can be prompted to reproduce training data at sampling time. Somepalli et al. (2023) have observed that diffusion models are able to reproduce training samples and Carlini et al. (2023) have even shown how to retrieve faces of humans from training data, which raises serious privacy concerns. Other generative models are directly trained for memorization of training samples Cong et al. (2020). We propose a scenario with an adversarial that has some prior information about a training sample and would therefore be able to filter out the image based on this information. In medical imaging this could be any medical device, a skin tattoo, an implant, or heart monitor; any detectable image with visual features that are previously known. Then an attacker could generate enough samples and filter images until one of the generated samples contains this feature. If the learned marginal distribution of the generative model that contains this feature is slim, then all images generated with it will raise privacy concerns. We will refer to such identifiable features as fingerprints. To estimate the probability of reproducing fingerprints, we propose to use synthetic anatomical fingerprints (SAF), which can be controlled directly through synthetic manipulations of the training dataset and reliably detected in the sampling dataset. Our main contributions are: * We formulate a realistic scenario in which unconditional generative models exhibit privacy problems due to the potential of training samples being reproduced. * We propose a mathematical method for finding the upper bound for the probability of generating sensitive data from which we derive an easily computable indicator. * We evaluate this indicator by computing it for different datasets and show evidence for its effectiveness. ## 2 Background Consider \(D\) containing samples from the real image distribution \(p_{data}(\mathbf{x})\). In general, highly effective generative methods like diffusion models Rombach et al. (2022) work by modeling different levels of perturbation \(p_{\sigma}(\tilde{\mathbf{x}})\coloneqq\int p_{data}(\mathbf{x})p_{\sigma}( \tilde{\mathbf{x}}\mid\mathbf{x})\mathrm{d}\mathbf{x}\) of the real data distribution using a noising function defined by \(p_{\sigma}(\tilde{\mathbf{x}}\mid\mathbf{x})\coloneqq\mathcal{N}(\tilde{ \mathbf{x}};\mathbf{x},\sigma^{2}\mathbf{I})\). In this case \(\sigma\) defines the strength of the perturbation, split into N steps \(\sigma_{1},\ldots,\sigma_{N}\). The assumption is that \(p_{\sigma_{1}}(\tilde{\mathbf{x}}\mid\mathbf{x})\sim p_{data}(\mathbf{x})\) and \(p_{\sigma_{N}}(\tilde{\mathbf{x}}\mid\mathbf{x})\sim\mathcal{N}(\mathbf{x}; \mathbf{0},\sigma_{N}^{2}\mathbf{I})\) Then we can define the optimization as a score matching objective by training a model \(\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x},\sigma)\) to predict the score function \(\nabla_{\mathbf{x}}\log p_{\sigma}(\mathbf{x})\) of the noise level \(\sigma\in\{\sigma_{i}\}_{i=1}^{N}\). 
\[\boldsymbol{\theta}^{*}=\operatorname*{arg\,min}_{\boldsymbol{\theta}}\sum_{i=1}^{N}\sigma_{i}^{2}\mathbb{E}_{p_{data}(\mathbf{x})}\mathbb{E}_{p_{\sigma_{i}}(\tilde{\mathbf{x}}|\mathbf{x})}\big{[}\left\|\mathbf{s}_{\boldsymbol{\theta}}(\tilde{\mathbf{x}},\sigma_{i})-\nabla_{\tilde{\mathbf{x}}}\log p_{\sigma_{i}}(\tilde{\mathbf{x}}\mid\mathbf{x})\right\|_{2}^{2}\big{]}. \tag{1}\]

For sampling, this process can be reversed, for example, using Markov chain Monte Carlo estimation methods following Song and Ermon (2019). Song et al. (2020) extended this approach to a continuous formulation by redefining the diffusion process as a process given by a stochastic differential equation (SDE):

\[\mathrm{d}\mathbf{x}=\mathbf{f}(\mathbf{x},t)\mathrm{d}t+g(t)\mathrm{d}\mathbf{w}, \tag{2}\]

and training a dense model on predicting the score function for different time steps \(t\), where \(\mathbf{w}\) is the standard Wiener process, \(\mathbf{f}(\cdot,t)\) the drift coefficient, \(g(t)\) the diffusion coefficient, and \(\mathbf{x}(t)\) the process that models the data distribution. Therefore, the continuous formulation of the noising process, denoted by \(p_{t}(\mathbf{x})\) and \(p_{st}(\mathbf{x}(t)\mid\mathbf{x}(s))\), is used to characterize the transition kernel from \(\mathbf{x}(s)\) to \(\mathbf{x}(t)\), where \(0\leq s<t\leq T\). Anderson (1982) shows that the reverse of this diffusion process is also a diffusion process. The backward formulation is

\[\mathrm{d}\mathbf{x}=[\mathbf{f}(\mathbf{x},t)-g(t)^{2}\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})]\mathrm{d}t+g(t)\mathrm{d}\bar{\mathbf{w}}, \tag{3}\]

which also extends the formulation of the discrete training objective to the continuous objective:

\[\boldsymbol{\theta}^{*}=\operatorname*{arg\,min}_{\boldsymbol{\theta}}\mathbb{E}_{t}\Big{\{}\lambda(t)\mathbb{E}_{\mathbf{x}(0)}\mathbb{E}_{\mathbf{x}(t)|\mathbf{x}(0)}\big{[}\left\|\mathbf{s}_{\boldsymbol{\theta}}(\mathbf{x}(t),t)-\nabla_{\mathbf{x}(t)}\log p_{0t}(\mathbf{x}(t)\mid\mathbf{x}(0))\right\|_{2}^{2}\big{]}\Big{\}}. \tag{4}\]

In Eq. 4, \(\lambda(t):[0,T]\rightarrow\mathbb{R}_{>0}\) is a weighting function, often neglected in practice. Song et al. (2020) show that the reverse diffusion process of the SDE can be modeled as a deterministic process, as the marginal probabilities can be modeled deterministically in terms of the score function. As a result, the problem simplifies to an ordinary differential equation:

\[\mathrm{d}\mathbf{x}=\Big{[}\mathbf{f}(\mathbf{x},t)-\frac{1}{2}g(t)^{2}\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\Big{]}\mathrm{d}t, \tag{5}\]

and can therefore be solved using any black-box numerical solver such as the explicit Runge-Kutta method. This means that we can perform exact likelihood computation, which is typically done in the literature, to estimate how likely the generation of test samples, _e.g._, images, is. This means that a low negative log-likelihood (NLL) is desirable. In our case, we want to estimate the likelihood of reproducing training samples at test time. Ideally, this probability would be zero or very close to zero.

## 3 Method

Typically, the NLL measures how likely the trained model is to generate held-out test samples. To use it to evaluate the memorization of training data, we compute the NLL of the training dataset. A limitation of using the NLL is that it only computes the likelihood of the exact sample to be reproduced at sampling time and therefore is insufficient for giving estimates of the likelihood of generating samples that raise privacy concerns.
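As a point of reference for the objective the score model is trained with (Eq. (1)), a minimal PyTorch-style sketch of a single denoising score-matching step is given below; the tiny network, the \(\sigma\) schedule, and the random batch are placeholders rather than the architecture or data used in this work.

```python
# Illustrative single training step of sigma^2-weighted denoising score matching (Eq. (1)).
import math
import torch
import torch.nn as nn

class TinyScoreNet(nn.Module):
    """Stand-in score network s_theta(x, sigma) for 28x28 single-channel images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.SiLU(),
                                 nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, x, sigma):
        # crude sigma-conditioning: scale the raw output by 1/sigma
        return self.net(x) / sigma

score_net = TinyScoreNet()
opt = torch.optim.Adam(score_net.parameters(), lr=1e-4)

# geometric sigma schedule sigma_1 < ... < sigma_N (placeholder values)
sigmas = torch.exp(torch.linspace(math.log(0.01), math.log(50.0), 10))

x = torch.rand(8, 1, 28, 28)                         # placeholder batch of images
sigma = sigmas[torch.randint(0, len(sigmas), (8,))].view(-1, 1, 1, 1)
x_tilde = x + sigma * torch.randn_like(x)            # x_tilde ~ p_sigma(. | x)

# grad_{x_tilde} log p_sigma(x_tilde | x) = -(x_tilde - x) / sigma^2
target = -(x_tilde - x) / sigma ** 2
pred = score_net(x_tilde, sigma)

per_sample = ((pred - target) ** 2).flatten(1).sum(dim=1)   # squared L2 norm
loss = (sigma.view(-1) ** 2 * per_sample).mean()            # sigma_i^2 weighting, Eq. (1)
loss.backward()
opt.step()
```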
We can compute the likelihood of the exact sample but this does not mean that the images in the immediate neighborhood are not leading to privacy issues. We assume that all images of the probability distribution are within a certain distance to the real image. As a result, we propose to estimate the upper bound of the likelihood of reproducing samples from the entire subspace that belongs to the class of private samples. First, we define the sample \(\mathbf{x}_{p}\) that we consider to be a potential privacy breach and augment this sample by adding a synthetic anatomic fingerprint (SAF) to it. This SAF is used to identify the sample, which raises privacy concerns. Then we repeatedly apply the diffusion and reverse diffusion process and check when the predicted sample starts to diverge to a different image. ### Estimation Method Let \(p_{s}(\mathbf{x}_{p})\) define the likelihood of the model \(\mathbf{s}\) to reproduce the private sample \(\mathbf{x}_{p}\) at test time. Following Eq. (5), we can compute the likelihood of this exact sample. However, this does not account for the fact that images in the immediate neighborhood, like slightly noisy versions of \(\mathbf{x}_{p}\), are not anonymous. Consequently, we are interested in computing \(q(p)\), which is defined as the likelihood of reproducing any sample that is similar enough to the target image that it raises privacy concerns: \[q(p)=\int_{\Omega_{p}}p_{s}(\mathbf{x})\mathrm{d}\mathbf{x}, \tag{6}\] where \(\Omega_{p}\) is defined as the region of the image \(\mathbf{x}_{p}\) that is private. We determine this region by training a classifier tasked with detecting whether the image belongs to the image class, as explained in Sec. 3.4. To search through the image manifold, we make use of the reverse diffusion process centered around the SAF image \(\mathbf{x}_{p}\) defined as \(p_{t,b}\coloneqq p(\mathbf{x}_{t}\mid\mathbf{x}_{p})=\mathcal{N}(\tilde{ \mathbf{x}};\mathbf{x}_{p},\sigma_{t}^{2}\mathbf{I})\) for \(\mathbf{x}(s)\) to \(\mathbf{x}(t)\), where \(0\leq t\leq T\). We can employ the diffusion process centered around this image to sample from the neighborhood and then use the learned reverse diffusion process to generate noisy samples \(\mathbf{x}_{t,p}\). Then we can use this as starting image for the reverse diffusion process to sample \(\mathbf{x}^{\prime}_{t,p}\): \[q(p)=\int_{\Omega_{p}}p_{s}(\mathbf{x})\mathrm{d}\mathbf{x}\approx\int_{0}^{t ^{\prime}}p_{s}(\mathbf{x}_{t,p})\mathrm{d}\mathbf{t}=\int_{0}^{t^{\prime}} \mathbb{E}_{p(\mathbf{x}_{t,p})}\big{[}p(\mathbf{x}^{\prime}_{t,p})\big{]} \mathrm{d}\mathbf{t}. \tag{7}\] Technically, we could employ exact likelihood computation to estimate \(q(p)\) but this would require integrating over the continuous image-conditioned diffusion process, which would be intractable in practice. Therefore, we propose to approach and estimate this integral by computing the Riemann sum of this integral and give an upper bound estimate for it using the upper Darboux sum: \[\int_{0}^{t^{\prime}}\mathbb{E}_{p(\mathbf{x}_{t,p})}\big{[}p( \mathbf{x}^{\prime}_{t,p})\big{]}\mathrm{d}\mathbf{t}=\sum_{t}(\sigma_{t}- \sigma_{t-1})\mathbb{E}_{p(\mathbf{x}_{t,p})}\big{[}p(\mathbf{x}^{\prime}_{t,p })\big{]}\leq \tag{8}\] \[\sum_{i=0}^{t^{\prime}}\sup_{t\in[t_{i},t_{i+1}]}(\sigma_{t_{i+1} }-\sigma_{t_{i}})\mathbb{E}_{p(\mathbf{x}_{t,p})}\big{[}p(\mathbf{x}^{\prime}_{ t,p})\big{]}, \tag{9}\] which approaches the real value for steps that are small enough. 
We could compute this value by using \(\mathbf{x}_{p}\) as a query image and estimating the expectation via Monte Carlo sampling, but this would still require considerable time due to the computational complexity of exact likelihood estimation.

### Method intuition

Intuitively, we model the image space using the learned distribution of the score function \(\nabla_{\tilde{\mathbf{x}}}\log p_{\sigma_{i}}(\tilde{\mathbf{x}}\mid\mathbf{x})\) by reversing the diffusion process and checking when the model starts to "break out" by generating images classified as different samples. For large \(t\), the learned marginals \(p(\mathbf{x},t)\) span the entire image space. Importantly, by definition of the diffusion process, the distribution approaches the sampling distribution of the diffusion process if \(\sigma_{t}\) becomes large enough, i.e., \(p_{\sigma_{N}}(\tilde{\mathbf{x}}\mid\mathbf{x}_{p})\sim\mathcal{N}(\mathbf{x};\mathbf{0},\sigma_{N}^{2}\mathbf{I})\). However, for lower \(t\) the model has learned that the distribution collapses towards a single training image \(\mathbf{x}_{p}\). Essentially, it has modeled part of the subspace as a delta distribution around \(\mathbf{x}_{p}\). We want to find out how far back in the diffusion process we have to go for the model to start producing different images. The boundary \(\Omega_{p}\) is then defined as the set of all images that would collapse towards this training image and is estimated using the classifiers. Fig. 1 illustrates this process in one dimension. Note that this is different from simply defining a variance that is large enough for the classifiers to fail, as \(s_{\theta}(\mathbf{x}_{p},\sigma_{t})\) was trained to revert this noise.

### Synthetic Anatomic Fingerprint

Let \(\tilde{D}\) be our real dataset of size \(N\), which raises no privacy concerns because it lacks any identifiable information. From it we synthetically generate a dataset \(D_{p}\) that contains a single sample with a fingerprint; importantly, we remove the non-augmented version of that sample from \(D_{p}\). In practice, this can be any kind of fingerprint that appears only once in the entire training dataset. To ease the training of identification classifiers, we choose a constant grey circle, as shown in Fig. 2. The SAF sample \(\mathbf{x}_{p}\) is thus defined as an augmented version of a real sample:
\[\mathbf{x}_{p}=\mathbf{x}_{i}*(1-L_{p})+\mathbf{x}_{SAF}*L_{p}, \tag{10}\]
where \(\mathbf{x}_{i}\) is a sample drawn at random from the real dataset. The location of the SAF, determined by the mask \(L_{p}\), is randomly chosen to lie entirely within the boundary of the image. We then train a score-based model \(\mathbf{s}_{\theta}(\mathbf{x},\sigma_{t})\) on the augmented dataset \(D_{p}\). To quantify whether or not the trained model is privacy-concerning, we define an adversarial attacker that knows the fingerprint \(\mathbf{x}_{SAF}\) and can be trained to detect it. We refer to this classifier as \(c_{p}(\mathbf{x})\). The second classifier, \(c_{id}(\mathbf{x})\), is trained on \(D_{p}\) in a one-versus-all fashion to classify the image's identity. We assume that private information is given away when this classifier correctly identifies the generated sample. Importantly, we train \(c_{id}(\mathbf{x})\) with random masking using the same circular patches that were used to generate \(L_{p}\).
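As an illustration of Eq. (10), here is a minimal NumPy sketch that composites a constant grey circle into a randomly chosen position lying fully inside the image. The function name, the grey value, and the radius are illustrative choices of ours, not values taken from the paper.

```python
import numpy as np

def add_saf(image, radius=4, grey=0.5, rng=None):
    """Composite a grey circular fingerprint (SAF) into `image` (H, W) in [0, 1].

    Returns the augmented image x_p and the binary mask L_p used in Eq. (10).
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape
    # Pick a centre so the circle lies entirely within the image boundary.
    cy = rng.integers(radius, h - radius)
    cx = rng.integers(radius, w - radius)
    yy, xx = np.ogrid[:h, :w]
    mask = ((yy - cy) ** 2 + (xx - cx) ** 2) <= radius ** 2   # this is L_p
    x_saf = np.full_like(image, grey)                          # constant grey patch
    x_p = image * (1 - mask) + x_saf * mask                    # Eq. (10)
    return x_p, mask

# Example: inject a SAF into one toy 28x28 sample.
rng = np.random.default_rng(0)
toy = rng.random((28, 28))
x_p, L_p = add_saf(toy, rng=rng)
```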
Therefore, we can use \(c_{p}(\mathbf{x})\) to filter all samples generated by the generative model \(s_{\theta}\) for images that contain the SAF, and then determine whether such a sample raises privacy concerns by computing the prediction of \(c_{id}(\mathbf{x}^{\prime})\) for it.

### Boundary Computation

To give an estimate for \(q(p)\), we observe that it only depends on the likelihood \(p(\mathbf{x}_{p})\) and on \(t^{\prime}\), which is supposed to capture the entire region \(\Omega_{p}\). Therefore, we use \(\mathbf{x}^{\prime}_{p}\) as input to the classifiers and define \(\Omega_{p}\) as the region where both classifiers give a positive prediction. Since exact likelihood computation and the variance terms derived in Eq. (9) reach computationally infeasible value ranges, we use \(t^{\prime}\) as an indicator of how unlikely it is to generate critical samples from the model. Alg. 1 describes the computation of \(t^{\prime}\). We can freely choose the parameter \(M\) to trade off accuracy against computation time. Given \(\mathbf{x}_{p}\), we define \(q_{M}(p|x_{t,p})\) as the estimate of staying within the boundary of \(\Omega_{p}\) for a given diffusion step \(t\). We then define \(t^{\prime}\coloneqq\max(\mathbb{T})\), with \(\mathbb{T}\coloneqq\{t\colon q_{M}(p|x_{t,p})>0\}\). The full pipeline is illustrated in Fig. 2.

## 4 Related Work

Generative models have disrupted various fields by generating new data instances from the same distribution as the input data. These models include Variational Autoencoders (VAEs) Kingma and Welling (2013), Generative Adversarial Networks (GANs) Goodfellow et al. (2014), and, more recently, Generative Diffusion Models (GDMs). Diffusion models can be categorized as score-based generative models Song and Ermon (2019); Song et al. (2020) and models that invert a fixed noise-injection process Sohl-Dickstein et al. (2015); Ho et al. (2020). In this work we focus on score-based generative models.

Figure 1: Illustration of our estimation method in 1D. The grey line denotes the query image \(\mathbf{x}_{p}\). The estimation method iteratively increases the search space in the latent space of the generative model. The green area corresponds to image regions resulting in non-privacy-concerning generated samples, while the red area is considered critical.

Evaluating data privacy in machine learning has been a longstanding concern Dwork et al. (2006); Abadi et al. (2016); van den Burg and Williams (2021). Research on integrating privacy-preserving mechanisms in generative models is still in its infancy. Xie et al. (2018) proposed a method to make GANs differentially private by modifying the training algorithm. Jiang et al. (2022) applied differential privacy to VAEs, showcasing the possibility of explicitly integrating privacy preservation into generative models. Despite the progress in privacy-preserving generative models, little work has been done on evaluating the inherent privacy preservation of diffusion models and providing privacy guarantees dependent on the training regime. To the best of our knowledge, our work is the first to investigate natural privacy preservation in generative diffusion models, contributing to the ongoing discussion of privacy in machine learning.
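Before turning to the experiments, the exhaustive search for \(t^{\prime}\) described in Alg. 1 (Sec. 3.5) can be sketched as follows. The helper callables (`perturb`, `denoise`, `c_p`, `c_id`) are assumed interfaces for the forward kernel, the learned reverse process, and the two classifiers; they are stand-ins, not the paper's released implementation.

```python
import numpy as np

def estimate_q(t, x_p, M, perturb, denoise, c_p, c_id):
    """Fraction of M reverse-diffused samples, started from noise level t,
    that still carry the SAF and are identified as x_p (both classifiers
    positive); this plays the role of q_M(p | x_{t,p})."""
    hits = 0
    for _ in range(M):
        x_t = perturb(x_p, t)            # forward-diffuse x_p to level t
        x_rec = denoise(x_t, t)          # run the learned reverse process
        if c_p(x_rec) and c_id(x_rec):   # SAF detector and identity classifier
            hits += 1
    return hits / M

def find_t_prime(x_p, ts, M, perturb, denoise, c_p, c_id):
    """Exhaustive search (cf. Alg. 1): largest t with a positive q_M estimate."""
    positive = [t for t in ts
                if estimate_q(t, x_p, M, perturb, denoise, c_p, c_id) > 0]
    return max(positive) if positive else 0.0

# M = 16 follows the paper; the grid resolution below is our own assumption.
ts = np.linspace(0.0, 1.0, 101)
M = 16
```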
## 5 Experiments

### Dataset

For our experiments we use MedMNISTv2 Yang et al. (2021). This dataset is a collection of sub-datasets of downsampled \(28\times 28\) images from different medical imaging modalities. Some of them are multichannel, while others are single-channel. For single-channel images we repeat the channel dimension three times. For our main experiments we choose PathMNIST, due to the large number of samples available for that dataset. Furthermore, we experiment with an a-priori selected set of modalities from this collection, spanning multiple dataset sizes and channel counts.

### Models

The classifiers are randomly initialized ResNet50 He et al. (2016) architectures. To maximize robustness we employ AugMix Hendrycks et al. (2020), and in the case of \(c_{id}(\mathbf{x})\) we furthermore inject random Gaussian noise into the training images to increase the robustness towards possible artifacts from the diffusion process. In addition, we randomly mask out patches of the same shape as the SAF to reduce the effect of the SAF on the prediction. Training and sampling of the score model follow the implementation of Song et al. (2020) with a sub-VP SDE sampler, chosen for its reported good performance on exact likelihood computation, and a custom U-Net architecture based on von Platen et al. (2022). Training \(s_{\theta}\) is done on a single A100 GPU and takes roughly eleven hours. The classifiers are trained until convergence with a patience of 20 epochs, which takes less than one hour. Exhaustive search for \(t^{\prime}\), which is done by computing \(q_{M=16}(p|x_{t,p})\) for all \(t\) on a grid spanning \([0,1]\), takes four hours.

Figure 2: Visual abstract of our evaluation method for privacy problems. Uncritical samples are shown in green and critical samples in red. The SAF is injected into the training pipeline of the generative model. SAF search and identification is done using supervised training with samples from the training set. Finally, we filter generated images, looking for samples that contain this SAF, and check if we can identify the image.

### Reverse Diffusion Process

First, we experiment with the influence of the training length on \(|p|\) by sampling 10000 images from a model trained on \(|N_{D}|=1000\) and show the results in Fig. 4. For the first 14000 steps, the model only learns high-frequency attributes of the data. The visual quality is low, and so is the probability of reproducing \(\mathbf{x}_{p}\). Around 20000 steps, the visual quality of the generated samples improves, but so does the number of memorized training samples. At this point, the model already starts to accurately reproduce \(\mathbf{x}_{p}\) at sampling time. Every detected sample is visually indistinguishable from the training image; the MAE even goes down to \(1\times 10^{-4}\). Based on these observations, we continue our investigations with a fixed training length of 30000 steps. Next, we investigate the influence of the dataset size on the model's memorization behavior. To this end, we train models on datasets of different size \(|N_{D}|\), sample 150000 images from every model, and test the probability of reproducing our sample at test time. We do this by defining the null hypothesis \(H_{0}\) that the probability of sampling \(\mathbf{x}_{p}\) is equal to \(1/N_{D}\); \(H_{1}\) claims that the probability is lower. We sample 150000 images for every trained model with dataset size \(|N_{D}|\in\{1000,5000,10000,20000,50000\}\). The results are shown in Tab. 1. It can be seen that the model only learned to reproduce samples with the SAF when the dataset size was comparably low.

Figure 4: Illustration of the reverse diffusion process. Left shows query images \(\mathbf{x}_{t,p}\) for \(t\in[0,0.7]\). Right shows the resulting sample.
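The detection logic behind these counts (SAF classifier, identity classifier, and the MAE check against \(\mathbf{x}_{p}\)) can be sketched as follows; the classifier callables and the MAE threshold are illustrative assumptions on our part, not the paper's code.

```python
import numpy as np

def count_memorized(samples, x_p, c_p, c_id, mae_threshold=1e-3):
    """Count generated samples that (a) contain the SAF, (b) are identified as
    x_p's identity, and (c) are near-exact copies by mean absolute error.
    `c_p` and `c_id` are assumed classifier callables returning booleans."""
    n_saf = n_id = n_both = n_copy = 0
    for x in samples:
        has_saf = bool(c_p(x))
        same_id = bool(c_id(x))
        n_saf += has_saf
        n_id += same_id
        if has_saf and same_id:
            n_both += 1
            if np.mean(np.abs(x - x_p)) < mae_threshold:
                n_copy += 1   # visually indistinguishable reproduction
    return {"|c_p+|": n_saf, "|c_id+|": n_id, "|q|": n_both, "near-copies": n_copy}
```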
Figure 5: Representative samples from trained models on different dataset sizes \(|N_{D}|\). Figure 3: Influence of training length on generative and memorization properties. A positively classified sample can be seen in the top-left corner of the rightmost image. For \(|N_{D}|=1000\) the model was surprisingly close to the expected value, indicating that the size of the data is too small relative to the available parameter space and the model memorizes them as discrete distribution of \(1000\) unrelated images. Every other model produces very few positive predictions from the classifier all of which turn out to be false positives. The combined prediction \(q\coloneqq c_{id}(\mathbf{x})^{+}\cap c_{p}(\mathbf{x})^{+}\) is only positive for the smallest dataset. All the larger models don't have any positive samples in their dataset. The p-value for this is smaller than 5% in all cases, meaning that we can reject the null-hypothesis and assume that the probability of \(\mathbf{x}_{p}\) is smaller. Next we look at the samples of different sizes and show them in Fig. 5. Initial observation suggest that image quality drops for medium-sized datasets. However, upon closer inspection we see that the smallest model simply learns to reproduce training data, which can be seen by the fact that some images appear multiple times. This confirms our observation that the model learned the training distribution in the form a discrete set of 1000 images but never learned to generalize. In the context of data-sharing this would mean that the model is simply a way of saving training data but would still raise privacy concerns. The model trained on 5000 images seems to lie in between generalizing and memorizing the learned distribution but the size of dataset was not large enough to learn a meaningful representation. The result looks like it learned low frequency information such as color or larger structure, but the images are lacking detail. Now we can use our proposed estimation method from Alg. 1 to compute \(t^{\prime}\) for all datasets with \(M=16\). The results are shown in Fig. 6. Clearly, the probability for generating samples \(q_{M}(p|x_{t,p})\) decreases with increasing t. More importantly, the threshold at what point the probability drops, is higher for smaller \(|N_{D}|\), which means \(t^{\prime}\) is indeed an important indicator for \(q(p)\). Additionally, these results show that sharing the model with \(|N_{D}|=5000\) would raise more privacy issues as other modes, as the indicator suggests that the probability for a sample being generated at inference time is high. Finally we validate our results by looking at different datasets in Tab. 2. The results confirm our observations of a high amount of memorization in models with small dataset sizes close to the expected value. There is once again a turning point at around 5000 images where samples are no longer memorized. We can also confirm this gap by comparing the FID by calculating it on the training and test dataset. The drop is large in all cases where training samples are memorized. However, FID fails to measure the extent of this effect. PneumoniaMNIST has a larger drop in performance than RetinaMNIST but barely any memorized samples. Our proposed indicator \(t^{\prime}\) on the other hand captures this observation. 
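One way to make the reported significance check explicit (rejecting \(H_{0}\): the model reproduces \(\mathbf{x}_{p}\) with probability \(1/N_{D}\), in favor of a lower probability), complementing the \(t^{\prime}\) indicator, is a one-sided binomial test. The SciPy call below is our illustration, not the authors' code; the example plugs in the Tab. 1 counts for \(|N_{D}|=10000\) (zero combined detections among 150000 samples).

```python
from scipy.stats import binomtest

def memorization_pvalue(n_detected, n_sampled, dataset_size):
    """One-sided binomial test of H0: P(sample == x_p) = 1 / dataset_size
    against H1: the probability is lower (fewer detections than expected)."""
    return binomtest(n_detected, n_sampled, p=1.0 / dataset_size,
                     alternative="less").pvalue

p_val = memorization_pvalue(n_detected=0, n_sampled=150_000, dataset_size=10_000)
print(f"p-value = {p_val:.3g}, reject H0 at 5%: {p_val < 0.05}")
```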
Furthermore, it is also lower for the BreastMNIST dataset, \begin{table} \begin{tabular}{l c c c c c} \hline \hline \(|N_{D}|\) & 1000 & 5000 & 10000 & 20000 & 50000 \\ \hline \(\mathbb{E}\left[|q|\right]\) & 150 & 30 & 15 & 7.5 & 3 \\ \hline \(|c_{p}(\mathbf{x})^{+}|\) & 151 & 0 & 0 & 1 & 1 \\ \(|c_{id}(\mathbf{x})^{+}|\) & 151 & 0 & 3 & 3 & 4 \\ \(|q|\) & 151 & 0 & 0 & 0 & 0 \\ \hline \hline \end{tabular} \end{table} Table 1: Number of positive predictions of the classifiers for models trained on different dataset size on 150000 images. All models use the same classifiers. Figure 6: Likelihood of producing \(\mathbf{x}_{p}\) at sampling time as a function of \(t\) for \(t\in\{0,\dots,0.5\}\) and \(M=16\). We stop plotting probabilities after \(t^{\prime}\). Due to the high observed probabilities of the \(N_{D}=1000\) model, we also compute and plot the probabilities for higher t. which according to the high difference between \(\mathbb{E}(|q|)\) and \(|q|\), did not collapse as strongly towards only reproducing \(\mathbf{x}_{p}\). ## 6 Discussion We have shown that \(t^{\prime}\) is a useful indicator towards estimating \(q(p)\) since it can be directly derived from it as shown in Sec. 3.1. Our results show that training and publishing trained models without care can lead to critical privacy breaches due to direct data-sharing. The results also suggest that SAFs are either memorized or ignored. This has important implications on the feasibility of using these models instead of direct data sharing as this impedes the ability to use the shared model for datasets with naturally occurring anomalies. These features are often crucial for medical applications but highly unlikely to be reproduced at sampling time, making detecting these features in downstream applications, such as anomaly detection, even harder. Computation of \(t^{\prime}\) does not necessarily require the existence of \(c_{p}\), only that of \(c_{id}\). Therefore, it can be applied as an indicator of overfitting in diffusion-based generative models. The exhaustive search described in Alg. 1 could be approximated using improved search techniques such as binary search random subsampling of \(t\) or using a reduced search range. ## 7 Limitations Our experiments consider clear synthetic outliers that are not necessarily congruent to the real image distribution. It would be interesting to see if the effect is different if the SAF is closer to the real image. However, the fact that they are visually distinguishable from everything else is necessary for the image to remain detectable and also for the assumption that they pose a privacy concern. Additionally, our experiments focus on a single way of training and sampling the models. However, current approaches such as Meng et al. (2023) often use different samplers, training paradigms, or distillation methods. It remains to be shown whether or not these different approaches change the learned distribution of the underlying score model or if they only improve perceptual properties. Finally, due to the complexity of the problems and the high dimensionality, we do not compute a real estimate for the probability of the data to appear at sampling time but only an indicator. The indicator \(t^{\prime}\) has a high variance for large \(t\) if \(|N_{D}|\) is small due to the high stochasticity involved when sampling \(q_{M}(p|x_{t,p})\). Therefore, results with \(t^{\prime}\) close to 1 are hard to compare against each other. 
But as we have shown, these are the cases in which the models raise a privacy concern and direct sampling of \(\mathbf{x}_{p}\) is possible according to Tab. 1. ## 8 Conclusion In this work we have described scenarios in which training score-based models on personal identifiable information like image data can lead to data-sharing issues. By defining an adversarial that has prior information about a visual property of the data, we showed that training and publishing these models without care can lead to critical privacy breaches. To illustrate this, we have derived an indicator for the likelihood of reproducing training samples at test time. The results show that generative models \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multicolumn{2}{c}{Description} & \multicolumn{3}{c}{SAF Classification} & \multicolumn{5}{c}{Data Synthesis} \\ \hline Dataset & \(|N_{D}|\) & SAF (\%) & ID (\%) & FID\({}_{train}\) & FID\({}_{test}\) & \(\mathbb{E}(|q|)\) & \(|q|\) & \(t^{\prime}\) \\ \hline RetinaMNIST & 1080 & 100 & 99.6 & 5.9 & 19.7 & 46.3 & 52 & 0.998 \\ BloodMNIST & 11959 & 100 & 99.5 & 9.3 & 11. & 4.2 & 0 & 0.241 \\ ChestMNIST & 78468 & 99.93 & 99.8 & 3.3 & 3.9 & 0.6 & 0 & 0.206 \\ PneumoniaMNIST & 4708 & 100 & 99.8 & 9.5 & 28.4 & 10.6 & 2 & 0.719 \\ BreastMNIST & 546 & 100 & 98.7 & 9.2 & 62.6 & 91.6 & 57 & 0.886 \\ OrganSMNIST & 13940 & 99.47 & 99.8 & 19.6 & 19.7 & 3.6 & 0 & 0.582 \\ \hline \hline \end{tabular} \end{table} Table 2: Training results for different MedMNIST datasets. We report test accuracy for the SAF classifier but only training accuracy for the ID classifier as identification only makes sense if the sample was part of the training set. For the generative scores we use 50000 samples. trained on small datasets or long training times should not be readily shared. Larger dataset sizes, on the other hand, lead to the model ignoring and never reproducing the detectable fingerprints. In the future, we will work on using \(t^{\prime}\) in an adversarial fashion to train models that are explicitly taught not to sample from these regions in the representation space. **Acknowledgements:** The authors gratefully acknowledge the scientific support and HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universitat Erlangen-Nurnberg (FAU) under the NHR project b143dc PatROPRI. NHR funding is provided by federal and Bavarian state authorities. NHR@FAU hardware is partially funded by the German Research Foundation (DFG) - 440719683.
2309.00820
Jupiter Atmospheric Models and Outer Boundary Conditions for Giant Planet Evolutionary Calculations
We present updated atmospheric tables suitable for calculating the post-formation evolution and cooling of Jupiter and Jupiter-like exoplanets. These tables are generated using a 1D radiative transfer modeling code that incorporates the latest opacities and realistic prescriptions for stellar irradiation and ammonia clouds. To ensure the accuracy of our model parameters, we calibrate them against the measured temperature structure and geometric albedo spectrum of Jupiter, its effective temperature, and its inferred internal temperature. As a test case, we calculate the cooling history of Jupiter using an adiabatic and homogeneous interior and compare with extant models now used to evolve Jupiter and the giant planets. We find that our model reasonably matches Jupiter after evolving a hot-start initial condition to the present age of the solar system, with a discrepancy in brightness temperature/radius within two per cent. Our algorithm allows us to customize for different cloud, irradiation, and metallicity parameters. This class of boundary conditions can be used to study the evolution of solar-system giant planets and exoplanets with more complicated interior structures and non-adiabatic, inhomogeneous internal profiles.
Yi-Xian Chen, Adam Burrows, Ankan Sur, Roberto Tejada Arevalo
2023-09-02T04:16:54Z
http://arxiv.org/abs/2309.00820v1
# Jupiter Atmospheric Models and Outer Boundary Conditions for Giant Planet Evolutionary Calculations ###### Abstract We present updated atmospheric tables suitable for calculating the post-formation evolution and cooling of Jupiter and Jupiter-like exoplanets. These tables are generated using a 1D radiative transfer modeling code that incorporates the latest opacities and realistic prescriptions for stellar irradiation and ammonia clouds. To ensure the accuracy of our model parameters, we calibrate them against the measured temperature structure and geometric albedo spectrum of Jupiter, its effective temperature, and its inferred internal temperature. As a test case, we calculate the cooling history of Jupiter using an adiabatic and homogeneous interior and compare with extant models now used to evolve Jupiter and the giant planets. We find that our model reasonably matches Jupiter after evolving a hot-start initial condition to the present age of the solar system, with a discrepancy in brightness temperature/radius within two per cent. Our algorithm allows us to customize for different cloud, irradiation, and metallicity parameters. This class of boundary conditions can be used to study the evolution of solar-system giant planets and exoplanets with more complicated interior structures and non-adiabatic, inhomogeneous internal profiles. Jupiter, Gas giants, Exoplanets, Planet Atmospheres, Planet Evolution 0000-0002-2880-8088]Yi-Xian Chen 0000-0002-4073-2886]Adam Burrows 0000-0002-4173-0888]Ankan Sur 0000-0002-4188-7886]Roberto Tejada Arevalo ## 1 Introduction The cooling and contraction of the mostly convective interiors of giant planets occur after their formation by core accretion (Pollack et al., 1996; D'Angelo et al., 2003; Li et al., 2021) or disk instability (Boss, 1997), and is regulated by the radiative properties of the thin layers of atmosphere on the planets' surface. To determine the external boundary conditions for time-dependent evolutionary calculations, it is crucial to generate a set of model atmospheres at various values of surface gravity and internal emission and/or entropy. The interior models, which provide information on the thermodynamic properties and composition of the planet, can therefore be connected over time using boundary conditions obtained through interpolation from this grid of models. Pioneering investigations into the evolution of Jupiter and Saturn, such as Graboske et al. (1975), Pollack et al. (1977), and Hubbard (1977), were conducted before the discovery of hot Jupiters in 1995 (Mayor & Queloz, 1995). Subsequently, new sets of models were developed to cover a wider range of parameters motivated by the need to study exoplanets (Burrows et al., 1997, 2001), and include better prescriptions for irradiation (Sudarsky et al., 2000; Burrows et al., 2006a), opacities (Sharp & Burrows, 2007), and cloud formation (Burrows et al., 2006b). These improvements have resulted in more refined atmospheric boundary conditions for planetary evolution, including ones customized for specific extrasolar sources (e.g. Burrows et al., 2003) or solar system giant planets (Fortney et al., 2011). Emission spectra of gas giants produced in self-consistent atmosphere calculations can be directly compared with observed spectra of giant planets to constrain their properties (Sudarsky et al., 2003). 
The launch of JWST has ushered in a new era of exoplanet atmospheric detection with unprecedented precision (Carter et al., 2022; Miles et al., 2023), requiring more complete archives of reference theoretical spectra. In this sense, modeling the outer atmospheres of gas giants and generating tabulated spectra are critical for two main reasons. First, theoretical models are necessary to interpret important physical quantities, such as temperature, gravity, radius, composition, and albedo, from spectra obtained through direct detection. Second, theoretical spectral models inform observers concerning which regions of the spectrum are most likely to yield the most insight into a planet's atmosphere. Moreover, spectrum calculations provide more well-calibrated atmospheric boundary conditions, which can be combined with up dated knowledge of opacity and interior equations for evolutionary models that constrain the age of directly imaged planets. This paper is structured as follows: In SS2, we describe the numerical methods used to calculate our models, focusing on additions and alterations, such as ammonia clouds and irradiation, and compare to previous models for gas giant evolution (Burrows et al., 1997; Fortney et al., 2011). In SS3, we calibrate a fiducial set of numerical parameters, including irradiation redistribution and cloud modal size, based not only on Jupiter's metallicity (\(\sim\)3\(\times\) solar), gravity, and measured effective temperature, but also on observed albedo and temperature-pressure profiles. In SS4, we calculate the cooling of gas giants using a standard interior equation of state, and illustrate the effects of cloud and irradiation parameters on the evolution of gas-giant radii, the internal temperature (\(T_{\rm int}\)), and the effective temperature (\(T_{\rm eff}\)). Finally, in SS5, we discuss our conclusions and their implications for the next generation of giant-planet evolutionary models that incorporate more sophisticated thermal and compositional profiles and energy transport modalities. ## 2 Model Atmospheres We generate model atmospheres using the 1D atmosphere and spectral code CoolTLusty (Hubeny and Lanz, 1995; Sudarsky et al., 2003, 2005; Burrows et al., 2008). This code self-consistently calculates the spectrum and structure of a plane-parallel atmosphere with radiative transfer, given a detailed suite of opacity data compiled into thermochemical equilibrium tables (Burrows and Sharp, 1999; Sharp and Burrows, 2007). The opacities are treated using line-by-line sampling and have recently been significantly updated in Lacy and Burrows (2023) (see their Appendix A). Their strategy for calculating absorption cross sections is mainly guided by recent progress by the ExomolOP (Chubb et al., 2021) and EXOPLINES (Gharib-Nezhad et al., 2021) collaborations. A publicly available Fortran code, _exocross_1, was employed to compute absorption cross sections over a grid of temperatures, pressures, and wavenumbers. User-defined aspects of _exocross_ absorption cross-section calculations include line lists, line profiles with pressure-dependent broadening if desired (the Voigt profile is usually applied), line-wing cutoffs, and an optional line strength threshold. For each molecule, the line lists recommended by the ExoMol team were adopted, with additional updates to cover higher temperature and larger wavelength ranges. The most relevant updated molecules include water, ammonia, methane, and molecular hydrogen. 
The line list for molecular CH\({}_{4}\)(Yurchenko and Tennyson, 2014) is additionally supplemented at shorter wavelengths by cross sections inferred from Jupiter's spectrum in Karkoschka (1994). Footnote 1: [https://exocross.readthedocs.io/en/latest/](https://exocross.readthedocs.io/en/latest/) In our atmospheric calculations, we use 100 atmospheric layers, and 5000 frequency points spaced evenly in log from 0.5 to 300 \(\mu\)m. We neglect disequilibrium chemistry discussed in Lacy and Burrows (2023), but add treatments for stellar irradiation (SS2.2). For fiducial evolutionary calculations of a gas giant in isolation, the metallicity (\(Z\)), the intrinsic temperature (\(T_{\rm int}\); or equivalently the internal flux) and outer planet radius are needed to construct the thermal boundary condition and calculate the specific entropy at the base of the atmosphere's radiative zone. Inverting this table, \(T_{\rm int}\) becomes a function of the surface gravity and entropy per baryon (\(S\)) in the deep interior, sometimes parameterized by \(T_{10}\), the temperature that the interior isentrope would have if extrapolated to a pressure of 10 bars (Burrows et al., 1997; Hubbard et al., 1999; Fortney et al., 2011). When there is no external stellar irradiation source, the total effective temperature (\(T_{\rm eff}\)) is equal to \(T_{\rm int}\), which is why in studies of isolated gas giant evolution \(T_{\rm eff}\) is often simply used in the place of \(T_{\rm int}\)(e.g. Burrows et al., 1997). When stellar irradiation is taken into account, \(T_{\rm eff}\) will differ from \(T_{\rm int}\) due to the fraction of absorbed stellar irradiation by the planet. Though we do not use such a quantity, this fraction is often, though crudely, identified with a \(T_{\rm eq}\), such that \(T_{\rm eff}^{4}\) is set equal to \(T_{\rm eq}^{4}\) + \(T_{\rm int}^{4}\)(Saumon et al., 1996; Sudarsky et al., 2000). In CoolTLusty, the wavelength-dependent geometric albedos are calculated self-consistently and the absorbed stellar heat is intrinsically accounted for (Sudarsky et al., 2003; Burrows et al., 2006). Although only \(T_{\rm int}\) is relevant to the internal evolution, \(T_{\rm eff}\) can be obtained from the total emission and compared to measured values for Jupiter. Hence, when we calculate evolutionary tracks for \(T_{\rm int}\), we simultaneously obtain the associated \(T_{\rm eff}\) evolution, along with the frequency-dependent spectra. ### Cloud Models Since we focus on gas giants with effective temperatures of 80 to 200 K, ammonia clouds will form in the atmosphere and have an impact on their late-time evolution. Such clouds are a much-studied feature of both Jupiter and Saturn (Brooke et al., 1998; Sromovsky and Fry, 2018; de Pater et al., 2019). Water clouds appear earlier when the atmospheric temperatures go below \(\sim\)400 K, but are quickly buried deep early on. Young, more massive, giant planets should evince water clouds that for them would need to be included in evolutionary calculations. However, such is not the case for the current Jupiter and Saturn and we ignore them here. Our treatment of cloud opacity is the same as in Lacy & Burrows (2023), and we use the same cloud shape parameters (the compact "E"-type) and supersaturation factor (0.01). We adopt the Clausius-Clapeyron line for the base of the cloud. The cloud spatial distributions we employ are elaborated in earlier works that published cloudy atmospheric models using CoolTLUSTY (Burrows et al., 2006; Madhusudhan et al., 2011). 
We assume the cloud species to have a Deirmendjian size distribution (Deirmendjian, 1964): \[n(a)\propto\left(\frac{a}{a_{0}}\right)^{6}\exp\left[-6\left(\frac{a}{a_{0}} \right)\right]\,, \tag{1}\] with a default modal size (\(a_{0}\)) on the order of microns. A Deirmendjian particle size distribution reproduces that of the Earth's clouds for a \(a_{0}\) of 4\(\mu\)m. As we show below (SS3), \(a_{0}\sim 1\mu\)m is consistent with the Bond and geometric albedos of Jupiter. However, we also experiment with larger particle sizes to explore the model dependencies. For a given particle size distribution, the frequency-dependent absorption cross section \(\sigma(\nu,a_{0})\) is then calculated using Mie theory (Kerker, 1969) and converted into an opacity per unit mass (Lacy & Burrows, 2023). ### Irradiation We treat irradiation by a sun-like star using the approach found in Burrows et al. (2006a) and Burrows et al. (2008). The solar power contribution at 5.2AU is calculated by intercepting the incident solar flux with an area of \(\pi R_{J}^{2}\), multiplying by \(f=1/2\) to account for an average zenith angle of \(60^{\circ}\)(Appleby, 1986; Marley & McKay, 1999; Fortney et al., 2011). Note that though Burrows et al. (2008, see their Appendix D) proved that one should apply a factor of 2/3 in the limit of strongly irradiated Hot Jupiters, a factor of 1/2 may be more appropriate in the case of gas giants with moderate to large orbital distances. Therefore, the frequency-integrated stellar flux expressed via the \(H\)-moment is \[H_{\rm ext}=f\left(\frac{R_{*}}{d}\right)^{2}\frac{\sigma}{4\pi}T_{*}^{4}, \tag{2}\] where \(R_{*}\) is stellar radius, \(d\) is the star-planet distance, and \(T_{*}\) is the effective temperature of the stellar surface. Furthermore, we apply a redistribution parameter \(P_{\rm irr}\) to represent the fraction of the flux that is redistributed. Effectively, our treatment removes a fraction of the irradiation \(H_{\rm irr}=P_{\rm irr}H_{\rm ext}\) from the day side (and constitutes an energy input to the night side). Although all profiles we calculate in this paper belong to the day side, it's worth noting that conventional setups with no redistribution (\(P_{\rm irr}=0\)) imply zero redistributed heating to the night side. To redistribute heat from the day side, we add the additional sink term \(-D\) to the radiative transfer equation, expressed in terms of the integrated column mass of the atmosphere \(m\): \[D(m)=\frac{2H_{\rm irr}}{m_{1}-m_{0}}\frac{m_{1}-m}{m_{1}-m_{0}} \tag{3}\] where the parameters \(m_{1}\) and \(m_{0}\) are column masses corresponding to the limiting pressures \(P_{0}\) and \(P_{1}\) (the sink term is set to zero outside this mass/pressure range), such that \(D(m)\) linearly decreases with \(m\), achieving a value of zero at the bottom of the redistribution zone (\(P_{1}\)). For more details, see Appendices A & B of Burrows et al. (2008). Integrating \(D(m)\) over the column mass confined between the two limiting pressures gives \(H_{\rm irr}\). We restrict the redistribution altitudes to be between \(P_{0}=0.05\) and \(P_{1}=0.5\) bar as limiting pressures, guided by the temperature structure of Jupiter. As we show below, larger \(P_{\rm irr}\) leads to a steeper temperature inversion in the stratosphere, and we select a fiducial \(P_{\rm irr}\) of 0.15 based on calibration with measured Jupiter temperature-pressure profiles. 
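A small numerical sketch of Eqs. (2)-(3) follows: the incident \(H\)-moment for a Sun-like star at 5.2 AU and a linearly decreasing sink \(D(m)\) whose column integral recovers \(H_{\rm irr}=P_{\rm irr}H_{\rm ext}\). The constants, variable names, and the illustrative column-mass range are our own choices, not values from CoolTLusty.

```python
import numpy as np

SIGMA_SB = 5.670374419e-8        # W m^-2 K^-4
R_SUN = 6.957e8                  # m
AU = 1.495978707e11              # m

def h_ext(T_star=5777.0, R_star=R_SUN, d=5.2 * AU, f=0.5):
    """Frequency-integrated incident H-moment, Eq. (2)."""
    return f * (R_star / d) ** 2 * SIGMA_SB * T_star ** 4 / (4.0 * np.pi)

def redistribution_sink(m, m0, m1, H_irr):
    """Linearly decreasing sink D(m), Eq. (3); zero outside [m0, m1]."""
    D = 2.0 * H_irr / (m1 - m0) * (m1 - m) / (m1 - m0)
    return np.where((m >= m0) & (m <= m1), D, 0.0)

# Consistency check: the column integral of D(m) over [m0, m1] recovers H_irr.
H_irr = 0.15 * h_ext()                    # fiducial P_irr = 0.15
m0, m1 = 1.0, 10.0                        # illustrative column masses
m = np.linspace(m0, m1, 100_001)
D = redistribution_sink(m, m0, m1, H_irr)
integral = np.sum(0.5 * (D[1:] + D[:-1]) * np.diff(m))
print(integral / H_irr)                   # ~ 1.0
```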
## 3 Calibration with Jupiter ### Fitting the Atmospheric Temperature and Geometric Albedo Profiles Here, we determine a set of fiducial parameters for the incorporation of the effects of clouds and irradiation into our atmosphere boundary conditions by constraining our models to fit observation data, specifically the measured albedo and temperature-pressure (TP) profiles of Jupiter (Seiff et al., 1998; Karkoschka, 1998). For this purpose, we fix \(\log(g[{\rm cgs}])=3.4\) and the metal abundance (\(Z\)) at 3.16 times solar (Niemann et al., 1998). There are some uncertainties in the helium abundance measurements and the fraction may also vary with time in more modern evolutionary calculations, but we choose \(Y=0.25\) as the fiducial value and have tested that all outcomes are quite insensitive to _atmospheric_\(Y\) values between 0.22 and 0.28. In models with irradiation, we assume the central stellar source of irradiation has a blackbody spectrum with a temperature of \(T_{*}=5777\)K, radius of \(R_{\odot}\), and is a distance of \(d=5.2\)AU from the planet. Furthermore, as has been traditional in the literature, we employ the "pseudo" effective temperature \(\tilde{T}_{\rm eff}\) as a constraint to help determine consistent values of \(T_{\rm int}\) for each set of cloud or irradiation parameters. More specifically, we use as a fiducial \(\tilde{T}_{\rm eff}=125.57\pm 0.07\)K, based on Cassini CIRS and VIMS observations (Li et al., 2012). These authors found that while infrared emissions measured using the CIRS (Composite Infrared Spectrometer, \(>7\mu\)m) account for most of Jupiter's emissions, the emissions around 5\(\mu\)m measured in the VIMS band (Visual and Infrared Mapping Spectrometer, 4.5-5.5 \(\mu\)m) have a non-negligible \(\sim 1\%\) contribution to the total emission flux, whose specific fraction varies with latitude. To be consistent in matching this constraint, our \(\tilde{T}_{\rm eff}\) is defined as the sum of the thermal emission larger than 4.5 microns, to differentiate with \(T_{\rm eff}\), the total integrated brightness temperature in theoretical models. Generally, \(\tilde{T}_{\rm eff}\) varies within a factor of 2% from \(T_{\rm eff}\) and does not constitute a major difference from previous evolution models of irradiated Jupiters. Moreover, as we will elaborate below, only the tabulated values of convective zone entropy \(S\) will determine the evolution track, and not \(\tilde{T}_{\rm eff}\). For the purpose of generating an atmospheric boundary condition calibrated on measurements of Jupiter, we explore the parameter space of \(P_{\rm irr}\) and \(a_{0}\) of models with \(\tilde{T}_{\rm eff}\approx 125.57\)K, and converge (see below) on a best-fit model with \(P_{\rm irr}=0.15\) and \(a_{0}=1\mu\)m. During this process, the Temperature-Pressure(TP) profile measured by the _Galileo_ entry probe (Seiff et al., 1998), which boasts a characteristic temperature inversion, places a tight constraint on \(P_{\rm irr}\). This temperature inversion in the "stratosphere" of Jupiter is also seen in Voyager data (Lindal et al., 1981), and attributed to the interaction between alkanes and stellar irradiation (Yelle et al., 2001). The lower stratosphere, where the temperature profile is relatively smooth, is mainly heated by absorption of methane of sunlight in the near-IR wavelengths. In our modeling, the redistribution introduces extra effective cooling between 0.05 and 0.5 bars. 
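To fix conventions, the working definition of \(\tilde{T}_{\rm eff}\) used above (total thermal emission longward of 4.5 \(\mu\)m expressed as a brightness temperature) can be sketched numerically. The spectrum below is a placeholder blackbody on a log-spaced grid, not CoolTLusty output, and serves only to show the integration.

```python
import numpy as np

SIGMA_SB = 5.670374419e-8   # W m^-2 K^-4

def pseudo_teff(wavelength_m, flux_density, cut_um=4.5):
    """Brightness temperature of the emission longward of `cut_um` microns:
    sigma * T^4 = integral of F_lambda over lambda > cut_um."""
    sel = wavelength_m > cut_um * 1e-6
    lam, f = wavelength_m[sel], flux_density[sel]
    total = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(lam))   # trapezoidal rule
    return (total / SIGMA_SB) ** 0.25

# Placeholder spectrum: pi * B_lambda for a 125 K blackbody on a log grid.
lam = np.logspace(np.log10(0.5), np.log10(300.0), 5000) * 1e-6   # metres
h, c, k = 6.626e-34, 2.998e8, 1.381e-23
F = (2.0 * np.pi * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * 125.0))
print(pseudo_teff(lam, F))   # close to 125 K: little flux emerges below 4.5 microns
```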
In Figure 1, we plot the TP profiles of characteristic atmospheric models against the _Galileo_ entry probe data. For comparison, the blue line corresponds to a model with neither a cloud nor irradiation. In this model, \(T_{\rm int}=T_{\rm eff}\approx\tilde{T}_{\rm eff}\) and no temperature inversion is observed. The orange line shows that the irradiation-only model with \(P_{\rm irr}=0.15\) fits reasonably well with the entry probe measurements, and we found that \(T_{\rm int}\) needed to be \(\sim\)97 K to satisfy the \(\tilde{T}_{\rm eff}=125\) K constraint. This is slightly different from the 99 K estimation of Fortney et al. (2011), which may be a consequence of their neglect of redistribution. In Figure 1, we include three additional models with ammonia clouds of modal particle size \(a_{0}=1\mu\)m and different values of \(P_{\rm irr}\). Generally, at fixed \(\tilde{T}_{\rm eff}\), cloud existence or particle size does not have a strong impact on the TP profile since the stratosphere is not heated by clouds, as can be seen by comparing \(P_{\rm irr}=0.15\) models with and without ammonia clouds. However, the temperature profile transition at the "tropopause" is quite sensitive to \(P_{\rm irr}\). While for \(P_{\rm irr}=0.3\) the thermal inversion at \(\sim 0.5\) bar is too steep, it becomes too smooth without any redistribution cooling, and for \(P_{\rm irr}=0.15\) the TP profile fits well for all cloud particle sizes. Moreover, a value for \(T(1\)bar) of \(166\pm 1\)K, consistent with Galileo entry probe data (Seiff et al., 1998), is well reproduced and comports with other previous theoretical models of Jupiter's atmospheric thermal profile (Guillot et al., 2004). Note that our models still neglect other potentially important heating/cooling sources such as aerosols Zhang et al. (2015), wave-breaking (Young et al., 2005), etc., but we emphasize that we mainly aim to improve upon past published literature interested in boundary conditions for evolutionary models. To break the degeneracy in the selection of cloud modal particle size, the geometric albedo spectrum serves as an additional useful constraint. Inspired by the conclusion that 1-10 micron NH\({}_{3}\) ice particles fit the Infrared Space Observatory data (Brooke et al., 1998), we experimented with clouds with modal particle size (\(a_{0}\)) of 1, 3, and 10 microns. We note there are more recent studies, based on New Horizons LEISA (Linear Etalon Imaging Spectral Array) data, arguing that the characteristic particle size is more likely between \(\sim\)2 and 4 microns (Sromovsky & Fry, 2018), with a relatively wide distribution, and also that NH\({}_{4}\)SH solids can contribute to the clouds. Exploring this relevant range of parameters, we compare our geometric albedo spectrum with the high resolution sub-micron geometric albedo spectrum from the European Southern Observatory (Karkoschka, 1998). Our results for different models and particle sizes are shown in Figure 2, plotted against the data of Karkoschka (1998). We note that for the 1-micron case our geometric albedo is consistent with observations for \(\gtrsim 0.5\mu m\). However, the irradiation-only model (red solid line) seriously underestimates the geometric albedo profile; even the overall shape cannot be matched. Note that we do not attempt to model the extra scatterer/chromophores (Carlson et al., 2016) below \(0.5\mu\)m wavelength, about which there are significant uncertainties (Lacy et al., 2019). 
For cloud sizes of 3 and 10 microns, although the change in particle size does not affect the temperature profile, the geometric albedo spectrum itself is slightly below the observations. In conclusion, we find that the general geometric albedo spectrum can be reproduced only with the inclusion of clouds and that its values sensitively increase with decreasing cloud particle size. We also find that for \(a_{0}=1\mu\)m the albedo spectrum matches well the measured values in Karkoschka (1998). For this set of parameters, the internal temperature (\(T_{\rm int}\)) at a \(\tilde{T}_{\rm eff}\) of \(\sim 125.57\) K is constrained to be \(\approx\)102 K, in contrast with a \(T_{\rm int}\) of \(\approx 97\) K for the no-cloud case. ### A Discussion Concerning the Bond Albedo In theory, one can also relate the scattered/reflected fraction of solar irradiation with an effective average Bond albedo \(A_{B}\), where the fraction that contributes to \(\sigma T_{\rm eff}^{4}\) is \(1-A_{B}\). Li et al. (2018) analyzed the Cassini VIMS and ISS (Imaging Science Subsystem) data to estimate a Bond albedo of 0.503\(\pm\)0.012, which is significantly larger than the previous adopted values around 0.343 (Conrath et al., 1989; Fortney & Hubbard, 2003). By applying the incident solar flux at Jupiter's orbital radius, they derive an internal flux of 7.485 \(\pm\) 0.163 W/m\({}^{2}\), consistent with Jupiter's total emitted power of 14.098\(\pm\)0.031 W/m\({}^{2}\) (from the earlier measurement of \(\tilde{T}_{\rm eff}=125.57\pm 0.07\)K in Li et al. (2012)) and corresponding to a \(T_{\rm int}\) of \(\approx 107\) K. As we noted, our cloudless model with or without redistribution, as well as the cloudless Jupiter model of Fortney et al. (2011), constrained by \(\tilde{T}_{\rm eff}\approx 125\) K, both give \(T_{\rm int}\lesssim 100\)K, which is inconsistent with the latest measurements of Jupiter's heat balance. As a matter of fact, this discrepancy implies there should be significant cloud effects at work scattering away a larger fraction of incident irradiation. However, we caution that given our planar calculations we have no associated phase integrals to directly provide the Bond albedo from the geometric albedo spectrum. The spectrum-integrated Bond albedo can be calculated from a spectroscopic and atmospheric model by integrating the monochromatic spherical albedo, weighted by the stellar flux \(F_{*}(\lambda)\): \[A_{B}=\frac{\int F_{*}(\lambda)A_{s}(\lambda)d\lambda}{\int F_{*}(\lambda)}\,, \tag{4}\] where \(A_{s}(\lambda)\) is the spherical albedo (monochromatic Bond albedo). The spherical albedo and monochromatic Bond albedo are the same and given by the product of \(A_{G}(\lambda)\) and the phase integral (\(q(\lambda)\)). Due to the stellar flux weighting, the optical wavelengths dominate the total Bond albedo. However, \(q(\lambda)\) cannot be self-consistently determined from a 1D planar model, and the scattering of light over all phase angles must be studied in 2D (Marley & McKay, 1999; Cahoy et al., 2010; Madhusudhan & Burrows, 2012). Li et al. (2018) measured the realistic \(q(\lambda)\) for Jupiter, averaged over all phase angles, to be \(\approx 1.3\) in the optical wavelength range of interest (see their Figure 3). 
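As a numerical illustration of Eq. (4) and of the role of the phase integral, the sketch below weights a wavelength-dependent spherical albedo \(A_{s}(\lambda)=q(\lambda)A_{G}(\lambda)\) by a blackbody approximation to the stellar flux. The flat albedo and constant \(q=1.3\) used here are toy placeholders, not model output.

```python
import numpy as np

def bond_albedo(lam_um, A_g, q, T_star=5777.0):
    """Spectrum-integrated Bond albedo, Eq. (4), with A_s = q * A_g and the
    stellar flux approximated by a blackbody at T_star."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    lam = lam_um * 1e-6
    F = (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T_star))
    A_s = q * A_g
    num = np.sum(0.5 * (F[1:] * A_s[1:] + F[:-1] * A_s[:-1]) * np.diff(lam))
    den = np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(lam))
    return num / den

# Toy example: a flat geometric albedo of 0.39 with q = 1.3 gives A_B ~ 0.5,
# the order of the Li et al. (2018) value quoted in the text.
lam_um = np.linspace(0.3, 5.0, 2000)
print(bond_albedo(lam_um, A_g=np.full_like(lam_um, 0.39), q=np.full_like(lam_um, 1.3)))
```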
This means that in realistic multidimensional models, we should expect a smaller fraction of the irradiation to contribute to the planet emissions calculated in 1D; hence, it's natural that the internal \(T_{\rm int}\) is larger in their measurements than in our fiducial model (which boasts a consistent geometric albedo spectrum but lacks the phase integral). We note that, informed by their model, if we multiply our \(A_{G}\) by their factor of 1.3 to mimic a spherical albedo, we obtain a consistent frequency-integrated value for the Bond albedo of 0.5. However, since the phase integral is measured only for Jupiter and subject to many uncertainties, we make no attempt to reconcile this mismatch with artificial treatments that could lead to significant confusion 2. The scattering phase function can be treated realistically only with multi-dimensional radiative transfer simulations. These uncertainties and caveats not withstanding, we conclude that our choice of fiducial parameters: \(P_{\rm irr}=0.15\) and \(a_{0}=1\mu\)m (with both irradiation and ammonia clouds included) suitably reproduces the observation data and is sufficient to provide practical atmosphere boundary conditions for the evolution of Jupiter-like planets. Since redistribution and clouds seem necessary to reproduce temperature inversions and the geometric albedo spectrum of Jupiter, the inclusion of these effects is an improvement over earlier realizations of atmospheric boundary conditions for evolutionary calculations (Burrows et al., 1997; Fortney et al., 2011). Footnote 2: However, see the next section for tests varying \(f\) that might inform the general dependence of \(T_{\rm int}\) on the phase integral. To demonstrate its structure, in Figure 3 we plot the atmospheric entropy per baryon at depth as a function of either \(T_{\rm int}\) or \(\tilde{T}_{\rm eff}\) and surface gravity using our fiducial boundary table. The entropy surface in the \(T_{\rm int},g\) plane is shown in blue (with wireframe) and the entropy surface in the \(\tilde{T}_{\rm eff},g\) plane is shown in red. With a planetary evolution code one follows a gas giant along entropy surfaces from high entropy and low \(g\) (upper left) to low entropy and high \(g\) (lower right), interpolating the table within grid points. It's apparent that at high entropy and temperature the surfaces converge, while at low temperature \(\tilde{T}_{\rm eff}\) starts to deviate from \(T_{\rm int}\) due to the increasing contribution of irradiation. In the remainder of this paper, we will omit the tilde symbol in \(\tilde{T}_{\rm eff}\) for simplicity. ## 4 Homogeneous Adiabatic Evolutionary Calculations In traditional adiabatic cooling calculations, the internal flux and entropy are related by energy conservation: \[\frac{dL}{dm}=-T\frac{dS}{dt}, \tag{5}\] where \(L\) is the luminosity, \(S\) is the specific entropy per mass, and \(t\) is the time. At a given time, \(S\) would be the same throughout an adiabatic envelope and independent of radius. \(m\) is the integrated mass (\(\int^{r}4\pi r^{\prime 2}\rho(r^{\prime})dr^{\prime}\)) and \(T\) is the local temperature, both functions of shell radius \(r\). From this equation we can relate the timestep between two models \(\Delta t\) that differ in entropy by \(\Delta S\). 
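A schematic sketch of one such entropy step is given below, assuming a tabulated \(T_{\rm int}(S,g)\) surface like that of Figure 3 and a precomputed interior integral \(\int T\,dm\); both are stand-in inputs with hypothetical names, not the actual evolution code.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

SIGMA_SB = 5.670374419e-8  # W m^-2 K^-4

def build_tint_lookup(S_grid, g_grid, T_int_table):
    """Interpolator for T_int(S, g) from a tabulated atmosphere grid."""
    return RegularGridInterpolator((S_grid, g_grid), T_int_table)

def entropy_step(S, dS, g, R, int_T_dm, tint_of):
    """One step of dL/dm = -T dS/dt integrated over the envelope:
    L = 4 pi R^2 sigma T_int^4 and dt = -(integral of T dm) * dS / L."""
    T_int = float(tint_of([[S, g]])[0])
    L = 4.0 * np.pi * R**2 * SIGMA_SB * T_int**4
    dt = -int_T_dm * dS / L          # dS < 0 while cooling, so dt > 0
    return dt, T_int

# Toy usage with a made-up 2x2 table, purely to show the call pattern.
lookup = build_tint_lookup(np.array([6.0, 8.0]), np.array([1e3, 1e4]),
                           np.array([[90.0, 100.0], [150.0, 160.0]]))
dt, T_int = entropy_step(S=7.0, dS=-1e-3, g=2.5e3, R=7.0e7,
                         int_T_dm=1e31, tint_of=lookup)
```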
The luminosity (\(L\)) is derived physically from the planet interior and is equal to \(4\pi R^{2}\sigma T_{\rm int}^{4}\), where the value of \(T_{\rm int}\) at a given \(S\) and surface gravity is inverted from a model atmosphere table (see Figure 3). \(T_{\rm eff}\) at a given time is also inverted from the model atmosphere table, but does not go into the calculation of the variation of internal entropy and the evolution of the temperature profile with time. In Figure 4, we plot the evolutionary tracks of a "hot-start" Jupiter for our fiducial boundary conditions, as well as those from the Burrows et al. (1997) and Fortney et al. (2011) models. All models used the Saumon et al. (1995) equation of state. The left panel shows \(T_{\rm int},\tilde{T}_{\rm eff}\) and the right panel shows the radius evolution. The Fortney et al. (2011) evolution track for Jupiter is taken from their Figure 7, for which they included a \(10M_{\odot}\) metal core and approximately \(15M_{\odot}\) of extra metallicity in the form of water in the H/He adiabatic envelope. In their paper, they also used their adiabatic evolution code to calculate the evolution of Jupiter with Burrows et al. (1997) tables that are for \(Z=Z_{\odot}\) and do not include irradiation. They obtain a low \(T_{\rm int}\) of \(\approx 100\)K at 4.56 Gyrs (see their Figure 7) 3. However, we are unable to reproduce this result from Fortney et al. (2011) using the Burrows et al. (1997) boundary condition table at our disposal for any equivalent core/heavy-element mass between 0 to \(30M_{\oplus}\). Instead, in Figure 4, we show an evolution track produced by our code with \(25M_{\oplus}\). This model yields at 4.567 Gyrs a \(T_{\rm int}\) 110 K, which is larger than the \(T_{\rm int}\) from Fortney et al. (2011) models, but consistent with earlier calculations performed by CoolTLusty. Footnote 3: Fortney et al. (2011) created effective temperatures for the Burrows et al. (1997) boundary condition table through post-processing of irradiation, assuming a constant Bond albedo of 0.343. To avoid confusion, we do not adopt this tactic. Hence, no corresponding \(\tilde{T}_{\rm eff}\)s are plotted for the Burrows et al. (1997) calculation. Nevertheless, with our fiducial table, and a core mass of \(25M_{\oplus}\), we obtain \(\tilde{T}_{\rm eff}\sim 127K\) at 4.56 Gyrs. While \(T_{\rm int}\) in our calculation is smaller than found using the Burrows et al. (1997) table, the total emission or effective temperature is reasonably close to the Li et al. (2018) estimation. Moreover, adding the fiducial core mass has a pronounced influence on the radius evolution. In the right panel of Figure 4, we also compare the evolution of planetary radius with (solid line) and without (dashed line) the core mass for the fiducial boundary condition table. Generically, with a metal core or equivalent heavy-element mass in the envelope, the gas giant has a larger mean density and is more compact. Figure 1: Comparison of the temperature-pressure profiles of different Jupiter models at \(\tilde{T}_{\rm eff}=125\)K. Only the isolated (no irradiation or cloud) models show no temperature inversion. All other models are irradiated at 5.2 A.U., either without clouds or with clouds of modal particle size of \(1\mu\)m. We observe that \(P_{\rm irr}\) is closely related to the temperature valley at the inversion around 0.1 bar, and that \(P_{\rm irr}=0.15\) best reproduces the observational data. 
Once the redistribution factor is set, the existence of clouds does not alter the temperature-pressure profile. Other particle sizes also have TP profiles that nearly coincide with the green line. The Galileo entry probe data from Seiff et al. (1998) is plotted in black.

Figure 2: Geometric albedo spectra for different irradiated atmospheric models without clouds (red) or with clouds of different modal particle sizes, fixing \(P_{\rm irr}=0.15\). The black spectrum is from Karkoschka (1998).

### Dependence on the \(f\) factor

In the fiducial models, \(f=1/2\) is adopted to approximate the average zenith angle for a moderately irradiated planet. For strong irradiation, this factor approaches 2/3 (Burrows et al., 2008), so there is a degree of freedom here that is not constrained by planar models. In the left panel of Figure 5, we compare \(T_{\rm int}\) and \(T_{\rm eff}\) evolutionary tracks using \(f=0.67\) and our fiducial tables (keeping \(a_{0}=1\mu\)m). We find that the \(T_{\rm int}\) and \(T_{\rm eff}\) values at the current age of Jupiter increase and decrease, respectively, with decreasing \(f\), such that low \(f\) evolutionary curves are generally anchored between those of high \(f\), although there are some sharp non-linear transitions at \(T_{\rm int}\sim\)150-200K due to the onset of ammonia clouds. This is consistent with the expectation that these temperatures should converge towards the isolated planet case with \(T_{\rm eff}=T_{\rm int}\) (similar to the Burrows et al. (1997) evolutionary curve) when \(f\) approaches zero (i.e., when irradiation is ignored). Interestingly, a decrease in \(f\) may also generally represent the effect of an extra scatterer (SS3.2). If the fact that \(A_{B}>A_{G}\) is a consequence of \(q>1\), then one can decrease \(f\) by a factor of \(\approx(A_{B}-A_{G})/(1-A_{G})\) to account for this reduction. This will result in a larger \(T_{\rm int}\) at 4.56 Gyrs, in the direction of being slightly more consistent with Li et al. (2018). However, the inclusion of a "phase-integral correction" in \(f\) is quite artificial, and we certainly do not expect \(q\) or this modification of the scattering fraction to be constant in time. Nevertheless, this crude \(f\) pseudo-dependence illuminates to zeroth-order the dependence of \(T_{\rm eff}\) and \(T_{\rm int}\) on the phase integral, such that for multi-dimensional radiative transfer codes that do include this effect, we expect evolutionary tracks to move slightly closer to the observations.

Figure 3: The specific entropy in the convective zone under the atmosphere's thin radiative layer, plotted as a function of gravity and effective temperature \(\tilde{T}_{\rm eff}\) (red) or internal temperature \(T_{\rm int}\) (blue) for our fiducial model (\(f=0.5,P_{\rm irr}=0.15,a_{0}=1\mu\)m). At low temperatures, \(\tilde{T}_{\rm eff}\) starts to become significantly larger than \(T_{\rm int}\) due to the increasing contribution of stellar irradiation.

### Dependence on Particle Size

In the right panel of Figure 5, we display the evolution of \(T_{\rm int}\) and \(T_{\rm eff}\) for fiducial and \(a_{0}=3\mu\)m models (fixing \(f=1/2\)). Just as with the dependence on smaller \(f\), for smaller particle size, \(T_{\rm int}\) at 4.56 Gyrs is larger, but \(T_{\rm eff}\) is not necessarily also larger; this results from the smaller contribution from irradiation due to larger albedo. For larger particle sizes, the two temperatures tend to diverge, and the contrast between \(T_{\rm int}\) and \(T_{\rm eff}\) becomes large.
In the small-particle-size limit, for a negligible irradiation contribution (when incident radiation is completely scattered), we expect these temperatures to converge, similar to what is found in the \(f\to 0\) limit.

### Dependence on Abundances

All of the models above are for \(Y=0.25\) and \(Z=3.16Z_{\odot}\) in the atmosphere. We also compare evolutionary tracks with \(Y=0.25,Z=10Z_{\odot}\) and \(Y=0.22,Z=3.16Z_{\odot}\) and find that varying \(Y\) in the atmosphere has little or no effect on the planetary evolution. However, an increase in \(Z\) leads to an increase in \(T_{\rm int}\) and \(T_{\rm eff}\) and slows planetary cooling by raising the atmospheric opacity. This effect arises even at earlier times, unlike the effect of ammonia clouds, which appears only at late stages. Note again that these abundance variations are in the atmosphere, and that our internal adiabats still have \(Y=0.27,Z=3.16Z_{\odot}\) fixed. The dependence of gas-giant evolution on internal compositions and their profiles (e.g. effect of helium depletion) is properly a subject of future work.

Figure 4: Left panel: Evolution of internal brightness temperature \(T_{\rm int}\) and emission temperature \(T_{\rm eff}\) for our fiducial Jupiter model (blue) evolved in our adiabatic code, the Fortney et al. (2011) boundary condition (taken from their Figure 7), and the Burrows et al. (1997) boundary condition evolved in our adiabatic code. Right panel: the evolution of the model Jupiter radius. An extra evolutionary track for zero core mass is plotted with a dashed line to show that a solid core is needed to make the planet more compact.

Figure 5: Left panel: Jupiter’s temperature evolution comparing different zenith angle factors \(f\); Right panel: Jupiter’s temperature evolution comparing different cloud particle size parameters \(a_{0}\).

## 5 Conclusion

In this study, we develop a set of atmospheric models using the 1D atmosphere and spectral code CoolTLusty. Our goal is to create state-of-the-art boundary condition tables that could be used for studying the evolution of gas giant planets. The atmospheric opacities employed were significantly updated by Lacy & Burrows (2023) and we implement realistic treatments of clouds and irradiation to calibrate our parameters with the observed temperature structure and albedo spectrum of Jupiter. We simulated the internal evolution of a "hot-start" Jupiter using this set of tables in the context of the traditional adiabatic paradigm. This approach is being challenged by the new Juno data (Wahl et al., 2017; Bolton et al., 2017), but the viability of these atmospheric boundary conditions for any evolutionary model is not compromised. We find that with reasonable irradiation and cloud parameters we obtain an atmospheric boundary condition table that cools down a "hot-start" Jupiter to close to its current measured thermal state with its measured geometric albedos. In addition to providing a useful atmosphere reference model for future giant-planet cooling calculations, we explored the dependence on various physical parameters. The average zenith angle parameter \(f\) and cloud modal particle size \(a_{0}\) have particular sway over the influence of irradiation. If the absorption fraction of irradiation is very high, then the effective temperature \(T_{\rm eff}\) may not be able to cool down to near the measured value of \(\sim\)125 K at Jupiter's current age.
However, by using fiducial sets of cloud parameters to ensure a reasonable geometric albedo spectrum, we observed that \(T_{\rm eff}\) cools down faster than observed using previous approaches (Burrows et al., 1997; Fortney et al., 2011), better matching the surface observations of Jupiter. The average zenith angle also has an impact, and for a slight reduction in \(f\), \(T_{\rm eff}\) also cools down a bit faster. In addition, the atmospheric metallicity affects the cooling rate, with a higher \(Z\) raising the atmosphere opacity and resulting in slower cooling. The helium fraction \(Y\) in the atmosphere has a minimal effect. For this study, we fixed the stellar luminosity and implemented only ammonia clouds. Nevertheless, we believe incorporating time-changing stellar insolation and allowing for the formation of water clouds at higher atmospheric temperatures should have a less significant effect than ammonia clouds, since a giant's initial evolution across this parameter space is much more rapid than late-stage cooling. Our collection of tables, calibrated using data from Jupiter and covering a relatively broad range of parameters that includes varying \(Z\), \(a_{0}\), and \(f\), serves as a foundational resource for advancing our understanding of the evolution of Jupiter-like exoplanets and for modeling their spectral evolution. This new boundary-condition dataset is available to the scientific community for future research into the evolution of both solar-system giants and giant exoplanets. Nevertheless, the effects of the composition gradients inferred from Juno data (Wahl et al., 2017; Bolton et al., 2017) and of helium rain (Stevenson, 1975; Stevenson & Salpeter, 1977; Fortney & Hubbard, 2004; Mankovich et al., 2016; Mankovich & Fortney, 2020), as well as updates to the H/He equation of state (Nettelmann et al., 2012; Militzer & Hubbard, 2013; Miguel et al., 2016; Howard et al., 2023; Howard & Guillot, 2023), demand a more generalized view beyond the adiabatic paradigm. We used the latter here merely to test our atmosphere calculations in light of previous work. In summary, these new boundary tables and atmospheres are meant to support the next generation of comprehensive giant planet models, which is already well underway (Nettelmann et al., 2015; Pustow et al., 2016; Vazan et al., 2016; Mankovich et al., 2016; Vazan et al., 2018; Mankovich & Fortney, 2020; Miguel & Vazan, 2023). We thank Brianna Lacy for updated opacity tables and helpful discussions. Funding (or partial funding) for this research was provided by the Center for Matter at Atomic Pressures (CMAP), a National Science Foundation (NSF) Physics Frontier Center, under Award PHY-2020249. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect those of the National Science Foundation. The atmosphere models were computed with CoolTLusty (Hubeny & Lanz, 1995; Sudarsky et al., 2000, 2003, 2005; Burrows et al., 2008). Relevant boundary condition entropy tables used for generating our figures are presented at [https://doi.org/10.5281/zenodo.8297690](https://doi.org/10.5281/zenodo.8297690).
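As a back-of-the-envelope companion to the parameter study above, the following minimal sketch (illustrative only, and not the CoolTLusty boundary-condition calculation) evaluates the zeroth-order budget \(T_{\rm eff}^{4}\simeq T_{\rm int}^{4}+T_{\rm irr}^{4}\), where \(T_{\rm irr}\) parametrizes the absorbed stellar flux through a generic absorbed-flux fraction and a Bond albedo. The fraction `f_abs` below is only loosely analogous to the zenith-angle factor \(f\) in the text, and all numerical inputs are assumed, round, Jupiter-like values.

```python
# Minimal, zero-dimensional energy-balance sketch (illustrative only; this is
# not the CoolTLusty calculation).  All inputs are assumed, round values.
import numpy as np

T_SUN, R_SUN = 5772.0, 6.957e8     # solar effective temperature [K] and radius [m]
A_ORB = 7.78e11                    # Jupiter's orbital distance [m]

def t_irr(bond_albedo, f_abs):
    """Irradiation temperature for an absorbed-flux fraction f_abs*(1 - A_B).

    f_abs = 1/4 corresponds to full redistribution of the intercepted sunlight;
    it is only loosely analogous to the zenith-angle factor f in the text.
    """
    return T_SUN * (f_abs * (1.0 - bond_albedo)) ** 0.25 * np.sqrt(R_SUN / A_ORB)

def t_eff(t_int, bond_albedo, f_abs):
    """Zeroth-order effective temperature: T_eff^4 = T_int^4 + T_irr^4."""
    return (t_int**4 + t_irr(bond_albedo, f_abs)**4) ** 0.25

t_int = 100.0                      # assumed internal temperature [K]
for a_b, f_abs in [(0.34, 0.25), (0.50, 0.25), (0.34, 0.10)]:
    print(f"A_B={a_b:.2f}, f_abs={f_abs:.2f}: "
          f"T_irr={t_irr(a_b, f_abs):5.1f} K, T_eff={t_eff(t_int, a_b, f_abs):5.1f} K")
# Raising the albedo or lowering the absorbed fraction reduces T_irr and pulls
# T_eff toward the isolated-planet limit T_eff -> T_int.
```

With an assumed \(T_{\rm int}=100\) K and \(A_{B}\approx 0.34\), the absorbed sunlight alone contributes roughly 110 K, so \(T_{\rm eff}\) sits near 125 K; raising the albedo or lowering the absorbed fraction pulls \(T_{\rm eff}\) toward \(T_{\rm int}\), which is the qualitative behaviour of the albedo and \(f\) dependences discussed above.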
2304.00604
Investigation of the strange pentaquark candidate $P_{ψ s}^Λ(4338){}^0$ recently observed by LHCb
The recently observed strange pentaquark candidate, $P_{\psi s}^{\Lambda}(4338){}^0$, is investigated to provide information about its nature and substructure. To this end, its mass and width through the decay channels $P_{\psi s}^{\Lambda}(4338){}^0 \rightarrow J/\psi \Lambda$ and $P_{\psi s}^{\Lambda}(4338){}^0 \rightarrow \eta_c \Lambda$ are calculated by applying two- and three-point QCD sum rules, respectively. The state is considered as a $\Xi_c\bar{D}$ meson-baryon molecular structure with spin-parity quantum numbers $J^P=\frac{1}{2}^-$. The obtained mass, $m_{P_{\psi s}^{\Lambda}(4338){}^0}=4338\pm 130~\mathrm{MeV}$, and width, $\Gamma_{P_{\psi s}^{\Lambda}(4338){}^0}= 10.40\pm 1.93~\mathrm{MeV}$, are consistent with the experimental data within the presented uncertainties. This allows us to assign a $\Xi_c\bar{D}$ molecular structure of $J^P=\frac{1}{2}^-$ for the $P_{\psi s}^{\Lambda}(4338){}^0$ state.
K. Azizi, Y. Sarac, H. Sundu
2023-04-02T19:14:00Z
http://arxiv.org/abs/2304.00604v2
Investigation of the strange pentaquark candidate \(P^{\Lambda}_{\psi s}(4338)^{0}\) recently observed by LHCb ###### Abstract The recently observed strange pentaquark candidate, \(P^{\Lambda}_{\psi s}(4338)^{0}\), is investigated to provide information about its nature and substructure. To this end, its mass and width for the decay channel \(P^{\Lambda}_{\psi s}(4338)^{0}\to J/\psi\Lambda\) are calculated by applying two- and three-point QCD sum rule methods, respectively. The state is considered as a \(\Xi_{c}\bar{D}\) meson-baryon molecular structure with spin-parity quantum numbers \(J^{P}=\frac{1}{2}^{-}\). The obtained mass and width are \(m_{P^{\Lambda}_{\psi s}(4338)^{0}}=4338.27\pm 129.99\) MeV and \(\Gamma(P^{\Lambda}_{\psi s}(4338)^{0}\to J/\psi\Lambda)=(7.22\pm 1.78)\) MeV, which are consistent with the experimental data. This allows us to assign a \(\Xi_{c}\bar{D}\) molecular structure of \(J^{P}=\frac{1}{2}^{-}\) for the \(P^{\Lambda}_{\psi s}(4338)^{0}\) state. ## I Introduction Exotic states such as pentaquarks and tetraquarks have been a focus of investigations in particle physics since the proposal of the quark model [1]. Because their existence was prohibited neither by the quark model nor by QCD, they attracted attention from the beginning and were investigated extensively for a long time. Finally, expectations were fulfilled and the announcement of the first observation of such a state was made in 2003 for a tetraquark state, \(X(3872)\), by the Belle Collaboration [2]. Later, the confirmation of this state came from various collaborations [3; 4; 5; 6; 7; 8]. In 2015, a different member of the exotic states, namely a pentaquark state containing five valence quarks, was announced to be observed by the LHCb Collaboration [9]. The two states, \(P_{c}(4380)\) and \(P_{c}(4450)\), were observed in the \(J/\psi+p\) decay channel [9] and later, in 2019, the analyses with a larger data sample revealed that the previously announced \(P_{c}(4450)\) state split into two states, \(P_{c}(4440)\) and \(P_{c}(4457)\), and another peak, \(P_{c}(4312)^{+}\), also came into sight [10]. The reported resonance parameters for these states were as follows [9; 10]: \(m_{P_{c}(4380)^{+}}=4380\pm 8\pm 29\) MeV, \(\Gamma_{P_{c}(4380)^{+}}=205\pm 18\pm 86\) MeV, \(m_{P_{c}(4440)^{+}}=4440.3\pm 1.3^{+4.1}_{-4.7}\) MeV, \(\Gamma_{P_{c}(4440)^{+}}=20.6\pm 4.9^{+8.7}_{-10.1}\) MeV, \(m_{P_{c}(4457)^{+}}=4457.3\pm 0.6^{+4.1}_{-1.7}\) MeV, \(\Gamma_{P_{c}(4457)^{+}}=6.4\pm 2.0^{+5.7}_{-1.9}\) MeV, \(m_{P_{c}(4312)^{+}}=4311.9\pm 0.7^{+6.8}_{-0.6}\) MeV and \(\Gamma_{P_{c}(4312)^{+}}=9.8\pm 2.7^{+3.7}_{-4.5}\) MeV. In 2021 and 2022, reports of two more pentaquark states appeared, one of which possesses a strange quark. These two states, \(P_{cs}(4459)\)[11] and \(P_{c}(4337)\)[12], were reported to have the following masses and widths: \(m_{P_{cs}(4459)^{0}}=4458.8\pm 2.9^{+4.7}_{-1.1}\) MeV, \(\Gamma_{P_{cs}(4459)^{0}}=17.3\pm 6.5^{+8.0}_{-5.7}\) MeV and \(m_{P_{c}(4337)^{+}}=4337^{+7+2}_{-4-2}\) MeV, \(\Gamma_{P_{c}(4337)^{+}}=29^{+26+14}_{-12-14}\) MeV. The experimental observations of these non-conventional states have increased the theoretical interest in them and triggered extensive theoretical investigations of their identification and various properties. Their substructures remain obscure, which has motivated many efforts to explain this point by assigning them to be either molecules or compact states. In Refs.
[13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34] the pentaquark states were investigated by taking their substructure as diquark-diquark-antiquark or diquark-triquark forms. Owing to their proximity to the relevant meson baryon threshold and small widths, the molecular structure has been another commonly considered structure for the pentaquark states. With molecular structure assumption, the properties of these states, such as their mass spectrum and various interactions, were investigated with the application of different approaches including the contact-range effective field theory [35; 36; 37; 38], the effective Lagrangian approach [39; 40; 41; 42; 43], the QCD sum rule method [44; 45; 46; 47; 48; 49; 50; 51; 52; 53], one-boson exchange potential model [54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65] and quasipotential Bethe-Salpeter [66; 67; 68; 69]. Besides, one can find other works in Refs.[70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 82; 84; 87; 88; 89; 91; 85; 89; 92; 86; 87; 88; 89; 93; 94; 95; 96; 97; 98; 99; 100; 101] and the references therein adopting the molecular interpretation for pentaquark states. They were also investigated with the possibility that they were arising from kinematical effects [102; 103; 104; 105; 106]. Though there exist so many works over them, they were in need of many more to clarify or support their still uncertain properties. On the other hand, the possible pentaquark states other than the observed ones and possessing strange, bottom or charm quarks were also quoted for with their expectation to be observed in the future [107; 108; 109; 110; 111; 112; 113; 114; 115; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51]. Among these pentaquark states, the present work focuses on the one which was observed very recently by the LHCb collaboration [152] in the amplitude analyses of \(B^{-}\to J/\psi\bar{p}\) decay. The measured mass and width for the state, which was labeled as \(P^{\Lambda}_{\psi s}(4338)^{0}\), were reported as \(m_{P^{\Lambda}_{\psi s}}=4338.2\pm 0.7\pm 0.4\,\mathrm{MeV}\) and \(\Gamma_{P^{\Lambda}_{\psi s}}=7.0\pm 1.2\pm 1.3\,\mathrm{MeV}\), respectively with the preferred spin-parity quantum numbers \(J^{P}=\frac{1}{2}^{-}\). Having a mass and narrow width in consistency with meson-baryon molecular interpretation this structural form is adopted in Refs. [85; 88]. In Ref. [85], a coupled-channel calculation was applied considering molecular states and the results obtained for \(\Xi_{c}\bar{D}\) interaction indicated a wider peak than the observed one in the experiment. With the constituent quark model formalism \(P^{\Lambda}_{\psi s}(4338)^{0}\) was suggested to be a baryon-meson molecule state with \((I)J^{P}=(0)\frac{1}{2}^{-}\) and mass and width \(m_{P^{\Lambda}_{\psi s}(4338)^{0}}=4318.1\) MeV and \(\Gamma_{P^{\Lambda}_{\psi s}(4338)^{0}}=0.07\) MeV, respectively [88]. In the Ref. [153], the light cone QCD sum rules method is implemented to calculate the magnetic moments of the \(P^{\Lambda}_{\psi s}(4338)^{0}\) and \(P^{\Lambda}_{\psi s}(4459)^{0}\) states. The \(P^{\Lambda}_{\psi s}(4338)^{0}\) state and the other candidate pentaquark states were investigated in Ref. 
[48] using QCD sum rules method and adopting the molecular structure, and the analyses were in favor of the \(P^{\Lambda}_{\psi s}(4338)^{0}\) state having a \(\Xi_{c}\bar{D}\) molecular structure with spin-parity and isospin quantum numbers \(J^{P}=\frac{1}{2}^{-}\) and \((I,I_{3})=(0,0)\), respectively. The references [154; 155; 116; 156] also investigated the \(P^{\Lambda}_{\psi s}(4338)^{0}\) state in association to the molecular form. As already mentioned, there exist many studies devoted to describing the nature of the pentaquark states. These studies, performed with various approaches covering different structures for the pentaquark states, gave results consistent with the experimentally observed parameters. This fact makes the subject more intriguing and open to new investigations. Therefore it is necessary to provide further information to support or check the proposed alternative structures for a better identification of their obscure sub-structure. Moreover, the works over these exotic states both test our knowledge and provide support for the improvement of our understanding of the QCD in its non-perturbative regime. With these motivations, in the present work, we investigate the strong decay of the recently observed pentaquark state \(P^{\Lambda}_{\psi s}(4338)^{0}\) into \(J/\psi\Lambda\) states, sticking to a meson-baryon molecular interpretation. To this end, we apply the QCD sum rule method, which put forward its success with plenty of predictions consistent with the experimental observations [157; 158; 159]. The interpolating field for the state is chosen in the \(\Xi_{c}\bar{D}\) molecular form. For completeness, firstly, we obtain the mass of the state and current coupling constant using the considered interpolating current, which are subsequently to be used as inputs in strong coupling constant analyses. The rest of the paper has the following organization. In Sec. II the QCD sum rule for the mass of the considered state is presented with the numerical calculation of the corresponding results for the mass and current coupling constant. Sec. III contains the details of the QCD sum rules to calculate the strong coupling constants for the \(P^{\Lambda}_{\psi s}(4338)^{0}\to J/\psi\Lambda\) decay and their numerical analyses as well. Last section gives a short summary and conclusion. ## II QCD sum rule for the mass of \(P^{\Lambda}_{\psi s}(4338)^{0}\) state To better understand the substructure of the pentaquark states, one way is the comparison of the observed properties of these particles with the related theoretical findings. One of the important observables is the mass of these states. Beside the mass, the current coupling constant is also a very important input that is needed to calculate the observables related to the decays of the particles like their width. The present section gives the details of QCD sum rules calculations for the mass and current coupling of the strange pentaquark candidate \(P^{\Lambda}_{\psi s}(4338)^{0}\). The calculations start with the following two-point correlation function: \[\Pi(q)=i\int d^{4}xe^{iq\cdot x}\langle 0|{\cal T}\{J_{P_{cs}}(x)\bar{J}_{P_{cs}} (0)\}|0\rangle. \tag{1}\] In this equation, \({\cal T}\) is the time ordering operator and \(J_{P_{cs}}\) represents the interpolating current for the \(P^{\Lambda}_{\psi s}(4338)^{0}\) pentaquark state, which is denoted as \(P_{cs}\) in what follows. 
The current to interpolate this state is the \(\Xi_{c}\bar{D}\) molecular type with spin-parity \(J^{P}=\frac{1}{2}^{-}\) : \[J_{P_{cs}}=\epsilon^{abc}d_{a}^{T}C\gamma_{5}s_{b}c_{c}\bar{c}_{d}i\gamma_{5}u_ {d}, \tag{2}\] where \(C\) represents the charge conjugation operator, subindices \(a,\ b,\ c,\ d\) are used to represent the color indices, and \(u,\ d,\ s,\ c\) are the quark fields. To proceed in the calculations, one follows two separate paths resulting in two corresponding expressions containing the hadronic parameters on one side and QCD fundamental parameters on the other side. They are therefore called as the hadronic and QCD sides, respectively. The physical parameter under quest is obtained via a match of these two sides by means of a dispersion relation. Both sides contain various Lorentz structures and the matching is carried out considering the same structures obtained in these representations. The Borel transformation and continuum subtraction are the final operations applied on both sides to suppress the contributions of higher states and continuum. For the computation of the hadronic side, a complete set of the intermediate states with same quark content and carrying the same quantum numbers of the considered state is inserted inside the correlator. Treating the interpolating currents as annihilation or creation operators, and performing the integration over four-\(x\) the correlator becomes \[\Pi^{\rm Had}(q)=\frac{\langle 0|J_{P_{cs}}|P_{cs}(q,s)\rangle\langle P_{cs}(q,s)| \bar{J}_{P_{cs}}|0\rangle}{m_{P_{cs}}^{2}-q^{2}}+\cdots\,, \tag{3}\] where the contributions coming from higher states and continuum are represented by \(\cdots\), and one particle pentaquark state with momentum \(q\) and spin \(s\) is represented by \(|P_{cs}(q,s)\rangle\). To proceed, we need the following matrix element: \[\langle 0|\eta_{P_{cs}}|P_{cs}(q,s)\rangle=\lambda_{P_{cs}}u_{P_{cs}}(q,s). \tag{4}\] given in terms of the Dirac spinor \(u_{P_{cs}}(q,s)\) and current coupling constant \(\lambda_{P_{cs}}\). Substituting Eq. (4) into Eq. (3) and applying the summation over spin \[\sum_{s}u_{P_{cs}}(q,s)\bar{u}_{P_{cs}}(q,s)=\not{q}+m_{P_{cs}}, \tag{5}\] the result for the hadronic side is achieved as \[\Pi^{\rm Had}(q)=\frac{\lambda_{P_{cs}}^{2}(\not{q}+m_{P_{cs}})}{m_{P_{cs}}^{ 2}-q^{2}}+\cdots\,, \tag{6}\] which turns into following final form after the Borel transformation: \[\tilde{\Pi}^{\rm Had}(q)=\lambda_{P_{cs}}^{2}e^{-\frac{m_{P_{cs}}^{2}}{M^{2}} }(\not{q}+m_{P_{cs}})+\cdots\,, \tag{7}\] where \(\tilde{\Pi}^{\rm Had}(q)\) denotes the Borel transformed form of the correlator and \(M^{2}\) is the Borel mass parameter. The QCD side of the calculations requires the usage of the interpolating field explicitly in the correlator, Eq. (1). This is followed by the possible contractions of the quark fields via Wick's theorem, which turns the result into the one containing quark propagators as \[\Pi^{\rm QCD}(q)=-i\int d^{4}xe^{iqx}\epsilon_{abc}\epsilon_{a^{\prime}b^{ \prime}c^{\prime}}\Big{\{}{\rm Tr}[S_{s}^{bb^{\prime}}(x)\gamma_{5}CS_{d}^{ Taa^{\prime}}(x)C\gamma_{5}]{\rm Tr}[S_{u}^{dd^{\prime}}(x)\gamma_{5}S_{c}^{ d^{\prime}d}(-x)\gamma_{5}]\Big{\}}S_{c}^{cc^{\prime}}(x). 
\tag{8}\] The light and heavy quark propagators necessary for further calculations have the following explicit forms [160; 161]: \[S_{q,ab}(x) = i\delta_{ab}\frac{\not{x}}{2\pi^{2}x^{4}}-\delta_{ab}\frac{m_{q }}{4\pi^{2}x^{2}}-\delta_{ab}\frac{\langle\overline{q}q\rangle}{12}+i\delta_ {ab}\frac{\not{x}m_{q}\langle\overline{q}q\rangle}{48}-\delta_{ab}\frac{x^{2} }{192}\langle\overline{q}g_{s}\sigma Gq\rangle+i\delta_{ab}\frac{x^{2}\not{x} m_{q}}{1152}\langle\overline{q}g_{s}\sigma Gq\rangle \tag{9}\] \[-i\frac{g_{s}G_{ab}^{\alpha\beta}}{32\pi^{2}x^{2}}\left[\not{x} \sigma_{\alpha\beta}+\sigma_{\alpha\beta}\not{x}\right]-i\delta_{ab}\frac{x^{2 }\not{x}g_{s}^{2}\langle\overline{q}q\rangle^{2}}{7776},\] and \[S_{c,ab}(x) = \frac{i}{(2\pi)^{4}}\int d^{4}ke^{-ik\cdot x}\left\{\frac{\delta _{ab}}{\not{k}-m_{c}}-\frac{g_{s}G_{ab}^{\alpha\beta}}{4}\frac{\sigma_{\alpha \beta}(\not{k}+m_{c})+(\not{k}+m_{c})\sigma_{\alpha\beta}}{(k^{2}-m_{c}^{2})^ {2}}\right. \tag{10}\] \[\left.+\frac{\pi^{2}}{3}\langle\frac{\alpha_{s}GG}{\pi}\rangle \delta_{ij}m_{c}\frac{k^{2}+m_{c}\not{k}}{(k^{2}-m_{c}^{2})^{4}}+\cdots\right\},\] with \(G_{ab}^{\alpha\beta}=G_{A}^{\alpha\beta}t_{ab}^{A}\), \(GG=G_{A}^{\alpha\beta}G_{A}^{\alpha\beta}\); \(a,\ b=1,\ 2,\ 3\); \(A=1,\ 2,\cdots,8\) and \(t^{A}=\frac{\lambda^{A}}{2}\) where \(\lambda^{A}\) are the Gell-Mann matrices. The propagator for \(u\), \(d\) or \(s\) quark is represented by the sub-index \(q\). The final results for this side are obtained after the Fourier and Borel transformations as \[\tilde{\Pi}_{i}^{\rm QCD}(s_{0},M^{2})=\int_{(2m_{c}+m_{s})^{2}}^{s_{0}}dse^{ -\frac{s}{M^{2}}}\rho_{i}(s)+\Gamma_{i}(M^{2}), \tag{11}\] where \(s_{0}\) is the threshold parameter entering the calculations after the continuum subtraction application using the quark hadron duality assumption. \(\rho_{i}(s)\) represents the spectral densities which are the imaginary parts of the results obtained as \(\frac{1}{\pi}\text{Im}\Pi_{i}^{\text{QCD}}\) with \(i\) corresponding to either the result obtained from the coefficient of the Lorentz structure \(\not{q}\) or \(I\). The results of such calculations contain long expressions and, to avoid giving overwhelming expressions in the text, the explicit results of spectral densities will not be presented here. The quantities that we seek in this section, namely mass and the current coupling constant of the pentaquark state, are obtained by the match of the coefficients of the same Lorentz structures obtained in both the hadronic and QCD sides. These matches are represented as \[\lambda_{P_{cs}}^{2}e^{-\frac{m_{P_{cs}}^{2}}{M^{2}}}=\tilde{\Pi}_{\not{q}}^{ \text{QCD}}(s_{0},M^{2}), \tag{12}\] and \[\lambda_{P_{cs}}^{2}m_{P_{cs}}e^{-\frac{m_{P_{cs}}^{2}}{M^{2}}}=\tilde{\Pi}_{I }^{\text{QCD}}(s_{0},M^{2}). \tag{13}\] The next step is the analysis of the obtained results, for which one may apply any of the present structures. To this end, we choose the \(I\) structure. The input parameters needed in the calculation of the mass and current coupling constant are given in Table 1, which are also used for the coupling constant calculations to be given in the next section. In addition to the given input parameters, there are two auxiliary parameters needed in the analyses: the Borel parameter \(M^{2}\) and the continuum threshold \(s_{0}\). Following the standard criteria of the QCD sum rule method, their suitable intervals are fixed. 
These criteria include a relatively slight variation of the results with the change of these auxiliary parameters, the dominant contribution of the focused state compared to the higher states and continuum, and the convergence of the operator product expansion (OPE) used in the QCD side's calculation. Sticking to these criteria, we establish working regions of these parameters from the analyses. Seeking a region, for which the higher-order terms on OPE side contribute less compared to the lowest ones, and the ground state dominates over the higher ones, the working interval of the Borel parameter is determined as: \[3.0\text{ GeV}^{2}\leq M^{2}\leq 4.0\text{ GeV}^{2}. \tag{14}\] The determination of the continuum threshold interval has a connection to the energy of the possible excited states of the considered pentaquark state. With this issue in mind, we fix its interval as \[23\text{ GeV}^{2}\leq s_{0}\leq 25\text{ GeV}^{2}. \tag{15}\] By using all the inputs as well as the working windows of the auxiliary parameters, we depict the variation of the mass with respect to the auxiliary parameters for the considered structure in Fig. 1. This figure shows the mild dependence of the mass on the variations of the auxiliary parameters in their working windows. The residual dependence appear as the uncertainties in the results. The resultant values for the mass and the current coupling constant are: \[m_{P_{cs}}=4338.27\pm 129.99\text{ MeV},\qquad\text{and}\qquad\lambda_{P_{cs}}=(7.24\pm 0.21)\times 10^{-4}\text{ GeV}^{6}. \tag{16}\] \begin{table} \begin{tabular}{|c|c|} \hline \hline Parameters & Values \\ \hline \(m_{c}\) & \(1.27\pm 0.02\) GeV [162] \\ \(m_{b}\) & \(4.18^{+0.03}_{-0.02}\) GeV [162] \\ \(m_{s}\) & \(93^{+11}_{-5}\) MeV [162] \\ \(\langle\bar{q}q\rangle\)(1GeV) & \((-0.24\pm 0.01)^{3}\) GeV\({}^{3}\)[163] \\ \(\langle\bar{s}s\rangle\) & \(0.8\langle\bar{q}q\rangle\)[163] \\ \(m_{0}^{2}\) & \((0.8\pm 0.1)\) GeV\({}^{2}\)[163] \\ \(\langle\overline{q}g_{s}\sigma Gq\rangle\) & \(m_{0}^{2}(\bar{q}q)\) \\ \(\langle\frac{g_{s}}{\pi}G^{2}\rangle\) & \((0.012\pm 0.004)\) GeV\({}^{4}\)[164] \\ \(m_{J/\psi}\) & \((3096.900\pm 0.006)\) MeV [162] \\ \(m_{\Lambda}\) & \((1115.683\pm 0.006)\) MeV [162] \\ \(\lambda_{\Lambda}\) & \((0.013\pm 0.02)\) GeV\({}^{3}\)[165] \\ \(f_{J/\psi}\) & \((481\pm 36)\) MeV [166] \\ \hline \hline \end{tabular} \end{table} Table 1: Some input parameters used in the analyses of mass, current coupling constants and coupling constant of the \(P_{cs}\to J/\psi\Lambda\) decay. The result obtained for the mass has a good consistency with the observed mass of \(P_{\psi s}^{\Lambda}(4338)^{0}\) state announced as \(m_{P_{\psi s}^{\Lambda}}=4338.2\pm 0.7\pm 0.4\) MeV [152]. As is mentioned, the results obtained in this section are necessary inputs for the next section which is devoted to the strong decay of the considered pentaquark state, namely \(P_{\psi s}^{\Lambda}(4338)^{0}\to J/\psi\Lambda\). ## III QCD sum rule to analyze the \(P_{\psi s}^{\Lambda}(4338)^{0}\to J/\psi\Lambda\) decay The bare mass investigations of pentaquark states present in the literature, performed to explain the properties of the newly observed states, indicated that different assumptions for the substructures of these states might give consistent predictions with the observed ones. These necessitate deeper investigations which serve as support for previous findings. 
With this motivation, to clarify more the substructure and the quantum numbers of the observed \(P_{\psi s}^{\Lambda}(4338)^{0}\) state, in this section, we investigate the \(P_{\psi s}^{\Lambda}(4338)^{0}\to J/\psi\Lambda\) decay and calculate its width. To this end, the main ingredients are the strong coupling constants entering the low energy amplitude of the decay. To calculate these coupling constants via the QCD sum rule method, we use the following three-point correlation function: \[\Pi_{\mu}(p,q)=i^{2}\int d^{4}xe^{-ip\cdot x}\int d^{4}ye^{ip^{ \prime}\cdot y}\langle 0|\mathcal{T}\{J^{\Lambda}(y)J_{\mu}^{J/\psi}(0) \bar{J}^{P_{cs}}(x)\}|0\rangle, \tag{17}\] with the interpolating currents given in Eq. (2) and \[J^{\Lambda} = \frac{1}{\sqrt{6}}\epsilon^{lmn}\sum_{i=1}^{2}\Big{[}2(u_{l}^{T} CA_{1}^{i}d_{m})A_{2}^{i}s_{n}+(u_{l}^{T}CA_{1}^{i}s_{m})A_{2}^{i}d_{n}+(d_{n}^{T} CA_{1}^{i}s_{m})A_{2}^{i}u_{l}\Big{]},\] \[J_{\mu}^{J/\psi} = \bar{c}_{l}\gamma_{\mu}c_{l}. \tag{18}\] In Eq. (18) sub-indices, \(l,\ m,\ n\), are used to represent the color indices, and \(u,\ s,\ c\) stand for quark fields, \(A_{1}^{1}=I\), \(A_{1}^{2}=A_{2}^{1}=\gamma_{5}\), \(A_{2}^{2}=\beta\) which is a mixing parameter, and \(C\) represents the charge conjugation operator. Similar steps of the calculation followed in the previous section also apply here. Calculation of hadronic and QCD sides are followed by their proper matches considering the coefficients of the same Lorentz structures from both sides. For the hadronic side, we insert complete sets of hadronic states that have the same quantum numbers with the interpolating fields. Taking the four integral results in \[\Pi_{\mu}^{\rm Had}(p,q)=\frac{\langle 0|J^{\Lambda}|\Lambda(p^{\prime},s^{ \prime})\rangle\langle 0|J_{\mu}^{J/\psi}|J/\psi(q)\rangle\langle J/\psi(q) \Lambda(p^{\prime},s^{\prime})|P_{cs}(p,s)\rangle\langle P_{cs}(p,s)|\bar{J}^ {P_{cs}}|0\rangle}{(m_{\Lambda}^{2}-p^{2})(m_{J/\psi}^{2}-q^{2})(m_{P_{cs}}^{2 }-p^{2})}+\cdots, \tag{19}\] with \(\cdots\) denoting the contribution of the higher states and continuum; and \(p\), \(p^{\prime}\) and \(q\) being the respective momenta of the \(P_{cs}\), \(\Lambda\) and \(J/\psi\) states. The required matrix elements for calculations have the following forms: \[\langle 0|J^{P_{cs}}|P_{cs}(p,s)\rangle = \lambda_{P_{cs}}u_{P_{cs}}(p,s),\] \[\langle 0|J^{\Lambda}|\Lambda(p^{\prime},s^{\prime})\rangle = \lambda_{\Lambda}u_{\Lambda}(p^{\prime},s^{\prime}),\] \[\langle 0|J_{\mu}^{J/\psi}|J/\psi(q)\rangle = f_{J/\psi}m_{J/\psi}\varepsilon_{\mu}, \tag{20}\] Figure 1: **Left:** Variation of the the mass as function of \(M^{2}\) at different values of threshold parameter \(s_{0}\). **Right:** Variation of the the mass as function of \(s_{0}\) at different values of threshold parameter \(M^{2}\). where \(\varepsilon_{\mu}\) and \(f_{J/\psi}\) represent the polarization vector and the decay constant of the \(J/\psi\) state ; and \(\lambda_{P_{cs}}\) and \(\lambda_{\Lambda}\) are the current coupling constants of the \(P_{cs}\) and \(\Lambda\) states, respectively. \(|P_{cs}(p,s)\rangle\) corresponds to the one-particle pentaquark state with its spinor \(u_{P_{cs}}\) and \(u_{\Lambda}\) is the spinor of \(\Lambda\) state. 
The matrix element,\(\langle J/\psi(q)\Lambda(p^{\prime},s^{\prime})|P_{cs}(p,s)\rangle\) is given in terms of the considered strong coupling constants, \(g_{1}\) and \(g_{2}\), as \[\langle J/\psi(q)\Lambda(p^{\prime},s^{\prime})|P_{cs}(p,s)\rangle= \varepsilon^{*\mu}\bar{u}_{\Lambda}(p^{\prime},s^{\prime})\big{[}g_{1}\gamma_{ \mu}-\frac{i\sigma_{\mu\alpha}}{m_{\Lambda}+m_{P_{cs}}}q^{\alpha}g_{2}\big{]} \gamma_{5}u_{P_{cs}}(p,s). \tag{21}\] Substituting the matrix elements, Eq. (20) and Eq. (21), into the Eq. (19) using the summation over the spins of the spinors and polarization vector given as \[\sum_{s}u_{P_{cs}}(p,s)\bar{u}_{P_{cs}}(p,s) = (p\!\!\!/+m_{P_{cs}}),\] \[\sum_{s^{\prime}}u_{\Lambda}(p^{\prime},s^{\prime})\bar{u}_{ \Lambda}(p^{\prime},s^{\prime}) = (p\!\!\!/^{\prime}+m_{\Lambda}),\] \[\varepsilon_{\alpha}\varepsilon_{\beta}^{*} = -g_{\alpha\beta}+\frac{q_{\alpha}q_{\beta}}{m_{J/\psi}^{2}}, \tag{22}\] the hadronic side is achieved as \[\tilde{\Pi}_{\mu}^{\rm Had}(p,q) = e^{-\frac{m_{P_{s}}^{2}}{M^{2}}}e^{-\frac{m_{\Lambda}^{2}}{M^{ \prime 2}}}\frac{f_{J/\psi}\lambda_{\Lambda}\lambda_{P_{cs}}m_{\Lambda}}{m_{J/ \psi}(m_{\Lambda}+m_{P_{cs}})(m_{J/\psi}^{2}+Q^{2})}\big{[}-g_{1}(m_{\Lambda} +m_{P_{cs}})^{2}+g_{2}m_{J/\psi}^{2}\big{]}\not{p}p_{\mu}\gamma_{5} \tag{23}\] \[+ e^{-\frac{m_{P_{s}}^{2}}{M^{2}}}e^{-\frac{m_{\Lambda}^{2}}{M^{ \prime 2}}}\frac{f_{J/\psi}\lambda_{\Lambda}\lambda_{P_{cs}}m_{J/\psi}m_{\Lambda}}{ (m_{\Lambda}+m_{P_{cs}})(m_{J/\psi}^{2}+Q^{2})}\big{[}g_{1}(m_{\Lambda}+m_{P_ {cs}})+g_{2}(m_{\Lambda}-m_{P_{cs}})\big{]}\not{p}\gamma_{\mu}\gamma_{5}\] \[+ {\rm other\ structures}+\cdots\,.\] Among the present Lorentz structures the ones used in the analyses are given explicitly, and the remaining ones are represented by other structures. Here \(Q^{2}=-q^{2}\). The Borel parameters \(M^{2}\) and \(M^{\prime 2}\), present in the last result, are determined from the analyses following similar criteria given in the previous section. As for the QCD side, the insertion of the interpolating currents given in Eqs. (2) and (18) inside the correlator in Eq. (17), and after the possible contractions of quark fields using the Wick's theorem, the result takes the following form in terms of the quark propagators: \[\Pi_{\mu}^{\rm QCD}(p,q) = i^{2}\int d^{4}xe^{-ip\cdot x}\int d^{4}ye^{ip^{\prime}\cdot y} \frac{1}{\sqrt{6}}\epsilon^{abc}\epsilon^{a^{\prime}b^{\prime}c^{\prime}} \sum_{i=1}^{2}\Big{[}{\rm Tr}[S_{s}^{bb^{\prime}}(y-x)\gamma_{5}CS_{d}^{Tc^{ \prime}a^{\prime}}(y-x)CA_{1}^{i}]A_{2}^{i}S_{u}^{ad^{\prime}}(y-x) \tag{24}\] \[\times \gamma_{5}S_{e}^{df^{\prime}1}(x)\gamma_{\mu}S_{c}^{lc^{\prime}} (-x)-2A_{1}^{i}S_{s}^{cb^{\prime}}(y-x)\gamma_{5}CS_{d}^{Tba^{\prime}}(y-x)CA_ {1}^{i}S_{u}^{ad^{\prime}}(y-x)\gamma_{5}S_{e}^{df^{\prime}1}(x)\gamma_{\mu}S_ {c}^{lc^{\prime}}(-x)\] \[- A_{2}^{i}S_{d}^{ca^{\prime}}(y-x)\gamma_{5}CS_{s}^{Tb^{\prime}}(y -x)CA_{1}^{i}S_{u}^{ad^{\prime}}(y-x)\gamma_{5}S_{c}^{cd^{\prime}1}(x)\gamma_{ \mu}S_{c}^{lc^{\prime}}(-x)\Big{]}.\] Considering the same Lorentz structures given explicitly in Eq. (23), we obtain the QCD side and represent the lengthy results shortly as in the following form: \[\Pi_{\mu}^{\rm QCD}(p,q)=\Pi_{1}\not{p}p_{\mu}\gamma_{5}+\Pi_{2}\not{p} \gamma_{\mu}\gamma_{5}+{\rm other\ structures}. \tag{25}\] To proceed in the calculations, the light and heavy quark propagators given in Eqs. (9) and (10) are used explicitly and the four-dimensional Fourier integrals are performed. 
The imaginary parts of the obtained results constitute the spectral densities to be used in the following relation: \[\Pi_{i}=\int ds\int ds^{\prime}\frac{\rho_{i}^{\rm pert}(s,s^{\prime},q^{2})+ \rho_{i}^{\rm non-pert}(s,s^{\prime},q^{2})}{(s-p^{2})(s^{\prime}-p^{\prime 2})}, \tag{26}\] where \(\rho_{i}(s,s^{\prime},q^{2})=\frac{1}{\pi}{\rm Im}[\Pi_{i}]\); and \(\rho_{i}^{\rm pert}(s,s^{\prime},q^{2})\) and \(\rho_{i}^{\rm non-pert}(s,s^{\prime},q^{2})\) represent the results of the perturbative and non-perturbative parts, respectively, with \(i=1,\ 2,..,12\) corresponding to all the Lorentz structures existing in the results. The analyses in the present work are performed via resulting matches of the hadronic and QCD sides obtained from the structures \(i=1,\ 2\), which results in two coupled sum rules equations including both \(g_{1}\) and \(g_{2}\): \[e^{-\frac{m_{P_{s}}^{2}}{M^{2}}}e^{-\frac{m_{\Lambda}^{2}}{M^{ \prime 2}}}\frac{f_{J/\psi}\lambda_{\Lambda}\lambda_{P_{cs}}m_{\Lambda}}{m_{J/ \psi}(m_{\Lambda}+m_{P_{cs}})(m_{J/\psi}^{2}+Q^{2})}\big{[}-g_{1}(m_{\Lambda}+m_ {P_{cs}})^{2}+g_{2}m_{J/\psi}^{2}\big{]}=\tilde{\Pi}_{1}, \tag{27}\] \[e^{-\frac{m_{P_{s}}^{2}}{M^{\prime 2}}}e^{-\frac{m_{\Lambda}^{2}}{M^{ \prime 2}}}\frac{f_{J/\psi}\lambda_{\Lambda}\lambda_{P_{cs}}m_{J/\psi}m_{\Lambda}}{(m_{ \Lambda}+m_{P_{cs}})(m_{J/\psi}^{2}+Q^{2})}\big{[}g_{1}(m_{\Lambda}+m_{P_{cs}})+g_ {2}(m_{\Lambda}-m_{P_{cs}})\big{]}=\tilde{\Pi}_{2}, \tag{28}\] where the Borel transformations on the variables \(-p^{\prime 2}\) and \(-p^{2}\) have been performed, and \(\tilde{\Pi}_{i}\) in the results represent the Borel transformed \(\Pi_{i}\) expressions obtained in the QCD side. Solution of these equations for \(g_{1}\) and \(g_{2}\) give \[g_{1} = e^{\frac{m_{P_{ex}}^{2}}{M^{2}}}e^{\frac{m_{\Lambda}^{2}}{M^{ \prime 2}}}\frac{m_{J/\psi}(m_{J/\psi}^{2}+Q^{2})\left[(m_{P_{ex}}-m_{\Lambda}) \tilde{\Pi}_{1}+\tilde{\Pi}_{2}\right]}{f_{J/\psi}\lambda_{\Lambda}\lambda_{P_ {ex}}m_{\Lambda}(m_{\Lambda}^{2}+m_{J/\psi}^{2}-m_{P_{ex}}^{2})},\] \[g_{2} = e^{\frac{m_{P_{ex}}^{2}}{M^{2}}}e^{\frac{m_{\Lambda}^{2}}{M^{ \prime 2}}}\frac{(m_{P_{ex}}+m_{\Lambda})(m_{J/\psi}^{2}+Q^{2})\left[m_{J/ \psi}^{2}\tilde{\Pi}_{1}+(m_{P_{ex}}+m_{\Lambda})\tilde{\Pi}_{2}\right]}{f_{ J/\psi}\lambda_{\Lambda}\lambda_{P_{ex}}m_{\Lambda}m_{J/\psi}(m_{\Lambda}^{2}+m_{J/ \psi}^{2}-m_{P_{ex}}^{2})}. \tag{29}\] The numerical analyses of the \(g_{1}\) and \(g_{2}\) given in the Eq. (29) require the input parameters given in Table 1 and some additional auxiliary parameters such as Borel parameters \(M^{2}\), \(M^{\prime 2}\) and threshold parameters \(s_{0}\) and \(s_{0}^{\prime}\) and the mixing parameter \(\beta\) present in the interpolating current of the \(\Lambda\) state. The similar standard criteria of the method used for the mass calculation in the previous section, namely weak dependence on the auxiliary parameters, pole dominance, and the convergence of the operator product expansion (OPE) used on the QCD side, are applied in the determination of the auxiliary parameters of this section as well. Taking into account their relations and considering the possible excited resonances of the considered states, the threshold parameters are fixed as: \[23.0\ {\rm GeV}^{2} \leq s_{0}\leq 25.0\ {\rm GeV}^{2},\] \[1.7\ {\rm GeV}^{2} \leq s_{0}^{\prime}\leq 2.3\ {\rm GeV}^{2}, \tag{30}\] in which the interval of \(s_{0}\) is the same as the one used in the previous section. 
For the Borel parameters, considering the pole dominance and the convergence of the OPE leads us to the following intervals: \[3.0\ {\rm GeV}^{2} \leq M^{2}\leq 4.0\ {\rm GeV}^{2},\] \[1.5\ {\rm GeV}^{2} \leq M^{\prime 2}\leq 2.5\ {\rm GeV}^{2}. \tag{31}\] \(M^{2}\) again spans the same interval given in the previous section. The working interval of the last auxiliary parameter, \(\beta\), is determined from a parametric plot of the results given as a function of \(\cos\theta\) with \(\beta=\tan\theta\), in which the relatively stable regions are considered to fix the \(\beta\) intervals. These analyses give the following intervals: \[-1.0\leq\cos\theta\leq-0.5\ \ \ \ \ {\rm and}\ \ \ \ \ \ 0.5\leq\cos\theta\leq 1.0. \tag{32}\] In all of these intervals for the auxiliary parameters, we expect weak dependence of the results on these parameters. To depict this, we provide the graphs of the strong coupling constant \(g_{1}\) as functions of these auxiliary parameters in Fig. 2 as examples. The criteria are satisfied, the dependencies are mild, and the uncertainties remain inside the limits allowed by the method. The analyses give reliable results only in some regions of \(Q^{2}\), and therefore, to get the coupling constants' values at \(Q^{2}=-m_{J/\psi}^{2}\), we need to extend the analyses to the region of interest using a proper fit function given as \[g_{i}(Q^{2})=g_{0}\,e^{c_{1}\frac{Q^{2}}{m_{P_{cs}}^{2}}+c_{2}\left(\frac{Q^{2}}{m_{P_{cs}}^{2}}\right)^{2}}. \tag{33}\] Figure 2: **Left:** Variation of the coupling constant \(g_{1}\) as a function of \(M^{2}\) and \(M^{\prime 2}\) at central values of the threshold parameters \(s_{0}\) and \(s_{0}^{\prime}\) and at \(Q^{2}=2.5\ {\rm GeV}^{2}\). **Right:** Variation of the coupling constant \(g_{1}\) as a function of \(s_{0}\) and \(s_{0}^{\prime}\) at central values of the Borel parameters \(M^{2}\) and \(M^{\prime 2}\) and at \(Q^{2}=2.5\ {\rm GeV}^{2}\). The fit parameters providing a good overlap with the results in the reliable region of the QCD sum rule analyses, and the values of the coupling constants obtained from the fit functions at \(Q^{2}=-m_{J/\psi}^{2}\), are presented in Table 2. The results contain the errors arising from the uncertainties inherited from both the input parameters and the determinations of the intervals of the auxiliary parameters. The strong coupling constants determined from the QCD sum rules analyses are applied for the width calculation of the decay \(P_{cs}\to J/\psi\Lambda\), which is performed via the relation \[\Gamma =\frac{f(m_{P_{cs}},m_{J/\psi},m_{\Lambda})}{16\pi m_{P_{cs}}^{2}} \Bigg{[}-\frac{2(m_{J/\psi}^{2}-(m_{\Lambda}+m_{P_{cs}})^{2})}{m_{J/\psi}^{2}( m_{\Lambda}+m_{P_{cs}})^{2}}\Big{(}g_{2}^{2}m_{J/\psi}^{2}(m_{J/\psi}^{2}+2(m_{ \Lambda}-m_{P_{cs}})^{2})\] \[+6g_{1}g_{2}m_{J/\psi}^{2}(m_{\Lambda}-m_{P_{cs}})(m_{\Lambda}+m_ {P_{cs}})+g_{1}^{2}(2m_{J/\psi}^{2}+(m_{\Lambda}-m_{P_{cs}})^{2})(m_{\Lambda}+ m_{P_{cs}})^{2}\Big{)}\Bigg{]}. \tag{34}\] The function \(f(x,y,z)\) in the width formula is given as \[f(x,y,z)=\frac{1}{2x}\sqrt{x^{4}+y^{4}+z^{4}-2x^{2}y^{2}-2x^{2}z^{2}-2y^{2}z^{2}}. \tag{35}\] The result obtained for the width is \[\Gamma(P_{cs}\to J/\psi\Lambda)=(7.22\pm 1.78)\,\,\,\mathrm{MeV}, \tag{36}\] in nice agreement with the experiment. ## IV Summary and conclusion In a recent report, the LHCb collaboration announced the observation of a new candidate pentaquark state with strangeness in the \(J/\psi\Lambda\) channel.
The observed mass and the width of the state were reported as \(m=4338.2\pm 0.7\pm 0.4\,\,\mathrm{MeV}\) and \(\Gamma=7.0\pm 1.2\pm 1.34\,\,\mathrm{MeV}\), respectively [152] with preferred spin and parity quantum numbers being \(J^{P}=\frac{1}{2}^{-}\). To elucidate its inner structure and certify its quantum numbers, further theoretical investigations are necessary. With this purpose, in the present work, the \(P_{\psi s}^{\Lambda}(4338)^{0}\) state was assigned a molecular \(\Xi_{c}\bar{D}\) structure with spin parity \(J^{P}=\frac{1}{2}^{-}\) and its decay to \(J/\psi\Lambda\) states was investigated using the three-point QCD sum rule approach. For completeness, firstly, the chosen interpolating current was applied to calculate the mass and the current coupling constant of the considered state using the two-point QCD sum rule method. These quantities are main inputs in the decay calculations. The obtained mass, \(m_{P_{cs}}=4338.27\pm 129.99\,\,\mathrm{MeV}\), is in good consistency with the observed one. Our prediction for the mass is also consistent with the mass predictions based on the molecular assumption present in the literature, such as \(m=4327.4\,\,\mathrm{MeV}\)[84], \(m=4341.0\,\,\mathrm{MeV}\)[88], \(m=4336.34\,\,\mathrm{MeV}\) and \(m=4329.11.34\,\,\mathrm{MeV}\)[87], and \(m=4.34^{+0.07}_{-0.07}\,\,\mathrm{GeV}\)[48]. As stated, the predicted mass and the current coupling constant comprise the main input parameters for the width calculation of the \(P_{\psi s}^{\Lambda}(4338)^{0}\to J/\psi\Lambda\) channel as dominant decay of the considered state. To compute the width of this channel, we first calculated the relevant strong coupling constants and subsequently used them to get the corresponding width. The resultant width is obtained as \(\Gamma(P_{cs}\to J/\psi\Lambda)=(7.22\pm 1.78)\,\,\mathrm{MeV}\) which also agrees well with the experimentally observed width. The results obtained for the mass and width in this study are consistent with the experimental findings and favor the \(\Xi_{c}\bar{D}\) molecular nature of the \(P_{\psi s}^{\Lambda}(4338)^{0}\) state with quantum numbers \(J^{P}=\frac{1}{2}^{-}\). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \hline Coupling constant & \(g_{0}\) & \(c_{1}\) & \(c_{2}\) & \(g_{i}(-m_{J/\psi}^{2})\) \\ \hline \hline \(g_{1}\) & \(-1.10\pm 0.13\) & \(6.43\) & \(-26.13\) & \((-4.71\pm 0.52)\times 10^{-5}\) \\ \(g_{2}\) & \(15.57\pm 1.86\) & \(4.43\) & \(-3.81\) & \(0.61\pm 0.07\) \\ \hline \hline \end{tabular} \end{table} Table 2: Values of the fit parameters for the fit functions of coupling constants, \(g_{1}\) and \(g_{2}\) and the coupling constant values at \(Q^{2}=-m_{J/\psi}^{2}\). ## Acknowledgements K. Azizi is thankful to Iran Science Elites Federation (Saramadan) for the partial financial support provided under the grant number ISEF/M/401385.
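As a quick, purely numerical cross-check of Eqs. (34)-(36), the short sketch below re-evaluates the two-body width using the central masses quoted above and the fitted couplings \(g_{1}\) and \(g_{2}\) at \(Q^{2}=-m_{J/\psi}^{2}\) from Table 2, with \(f(x,y,z)\) in its Källén form. Within the rounding of the tabulated couplings it reproduces a width of order 7 MeV; this is an illustrative re-evaluation only and not part of the original analysis.

```python
# Re-evaluate the P_cs -> J/psi Lambda width of Eq. (34) with central values.
# Masses in GeV; g1, g2 are the fitted couplings at Q^2 = -m_{J/psi}^2 (Table 2).
import math

m_P, m_J, m_L = 4.33827, 3.096900, 1.115683
g1, g2 = -4.71e-5, 0.61

def kallen_momentum(x, y, z):
    """f(x, y, z) = sqrt(x^4 + y^4 + z^4 - 2x^2y^2 - 2x^2z^2 - 2y^2z^2) / (2x)."""
    lam = x**4 + y**4 + z**4 - 2*(x*y)**2 - 2*(x*z)**2 - 2*(y*z)**2
    return math.sqrt(lam) / (2*x)

pref = kallen_momentum(m_P, m_J, m_L) / (16 * math.pi * m_P**2)
bracket = (-2 * (m_J**2 - (m_L + m_P)**2) / (m_J**2 * (m_L + m_P)**2)) * (
    g2**2 * m_J**2 * (m_J**2 + 2*(m_L - m_P)**2)
    + 6*g1*g2 * m_J**2 * (m_L - m_P) * (m_L + m_P)
    + g1**2 * (2*m_J**2 + (m_L - m_P)**2) * (m_L + m_P)**2
)
width_MeV = pref * bracket * 1e3   # GeV -> MeV
print(f"Gamma(P_cs -> J/psi Lambda) ~ {width_MeV:.1f} MeV")  # ~7.5 MeV, cf. 7.22 +/- 1.78 MeV
```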
2307.02543
Discovery of Dipolar Chromospheres in Two White Dwarfs
This paper reports the ULTRACAM discovery of dipolar surface spots in two cool magnetic white dwarfs with Balmer emission lines, while a third system exhibits a single spot, similar to the prototype GD 356. The light curves are modeled with simple, circular, isothermal dark spots, yielding relatively large regions with minimum angular radii of 20 deg. For those stars with two light curve minima, the dual spots are likely observed at high inclination (or colatitude), however, identical and antipodal spots cannot simultaneously reproduce both the distinct minima depths and the phases of the light curve maxima. The amplitudes of the multi-band photometric variability reported here are all several times larger than that observed in the prototype GD 356; nevertheless, all DAHe stars with available data appear to have light curve amplitudes that increase toward the blue in correlated ratios. This behavior is consistent with cool spots that produce higher contrasts at shorter wavelengths, with remarkably similar spectral properties given the diversity of magnetic field strengths and rotation rates. These findings support the interpretation that some magnetic white dwarfs generate intrinsic chromospheres as they cool, and that no external source is responsible for the observed temperature inversion. Spectroscopic time-series data for DAHe stars is paramount for further characterization, where it is important to obtain well-sampled data, and consider wavelength shifts, equivalent widths, and spectropolarimetry.
J. Farihi, J. J. Hermes, S. P. Littlefair, I. D. Howarth, N. Walters, S. G. Parsons
2023-07-05T18:00:03Z
http://arxiv.org/abs/2307.02543v2
# Discovery of Dipolar Chromospheres in Two White Dwarfs+ ###### Abstract This paper reports the ULTRACAM discovery of dipolar surface spots in two cool magnetic white dwarfs with Balmer emission lines, while a third system exhibits a single spot, similar to the prototype GD 356. The light curves are modeled with simple, circular, isothermal dark spots, yielding relatively large regions with minimum angular radii of \(20^{\circ}\). For those stars with two light curve minima, the dual spots are likely observed at high inclination (or colatitude), however, identical and antipodal spots cannot simultaneously reproduce both the distinct minima depths and the phases of the light curve maxima. The amplitudes of the multi-band photometric variability reported here are all several times larger than that observed in the prototype GD 356; nevertheless, all DAHe stars with available data appear to have light curve amplitudes that increase toward the blue in correlated ratios. This behavior is consistent with cool spots that produce higher contrasts at shorter wavelengths, with remarkably similar spectral properties given the diversity of magnetic field strengths and rotation rates. These findings support the interpretation that some magnetic white dwarfs generate intrinsic chromospheres as they cool, and that no external source is responsible for the observed temperature inversion. Spectroscopic time-series data for DAHe stars is paramount for further characterization, where it is important to obtain well-sampled data, and consider wavelength shifts, equivalent widths, and spectropolarimetry. keywords: stars: evolution-- stars: magnetic field-- white dwarfs ## 1 Introduction The origin of magnetism in white dwarf stars is an outstanding astrophysical puzzle more than a half century old, but recent and ongoing developments are now shedding light on this fundamental, and still poorly understood aspect of stellar evolution. The first signatures of white dwarf magnetism resulted from the detection of circular polarization in spectra that were quasi-featureless or with unidentified absorption bands (Kemp et al., 1970; Angel & Landstreet, 1971; Landstreet and Angel, 1971) that were later understood to be shifted hydrogen and (neutral) helium lines, mostly consistent with centered or offset dipole field geometries (Kemic, 1974; Garstang, 1977; Wickramasinghe & Martin, 1979). A summary of magnetic white dwarf research over the first several decades can be found in two published reviews (Wickramasinghe & Ferrario, 2000; Ferrario et al., 2015). One of the key developments was the recognition that magnetic white dwarfs are nearly exclusively found as isolated stars, or in cataclysmic variables (Liebert et al., 2005). This empirical finding led to the hypothesis that fields are generated during common envelope evolution (Tout et al., 2008; Nordhaus et al., 2011; Belloni & Schreiber, 2020), a process that may function effectively for stars, brown dwarfs, and giant planets that are engulfed during the post-main sequence (Farihi et al., 2011; Kissin & Thompson, 2015; Guidarelli et al., 2019). And while fast-spinning and massive magnetic white dwarfs are known, and thus consistent with a stellar merger origin (Ferrario et al., 1997); Garcia-Berro et al. (2012); Kilic et al. (2021); Williams et al. (2022), it is also clear that magnetism, high remnant mass, and rapid rotation are far from tightly correlated (Ferrario & Wickramasinghe, 2005; Brinkworth et al., 2013). 
It has been suspected for decades that cooler white dwarfs are more often found to be magnetic (Liebert & Sion, 1979; Liebert et al., 2003). However, luminosity and sensitivity biases exist, where the coolest white dwarfs essentially require metal pollution to detect Zeeman splitting (Kawka & Vennes, 2014; Hollands et al., 2015; Bagnulo & Landstreet, 2019). Despite these uncertainties, the possibility that magnetic fields first emerge in cool and isolated white dwarfs is intriguing, as substantial cooling is necessary for core crystallization, which has been hypothesized to be a source of an internal dynamo powered by the liquid-solid phase separation at the core boundary (Isern et al., 2017). In this scenario, magnetic field generation is decoupled from external sources of mass and angular momentum, but nevertheless, all else being equal, more rapidly rotating remnants should have stronger fields. In a pioneering effort to overcome the aforementioned biases, and determine the actual frequency of magnetism as a function of white dwarf characteristics, Bagnulo & Landstreet (2021) carried out a nearly complete census of (\(N\approx 150\)) white dwarfs within 20 pc. This volume-limited survey used sensitive circular spectropolarimetry and resulted in the first unbiased study of white dwarf magnetism, where the principal findings can be summarized as follows. 1. All spectral classes have similar incidences of magnetism, regardless of atmospheric composition. 2. The field strength distribution is uniform over four orders of magnitude from 40 kG to 300 MG. 3. Magnetism is detected more frequently in white dwarfs with higher than average mass. 4. White dwarfs with cooling ages younger than 0.5 Gyr - prior to core crystallization - are rarely magnetic. 5. There is no evidence of field strength decay over time. It is within this background of recent developments that emerged the relatively new and small class of DAHe white dwarfs (D: degenerate, A: Balmer lines strongest, H: magnetic line splitting, e: emission). The prototype is GD 356, an isolated \(T_{\rm eff}\approx 7500\) K star with Balmer emission lines split in a \(B\approx 13\) MG field. There are deep, multi-wavelength, non-detections that yield stringent upper limits on an X-ray corona, ongoing accretion, and low-mass companions (Greenstein and McCarthy, 1985; Ferraro et al., 1997; Weisskopf et al., 2007). This apparently single white dwarf has a 1.927 h rotation period, based on a nearly sinusoidal light curve that is well modeled by single dark spot, whose size is consistent with that of the magnetic and heated region (Ferraro et al., 1997; Brinkworth et al., 2004). These enigmatic properties led to the hypothesis that, analogous to the Jupiter-Io system, the relatively cool white dwarf surface could be heated by Ohmic dissipation of a current loop set up by the orbital motion of a conducting planet (Li et al., 1998; Wickramasinghe et al., 2010); referred to as the unipolar inductor model. GD 3561 was the only known DAHe white dwarf for 35 years, until 2020 when second and third cases were reported (Reding et al., 2020; Gansicke et al., 2020). In addition to their shared spectral morphology and strong magnetism implied from Zeeman splitting, these three cool white dwarfs with emission lines all share commonalities with some magnetic white dwarfs: relatively rapid rotation, masses only slightly above average, and no evidence for low-mass stellar or substellar (detached) companions. 
A detailed time-series study of the prototype has shown that (i) the spin period is stable over two decades, with no other independent frequency signals as would be expected from a unipolar inductor, (ii) the emission line strength oscillates in anti-phase with the broad-band stellar brightness, and (iii) so far, DAHe stars share a tightly correlated set of effective temperatures and luminosities (Gansicke et al., 2020; Walters et al., 2021). This clustering is potentially related to core crystallization and magnetic field diffusion toward the stellar surface (Ginzburg et al., 2022). Footnote 1: Previously thought to have a helium-rich atmosphere (Bergeron et al., 2001; Limoges et al., 2015). This paper reports detailed light curves for three DAHe white dwarfs: the second known example, SDSS J125230.93\(-\)023417.7 (Reding et al., 2020; hereafter SDSS J1252), and two recently identified members of this class, LP 705-64 and WD J143019.29\(-\)562358.3 (Reding et al., 2023; hereafter WD J1430). Two of the three stars reveal light curves with asymmetric dimming events that are 180\({}^{\rm o}\) out-of-phase, and thus consistent with dipolar star spots. These data are inconsistent with a unipolar inductor model, and instead support the generation of intrinsic chromospheres in some isolated, magnetic white dwarfs. The observations and data are discussed in Section 2, the time-series analysis is presented in Section 3, followed by a summary and discussion. ## 2 Observations This study focuses on light curves and the resulting periodicities of three DAHe white dwarfs, using both ground- and space-based photometric monitoring as described below. ### Target properties and selection SDSS J1252 is the second discovered example of a DAHe white dwarf, reported to have emission lines split in a \(B\approx 5\) MG field, and with a sinusoidal light curve dominated by a period of 317.3 s (Reding et al., 2020). The fast rotation of this star makes it an attractive target for high-cadence photometric monitoring from the ground, with a goal to obtain a detailed light curve. LP 705-64 and WD J1430 are two newer members of the DAHe spectral class with indications from _TESS_ data that their full spin cycles could each be readily covered in a single night of ground-based photometry (Reding et al., 2023). The initial observational goals were similar to those achieved by Walters et al. (2021), to establish robust ephemerides against which future period changes might be investigated (e.g. within a unipolar inductor and orbiting planet model), and to constrain the nature of the emitting and magnetic regions. ### ULTRACAM observations All three stars were observed with ULTRACAM, a frame-transfer CCD imaging camera (24 ms dead time between exposures; Dhillon et al., 2007) that is permanently mounted on the 3.6 m NTT telescope at the La Silla Observatory in Chile. The instrument has three independent channels that enable the use of independent filters simultaneously, and data were taken with filters similar to standard \(u\), \(g\), and one of \(r\) or \(i\) bandpasses, but with higher throughput. In each case, the blue channel was co-added every three frames to improve the effective signal-to-noise on the target. The observation details, including exposure times (same as the cadences for ULTRACAM), and durations of the resulting light curve segments, are summarized in Table 1. 
Images were corrected for bias and flat fielded with normalized sky images obtained during evening twilight (taken in a continuous spiral to remove stars). Differential brightnesses were measured relative to field stars with dedicated software2 using photometric apertures that were typically scaled to 2\(\times\) the mean full width at half maximum of the stellar profiles for each exposure. The sky annuli were fixed to span the region 8.75-15.75 arcsec from the stars, where a clipped mean was used to determine the background. For all stars in all observations, the same sets of comparison stars were used to generate light curves, consisting of two or three stars in the \(gri\) frames, and one to two stars in the \(u\)-band images. Light curves were constructed by dividing the science target flux by the sum of the comparison star fluxes, and normalizing the result. Measurement errors were propagated from the aperture photometry, by summing in quadrature the fractional flux errors of all stars measured for a given light curve. All ULTRACAM times were converted to Barycentric Julian Day (BJD) using Barycentric Dynamical Time (TDB), following Eastman et al. (2010). \begin{table} \begin{tabular}{l c c c c} \hline Target & Observing & \(t_{\rm exp}\) & Coverage & Filters \\ & Dates & (s) & (min) & \\ \hline SDSS J1252 & 2021 Apr 06 & 10.05 & 57 & \(ugr\) \\ & 2021 Apr 08 & 10.35 & 86 & \(ugr\) \\ & 2021 Aug 20 & 10.05 & 24 & \(ugr\) \\ LP 705-64 & 2021 Aug 17 & 8.05 & 73 & \(ugi\) \\ & 2021 Aug 19 & 8.05 & 77 & \(ugr\) \\ & 2021 Aug 20 & 8.05 & 146 & \(ugr\) \\ WD J1430 & 2022 Apr 26 & 6.06 & 277 & \(ugi\) \\ & 2022 Jun 05 & 6.33 & 176 & \(ugi\) \\ \hline \end{tabular} \end{table} Table 1: Chronological summary of ULTRACAM observing runs. ### _Tess_ data Data for each of the three DAHe targets are available from _TESS_ (Ricker et al., 2015), and were downloaded from the MAST archive, where the pdcsap processed light curves were retained for analysis. Time stamps were corrected to BJD = _TESS_ BJD + 2457000. LP 705-64 (= TIC 136884288) was observed in Sector 30, while data were collected for WD J1430 (= TIC 139012860) within Sector 38, and for SDSS J1252 (= TIC 953086708) during Sector 46. All three stars have 120 s cadence observations. These data were further cleaned of NaN flux entries, but with no other processing based on data quality flags, yielding light curves that retained between 80 and 90 per cent of their pdcsap array values. Lastly, outliers beyond \(\pm 5\sigma\) of the local time average (or phase average) flux were removed, which were fewer than five points in total for each source. It is worth noting that these data are not all equally useful in subsequent analysis. The following _TESS_ benchmarks summarize their relative quality: SDSS J1252 has \(G=17.5\) mag, a mean flux of \(19.2\pm 5.3\) e\({}^{-}\) s\({}^{-1}\) (28 per cent scatter); LP 705-64 has \(G=16.9\) mag, a mean flux of \(38.0\pm 5.6\) e\({}^{-}\) s\({}^{-1}\) (15 per cent scatter), while WD J1430 has \(G=17.4\) mag, a mean flux of \(9.6\pm 5.4\) e\({}^{-}\) s\({}^{-1}\) (73 per cent scatter), and lies within the Galactic plane. ## 3 Time-series analysis and results All light curves were analyzed using period04 (Lenz & Breger, 2005), where a Lomb-Scargle periodogram was constructed using ULTRACAM data, _TESS_ pdcsap light curves, or a combination of the two datasets, with the goal of identifying which produces the most precise ephemerides for each target.
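The following minimal sketch illustrates the periodogram step described above on synthetic data, using astropy's Lomb-Scargle implementation rather than period04; the frequencies, amplitudes, and noise level are assumed, illustrative values chosen to mimic a two-spot light curve, not the actual measurements.

```python
# Illustrative Lomb-Scargle sketch with a synthetic two-spot light curve
# (astropy is used here for convenience; the analysis in the text used period04).
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
f_rot = 136.15                                   # assumed rotation frequency [1/d]
t = np.sort(rng.uniform(0.0, 0.06, 2000))        # ~86 min of fast-cadence data [d]

# Two unequal minima per rotation put most of the power at 2*f_rot and 3*f_rot.
flux = (1.0 - 0.02 * np.cos(2 * np.pi * 2 * f_rot * t)
            - 0.01 * np.cos(2 * np.pi * 3 * f_rot * t)
            + rng.normal(0.0, 0.005, t.size))

freq, power = LombScargle(t, flux).autopower(minimum_frequency=50.0,
                                             maximum_frequency=1000.0)
f1 = freq[np.argmax(power)]                      # strongest peak (~2*f_rot)
mask = np.abs(freq - f1) > 50.0                  # exclude its neighbourhood
f2 = freq[mask][np.argmax(power[mask])]          # next independent peak (~3*f_rot)
print(f"peaks near {f1:.1f} and {f2:.1f} 1/d; ratio {max(f1, f2)/min(f1, f2):.2f}")
```

A 3:2 ratio of the two strongest peaks, as recovered here for the synthetic input, is the signature used below to infer that the dominant periodogram frequency is a harmonic of the true rotation frequency rather than the fundamental.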
Monte Carlo simulations, run within pcr04, were used to determine errors in frequency and phase for the strongest periodogram peak for each star and set of light curves, then propagated to determine the error in \(T_{0}\) corresponding to photometric minimum. The frequency and phase were allowed to vary independently during the simulations to determine errors, which were typically repeated 1000 times. ### Sdss j1252 For SDSS J1252, there are sufficient ULTRACAM data to uniquely determine the photometric period and provide an improved ephemeris. Light curves cover more than 30 epochs of its previously reported 317.3 s periodicity (frequency 272.3 d\({}^{-1}\), Reding et al., 2020), with an observational baseline of 136 d, spanning several 10\({}^{4}\) cycles at this frequency. In Figure 1 are shown the first and second \(g\)-band light curves obtained for this white dwarf, from which can be discerned that there are _two distinct set of minima (and maxima)_, each manifesting every 317.3 s, and thus revealing an actual photometric period of 634.6 s. The ULTRACAM \(g+r\) co-added light curves were analyzed using data from all three observing runs. The resulting best periodogram is plotted in Figure 2, where the strongest peak is identical to the frequency reported in the discovery paper; however, there is a second outstanding signal near 408 d\({}^{-1}\). This secondary peak is not as well-determined as the 272.3 d\({}^{-1}\) signal, but these two frequencies appear to have a near an exact ratio of 3:2. Additionally, the periodogram also reveals a weak-amplitude peak at roughly 816 d\({}^{-1}\), and the ratio of these three frequencies is 6:3:2. These periodicities are far shorter than any possible range of orbital signals originating from non-degenerate companions, as the lowest-frequency periodogram signal, at 272.3 d\({}^{-1}\), corresponds to a Keplerian orbit near 7 \(R_{\star}\) (seven white dwarf radii), deep within the Figure 1: Approximately 1 h of ULTRACAM \(g\)-band light curves for SDSS J1252, each taken on a different night. The data are plotted as observed in sequence, each light curve normalized, offset vertically by \(\pm 0.1\), and shifted horizontally to exhibit the same photometric phase. Visual inspection reveals that adjacent minima are unequal in depth, which is also true but more subtle for adjacent maxima. These real-time observations were the first indication that SDSS J1252 has two star spots, 180\({}^{\prime}\)out-of-phase, and the true rotation period is twice as long as the 317.3 s previously reported by Reding et al. (2020). nominal Roche limit. Only a compact object could survive at this orbital distance, such as those in close but detached, double white dwarf binaries. And while there are a few such systems known to have orbital periods comparable to the frequencies exhibited by SDSS J1252, their light curves reveal ellipsoidal modulation owing to tidal distortions in the primary white dwarfs (Kilic et al., 2011; Brown et al., 2011; Burdge et al., 2019). Furthermore, these rare, deformed degenerates are all helium-core white dwarfs less massive than 0.3 M\({}_{\odot}\), and thus significantly more prone to tidal distortion than SDSS J1252 and the DARE stars, which are considerably more compact (Walters et al., 2021; Reding et al., 2020; Gansicke et al., 2020). Therefore, the light curve and resulting periodogram of SDSS J1252 are interpreted as arising from a single star. 
It is reasonable to assume the \(T_{\rm eff}\approx 8000\) K white dwarf has a fixed spin period (no differential rotation), and no stellar pulsations, as it is far from the hydrogen atmosphere instability strip (Romero et al., 2022). The observed signals are then interpreted as the 2\({}^{\rm nd}\), 3\({}^{\rm rd}\), and 6\({}^{\rm th}\) harmonics of the stellar rotation frequency. The periodogram signals and their amplitudes reflect the fitting of sinusoids to the light curve, where the two highest have a 3:2 ratio in order to generate both the principal flux variation at 272.3 d\({}^{-1}\), and the alternating minima via the interference with the 408 d\({}^{-1}\) frequency. The revised stellar rotation frequency and associated uncertainty were determined by dividing the 2\({}^{\rm nd}\) harmonic frequency by two, yielding 136.15032(2) d\({}^{-1}\) (equivalent to a period of 634.59273(9) s). Although essentially no amplitude (or power) is seen in the periodogram at the frequency inferred to be the fundamental, this is an expected consequence of the light curve morphology and sinusoidal fitting (VanderPlas, 2018). A similar analysis was attempted using the _TESS_ light curve, both on its own and in combination with ULTRACAM data. While the _TESS_ 120 s cadence samples 2.6\(\times\) faster than the peak periodogram frequency for SDSS J1252, and is thus above the Nyquist rate, the data quality is relatively poor (Section 2.3). No time-series analysis utilizing _TESS_ led to any improvement in frequency or phase precision, and therefore all calculations for SDSS J1252 are based solely on the ULTRACAM observations. ### LP 705-64 and WD J1430 _TESS_ data were the initial means of identifying the stellar rotation rates in these two DAHe white dwarfs (Reding et al., 2023). However, similar to what is observed in SDSS J1252, the ULTRACAM light curve of LP 705-64 exhibits two unequal minima in a single cycle, and thus the period determined by _TESS_ represents one half its spin period (see Figure 3). For this source, the ULTRACAM data alone do not span a sufficient number of cycles to determine the photometric period with precision comparable to _TESS_. A significant improvement in the _TESS_ ephemeris is achieved using the combination of ULTRACAM \(g+r\) co-added light curves and _TESS_, resulting in a periodogram with a single peak at 39.65325(3) d\({}^{-1}\) [cf. 39.653(2) d\({}^{-1}\); Reding et al., 2023], and a corresponding higher precision in phase. However, the true spin period must be calculated from this frequency by recognizing it is the 2\({}^{\rm nd}\) harmonic of the fundamental, which is 19.82662(1) d\({}^{-1}\). For WD J1430, the ULTRACAM light curves reveal a single maximum and minimum with one of the largest amplitudes observed to date for a DAHe white dwarf (5.8 per cent in the \(g\) band). Similar to LP 705-64, there are insufficient ULTRACAM data from which to derive a precise ephemeris for this source, and thus the combination of ULTRACAM and _TESS_ Sector 38 pdcsap data was utilized for this goal. Initially, the analysis of these combined datasets improved the precision of the periodogram frequency, but resulted in phase errors that were larger than those based on _TESS_ alone. Subsequently, these _TESS_ data were re-scaled (see Section 3.4) to more closely match those of the co-added \(g+i\)-band ULTRACAM data, and the resulting analysis marginally improved the uncertainty in phase.
Ultimately for this star, the best constraints were achieved by adding a third set of light curves into the periodogram analysis, using full-frame data from _TESS_ Sector 11, where fluxes were extracted based on PSF-subtracted images following Han & Brandt (2023). ### Light curve morphologies, ephemerides, spectral phases Based on the preceding analysis, and to reveal light curve structures more precisely as a function of phase, the ULTRACAM multi-band data were phase-folded and binned using a weighted average onto regular grids. The resulting light curves are shown in Figure 3, where there are 80 phase bins for LP 705-64 and WD J1430 in all three channels. In the case of SDSS J1252, the short spin period indicates that a single ULTRACAM frame has a phase width of 0.016 in the green and red channels, but 0.048 in the blue channel (owing to three co-adds). For this reason, the light curves of SDSS J1252 were re-sampled into 60 phase bins in \(g\) and \(r\), but only 20 bins in \(u\) band. The folded light curves for SDSS J1252 and LP 705-64 both exhibit alternating minima that are indicative of two distinct star spots 180\({}^{\circ}\) out-of-phase during rotation. While this behavior is not novel among magnetic white dwarfs, there appear to be only a few documented examples of white dwarf light curves where dipolar spots are suggested or required (Hermes et al., 2017; Kilic et al., 2019; Pshirkov et al., 2020). In contrast, the majority of magnetic white dwarf light curves Figure 2: Periodogram of SDSS J1252 based on three nights of ULTRACAM data using co-added, \(g+r\)-band light curves, with amplitudes plotted in grey. The data were bootstrapped 10 000 times to determine the amplitude above which a signal has only a 0.1 per cent chance of being spurious. This false alarm amplitude of 0.007734 is delineated by the green dotted line, where only the strongest peak is higher. However, the two frequencies with the largest periodogram peaks have a near-exact ratio of 3:2, and a weaker third peak is consistent with a frequency that is an integer multiple of both lower frequencies. For a fixed rate of stellar rotation, and despite a lack of significant periodogram power (amplitude), this result indicates the fundamental frequency is 136.150 d\({}^{-1}\). This is half of the frequency with the largest periodogram signal, and is consistent with two distinct, out-of-phase star spots tracing the observed light curve morphology. The first six harmonics of the fundamental are marked with blue triangles, where only those showing noteworthy amplitude are numbered. seem to be broadly consistent with sinusoidal (single spot) morphologies (Brinkworth et al., 2013), including the prototype DAHe star GD 356 (Walters et al., 2021). However, it should be noted that incomplete phase coverage and modest photometric precision can inhibit the detection of subtle light curve features (e.g. the discovery light curve of SDSS J1252, and the _TESS_ light curve of LP 705-64; Reding et al., 2020, 2023). To calculate accurate ephemerides based on the best precision achieved here, \(T_{0}\) was chosen from an ULTRACAM light curve located nearest to the middle of the temporal coverage for each star, and where a feature could be unambiguously identified as a true photometric minimum.
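A minimal sketch of the fold-and-bin procedure described above is given below. It assumes an ephemeris (t0, period) and inverse-variance weighting; the function and parameter names are illustrative, and every phase bin is assumed to be populated.

```python
import numpy as np

def fold_and_bin(t, flux, err, t0, period, nbins=80):
    """Fold a light curve on (t0, period) and average into regular phase bins,
    weighting each point by its inverse variance (cf. Section 3.3).
    Assumes every phase bin contains at least one point."""
    phase = ((t - t0) / period) % 1.0
    edges = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.digitize(phase, edges) - 1
    w = 1.0 / err**2
    binned_flux = np.array([np.average(flux[idx == k], weights=w[idx == k])
                            for k in range(nbins)])
    binned_err = np.array([1.0 / np.sqrt(w[idx == k].sum()) for k in range(nbins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, binned_flux, binned_err
```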
The periodogram analysis of the preceding sections then results in the following best ephemerides for all three DAHe white dwarfs, where zero phase corresponds to actual photometric minimum, and the periods are accurate and precise determinations of their spins: \[\mathrm{BJD_{TDB}}\mathrm{(SDSS\,J1252)}=2459313.809921(6)+0.007344823(1)\;E\] \[\mathrm{BJD_{TDB}}\mathrm{(LP\,705\!-\!64)}=2459444.92339(6)+0.05043723(3)\;E\] \[\mathrm{BJD_{TDB}}\mathrm{(WD\,J1430)}=2459696.8239(3)+0.05999529(3)\;E\] As mentioned earlier, the ephemeris of SDSS J1252 is based solely on ULTRACAM, whereas those of LP 705-64 and WD J1430 are based on the combination of _TESS_ and ULTRACAM. From these ephemerides, forward and backward extrapolations can be made and compared with published, time-varying spectra of LP 705-64 and WD J1430, but in the case of SDSS J1252 there is insufficient time resolution to compare its spectroscopic variations in phase with photometry (Reding et al., 2020, 2023). Starting with the more straightforward case of WD J1430, which exhibits a single spot, the photometric minimum (phase 0) occurred at \(\mathrm{BJD_{TDB}}=2459049.056\pm 0.001\) nearest the two epochs of the published spectroscopy. This notably falls close to halfway in time between the two spectra plotted and described by Reding et al. (2023) as 'emission' and 'absorption'. Specifically, and taking the reported epochs of observation at face value, these spectra correspond to photometric phases \(0.720\pm 0.007\) and \(0.221\pm 0.007\), respectively, and thus both occur close to the average stellar flux. While these two spectral phases are reported as potentially representing a maximum and minimum magnetic field strength, this interpretation seems uncertain, especially if other spectral phases exhibit weaker emission or absorption, where Zeeman splitting is not a robust diagnostic. Superficially interpreting these spectral phases of WD J1430 as the highest and lowest field strength would be somewhat the inverse of that observed for the prototype DAHe GD 356, where there are multiple, full-phase coverage observations using both spectroscopy Figure 3: Normalized and phase-folded ULTRACAM light curves for all three stars, where the blue points are \(u\) band, the green points are \(g\) band, and the red points are \(r\) or \(i\) band. The time-series data have been folded on the periods listed at the top of each panel, and have been re-sampled onto regular grids. There are 80 phase bins for LP 705-64 and WD J1430 in all three filters, while for SDSS J1252 there are 60 phase bins for \(g\) and \(r\), but only 20 for the \(u\) band (Section 3.3). These light curves highlight the asymmetric, anti-phase modulation from two starspots in the case of both SDSS J1252 and LP 705-64. and spectropolarimetry. For this well-studied case, the magnetic field variations, both from the observed parallel component and using Zeeman splitting, display a peak and trough near phases 0.3 and 0.8, respectively, from photometric _minimum_ (Walters et al., 2021). For WD J1430, existing data may not probe the magnetic field with sufficient sensitivity or phase coverage, and hence these comparative results should be considered preliminary at best. In the case of LP 705-64, the situation is more complex. Depending on the spot sizes, one might expect _two minima and maxima in both equivalent width and magnetic field variations, one pair associated with each spot_. However, there are only two epochs of spectroscopy plotted and described by Reding et al.
(2023), and here again a comparison must be considered not only preliminary, but possibly inapt for the aforementioned reasons. Again taking the published epochs at face value, and where the deeper of the two light curve minima is zero phase, the spectrum shown with the broader Zeeman splitting corresponds to photometric phase \(0.048\pm 0.001\). The two reported spectral epochs were chosen so as to be separated by exactly one half spin cycle (Reding et al., 2023), so that any further interpretation would reflect this selection. While the updated photometric ephemeris is sufficient to predict precise spin phases for spectroscopic observations of LP 705-64, their potential correlation is not yet straightforward. It has not yet been demonstrated that high and low Zeeman splitting might be in-phase with photometric extrema (cf. GD 356; Walters et al., 2021). The two published spectra may not represent precise peak behavior, and there may be some uncertainty in the epoch dates reported. Measurements of both equivalent width and magnetic field strength at all rotational phases would eliminate these ambiguities. The sparse set of published spectroscopic measurements of DAHe white dwarfs currently prevents a more robust correlation of photometric and spectroscopic phases. ### Multi-band light curve amplitudes To better understand the nature of the spots and their associated magnetic regions, multi-band light curves for DAHe white dwarfs were used to calculate the photometric amplitude for each star in each observed bandpass. For the three stars observed by ULTRACAM as well as GD 356, this was done by taking only the _strongest_ signal in the best periodogram for each star (Figures 2 and 3), and determining the sinusoidal amplitude for each light curve in each bandpass at that frequency. In this way, all light curve amplitudes are evaluated by their strongest sinusoidal components, including those stars with little or no periodogram power at their true rotation frequency. Light curve amplitudes and uncertainties were determined with period04, using fixed frequencies and Monte Carlo simulations. Table 2 lists the multi-band photometric amplitudes for the four DAHe white dwarfs with available data, the three stars reported here and the prototype GD 356 (Walters et al., 2021). It is interesting to note that the amplitude of photometric variation is relatively small in GD 356 (on the order of 1 per cent; Brinkworth et al., 2004), compared to the newer DAHe stars with several per cent variations in their light curves. Although there is not yet any published multi-band photometry of SDSS J1219, its light curve amplitude is around 3 per cent in the \(B\) band (Gansicke et al., 2020; roughly halfway between the \(u\) and \(g\) filters), and thus at least double that of GD 356 at similar wavelengths. It is also possible that WD J161634.36+541011.51 (Manser et al., 2023; hereafter WD J1616) has a photometric amplitude comparable to the strongest found here. These larger amplitudes are likely the result of observational bias, which enhances their detection as variables in surveys such as _Gaia_ and ZTF (e.g. Guidry et al., 2021). It should be noted that WD J1430 is positioned in a crowded field at Galactic latitude \(\beta<4\) deg, and the _TESS_ fluxes are dominated by other sources in the photometric aperture (pipeline keyword crowd-sap = 0.032).
This pipeline metric implies that only 3.2 per cent of the flux in the extracted aperture is likely from the white dwarf, and subsequently the extracted flux has been dramatically reduced to recover a more accurate stellar brightness in the pdcsap light curve (Stumpe et al., 2012). While the ULTRACAM observations independently confirm the stellar spin frequency identified by periodogram analysis of the _TESS_ light curve, the pipeline fluxes are simply too noisy (see Section 2.3) and likely offset significantly from the true mean flux. Thus, no reliable variability amplitude can be deduced for the _TESS_ bandpass (see footnote to Table 2). The relative strengths of the photometric variations in DAHe stars appear to follow a trend as a function of wavelength, with increasing amplitudes towards the blue. Figure 4 plots the strengths of the multi- \begin{table} \begin{tabular}{l r r r} \hline SDSS J1252 & LP 705-64 & WD J1430 & GD 356 \\ \hline \(u\): \(5.72\pm 0.23\) & \(u\): \(4.48\pm 0.17\) & \(u\): \(6.62\pm 0.14\) & \(u\): \(1.50\pm 0.06\) \\ \(g\): \(4.97\pm 0.06\) & \(g\): \(3.92\pm 0.06\) & \(g\): \(5.78\pm 0.04\) & \(g\): \(1.22\pm 0.04\) \\ \(r\): \(2.99\pm 0.06\) & \(r\): \(2.38\pm 0.05\) & & \(V\)+\(R\): \(0.81\pm 0.02\) \\... &... & \(i\): \(3.84\pm 0.07\) &... \\ \(T\): \(2.05\pm 0.30\) & \(T\): \(1.60\pm 0.17\) &... & \(T\): \(0.62\pm 0.02\) \\ \hline \end{tabular} _Note._ The \(u\)-band amplitude for GD 356 was obtained from a light curve taken at the WHT using PF-QHY on 2020 Jun 21. \(A_{T}\) is the flux variation amplitude in the _TESS_ band, for which there is no corresponding entry for WD J1430. While both SDSS J1252 and WD J1430 are similarly faint (\(G\approx 17.5\) mag) and thus near the limit of what _TESS_ can observe, the latter source has a target-to-total aperture flux of only 0.03 (cf. 0.89 for SDSS J1252, 0.56 for LP 705-64, and 0.87 for GD 356; Walters et al., 2021). Thus the _TESS_ amplitude for WD J1430 is likely unreliable. \end{table} Table 2: Multi-wavelength variability amplitudes \(A_{\lambda}\) in per cent flux. Figure 4: The multi-band photometric variability amplitudes of the three DAHe white dwarfs with ULTRACAM light curves, together with similar measurements for the prototype GD 356 (Walters et al., 2021). The adopted central wavelengths are 3600, 4700, 6200, and 7500 Å for \(ugri\) (Fukugita et al., 1996; Gunn et al., 2006), and 7900 Å for _TESS_ (Ricker et al., 2015). All data were analyzed in a uniform manner for this plot and the Table 2 values, using period04 as described in Section 3.4. The light curve amplitudes are normalized to the \(g\)-band value for a given star, with a horizontal offset of \(\pm 100\) Å applied to separate the data points, and errors given for the sinusoidal fits to individual bandpass data. band variability for the four white dwarfs, where the photometric amplitude for each bandpass is plotted relative to the \(g\) band for each star. Three of the four stars have data in \(ugr\) (or similar) bandpasses, and all three exhibit a relatively tight correlation in their amplitude ratios as a function of these three wavelength ranges. Three stars have reliable _TESS_ amplitudes where again the same behavior is evident, suggesting a phenomenon associated with this emerging spectral family. Based on the narrow range of \(T_{\rm eff}\) among DAHe white dwarfs, Figure 4 implies their spots have similar spectral properties. This indication is remarkable given the range of DAHe rotation periods and especially magnetic field strengths.
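The fixed-frequency amplitude measurement described in Section 3.4 can be sketched as a simple least-squares sinusoid fit with a Monte Carlo error estimate. This mimics, but is not, the period04 procedure; the function name and default settings are illustrative.

```python
import numpy as np

def sinusoid_amplitude(t, flux, err, freq, n_mc=1000, seed=0):
    """Least-squares amplitude of a sinusoid at a *fixed* frequency (d^-1 if t is
    in days), with a Monte Carlo error estimate from resampled noise."""
    X = np.column_stack([np.sin(2 * np.pi * freq * t),
                         np.cos(2 * np.pi * freq * t),
                         np.ones_like(t)])

    def amp(y):
        coeff, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.hypot(coeff[0], coeff[1])   # combine sine and cosine terms

    rng = np.random.default_rng(seed)
    sims = [amp(flux + rng.normal(0.0, err)) for _ in range(n_mc)]
    return amp(flux), np.std(sims)
```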
To date, only relatively weak Balmer features have been detected blueward of H\(\beta\) in any DAHe star, and yet the photometric variability remains strongest at shorter wavelengths. This is consistent with the previous finding that the photometric variability arises from changes in the stellar continuum, and not from fluctuations in the Balmer emission lines (Walters et al., 2021). In \(2002-2003\), the \(V\)-band (5500 Å) light curve amplitude of GD 356 was recorded as 0.2 per cent (Brinkworth et al., 2004). But in 2020 the photometric variations observed using an SDSS \(g\)-band filter (4700 Å), and a \(V+R\) filter (6200 Å), were found to be 4\(-\)6\(\times\) higher (Table 2). It thus seems possible that the starspot has evolved during this time frame; however, all six emission features within H\(\alpha\) and H\(\beta\) seem consistent over at least 35 years (Greenstein and McCarthy, 1985; Ferrario et al., 1997; Walters et al., 2021). ### Simple spot modeling A basic set of spot models and corresponding light curves were constructed to better constrain the observed stellar surfaces as a function of rotation, with a particular motivation towards those stars where two spots appear to be necessary. Each white dwarf was treated as a \(T_{\rm eff}=8000\) K blackbody, with one or two circular, isothermal spots whose single temperature is controlled by a scaling factor \(f(T)\). Where required by the light curve morphology, two identical spots were placed on the surface at antipodal points. The other model parameters are the inclination (\(i\)) of the stellar rotation axis to the observer, the spot colatitude (\(\theta\)), and the spot angular radius (\(\alpha\)). It is commonly acknowledged that such models are potentially degenerate when the values of \(i\) and \(\theta\) are interchanged (e.g. Wynn and King, 1992). For each star, a grid of models was generated with the parameter ranges and step sizes given in Table 3. For small angular radii (\(\alpha<20^{\circ}\)) the spot temperature range was expanded because, in order to reproduce a fixed photometric amplitude, the smallest spots must be the darkest. The root mean square (RMS) difference between the model and observed fluxes in a given bandpass as a function of spin phase was computed and used to identify a best-fit model for each \(\alpha\), although in practice there is nearly always a range of models that yield similarly satisfactory results. Figure 5: Illustrative spot models fitted to ULTRACAM light curves. The upper panels plot the \(g+r\)-band data for LP 705-64, which exhibits the more extreme depth change between its two minima (cf. SDSS J1252 in Figure 3), and for which the simple models calculated here are moderately deficient. In the top panel are shown the RMS difference minimum models over a broad range of (fixed) spot sizes, all of which result in the highest inclination (or colatitude). The reason these maximum inclination models fit the data best is illustrated, by contrast, in the middle panel, where a representative model at lower inclinations is shown for a (fixed) spot radius of 40\({}^{\circ}\). In these middle panel models, the resulting secondary minima are decreasing in depth, but this causes an increasing shift in the phase positions of the predicted flux maxima (toward phase 0.5). The bottom panel plots the \(g+i\)-band light curve of WD J1430 with analogous models that demonstrate a spot radius smaller than around 20\({}^{\circ}\) is insufficient to reproduce the observed variations.
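To make the geometry of the Section 3.5 models concrete, the sketch below computes a rotational light curve for a blackbody star carrying one or two antipodal, circular, isothermal spots. It is a simplified stand-in for the models used here (single-wavelength blackbody intensity, no limb darkening), and all parameter names and default values are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

H_PL, C_L, K_B = 6.626e-34, 2.998e8, 1.381e-23

def planck(wav, T):
    """Blackbody intensity at wavelength wav (m) and temperature T (K)."""
    return (2 * H_PL * C_L**2 / wav**5) / (np.exp(H_PL * C_L / (wav * K_B * T)) - 1.0)

def spot_light_curve(phases, incl, colat, alpha, f_T, wav=470e-9,
                     T_eff=8000.0, two_spots=True, ngrid=180):
    """Relative flux versus rotation phase for a blackbody star with one or two
    antipodal, circular, isothermal spots of temperature f_T * T_eff."""
    th = np.linspace(0.0, np.pi, ngrid)            # surface colatitude
    ph = np.linspace(0.0, 2.0 * np.pi, 2 * ngrid)  # surface longitude (co-rotating)
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    dA = np.sin(TH)                                # area element on a uniform grid

    # Temperature map: spots are fixed in the co-rotating surface coordinates.
    T_map = np.full(TH.shape, T_eff)
    spots = [(colat, 0.0)] + ([(np.pi - colat, np.pi)] if two_spots else [])
    for c, l0 in spots:
        cosd = np.cos(c) * np.cos(TH) + np.sin(c) * np.sin(TH) * np.cos(PH - l0)
        T_map[np.arccos(np.clip(cosd, -1.0, 1.0)) <= alpha] = f_T * T_eff
    intensity = planck(wav, T_map)

    n_obs = np.array([np.sin(incl), 0.0, np.cos(incl)])   # observer direction
    flux = []
    for p in phases:
        lon = PH + 2.0 * np.pi * p                 # rotate the star about its z axis
        rhat = np.stack([np.sin(TH) * np.cos(lon),
                         np.sin(TH) * np.sin(lon),
                         np.cos(TH)])
        mu = np.tensordot(n_obs, rhat, axes=1)     # foreshortening; <0 means hidden
        flux.append(np.sum(intensity * np.clip(mu, 0.0, None) * dA))
    flux = np.asarray(flux)
    return flux / flux.mean()

# Example: two antipodal dark spots viewed at high (but not 90 deg) inclination give
# alternating, unequal minima, qualitatively like SDSS J1252 and LP 705-64:
# lc = spot_light_curve(np.linspace(0, 1, 100, endpoint=False),
#                       incl=np.radians(80), colat=np.radians(60),
#                       alpha=np.radians(40), f_T=0.85)
```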
\begin{table} \begin{tabular}{c c c} \hline Parameter & Range & Step size \\ \hline \(i\) & \(10^{\circ}-90^{\circ}\) & \(10^{\circ}\) \\ \(\theta\) & \(10^{\circ}-90^{\circ}\) & \(2^{\circ}\) \\ \(\alpha_{1}\) & \(20^{\circ}-80^{\circ}\) & \(5^{\circ}\) \\ \(\alpha_{2}\) & \(5^{\circ}-20^{\circ}\) & \(5^{\circ}\) \\ \(f_{1}(T)\) & 0.99-0.48 & 0.01 \\ \(f_{2}(T)\) & 0.99-0.10 & 0.01 \\ \hline \end{tabular} \end{table} Table 3: Spot modeling parameter ranges and step sizes. While the modeling is relatively simple, a few basic results emerge. Small spots with \(\alpha\lesssim 15^{\circ}\) cannot generate a sufficiently large photometric amplitude for any of the three white dwarfs with ULTRACAM data, but otherwise the spot size is mostly unconstrained. Beyond this small angular size threshold, the combination of adjustable geometry and spot temperature permits sufficient model flexibility to achieve comparably good fits within a range of the other three parameters. Nevertheless, the spots must be at least modestly large; in terms of solid angle, covering several per cent or more of one hemisphere. In the case of WD J1430, which has the largest photometric variability amplitude, only spots with \(\alpha\geq 20^{\circ}\) can reproduce the observed flux changes. However, the models with the smallest RMS differences for the light curves with two minima exhibit clear shortcomings. The resulting inclinations (or colatitudes) tend toward \(90^{\circ}\), and consequently the model fits have equally deep light curve minima. These fitted parameters are driven in the direction of maximum inclination (or colatitude) because, while the lower \(i\) (or \(\theta\)) solutions can reproduce a shallower secondary minimum as observed, this particular shape exhibits light curve maxima whose phase positions are shifted towards the secondary minimum. Examples of these modeling outcomes are illustrated in Figure 5. Despite these limitations, the modeling demonstrates that the basic geometry of antipodal spots is essentially correct. However, in the context of the simple model assumptions, it is unlikely that there is a centered but tilted, symmetric dipolar arrangement for the dual-spotted stars. Instead, the spots may not be circular, and where two opposing spots are necessary each may be distinct in size, shape, or temperature; alternatively, the spots may be in a dipolar configuration that is offset from the rotational center of the star. ## 4 Discussion and Summary The discovery of DAHe white dwarfs whose light curves require two spots in a basic dipolar configuration, now totaling at least three systems (Manser et al., 2023), is a modest breakthrough in their characterization. It raises the immediate question of whether all DAHe stars have dual spots, which manifest as light curves with either one or two minima, depending on the viewing angle and spot orientation. As of this publication, there are now just over two dozen known DAHe stars, of which only six have robustly measured light curves (Reding et al., 2020; Gansicke et al., 2020; Reding et al., 2023; Manser et al., 2023). There is a seventh DAHe candidate with a well-measured _TESS_ light curve, SDSS J041246.85+754942.26, which currently lacks any type of magnetic field indication or upper limit (Tremblay et al., 2020; Walters et al., 2021). Of these seven objects, three of their light curves exhibit two photometric minima, and four are consistent with a single minimum.
With such small numbers and weak constraints on spot properties, a statistical assessment of the inferred viewing geometry is not possible, but the data to date are likely consistent with all class members having dipolar magnetic and spotted regions. If the simple modeling performed here is any indication, it may be that magnetic (spot) axes must be highly inclined towards the viewer (or equivalently have similar colatitudes) for both spots to transit, i.e. have ingress and egress as opposed to being partly visible at all times. The detection of photometric variability in GD 356 yielded limited results on its spot properties, where the size of the temperature-contrast surface was assumed to be identical to that of the magnetic region inferred from modeling of spectropolarimetry, around \(40^{\circ}\) (Brinkworth et al., 2004). Otherwise the modeling followed the same assumptions as those described in Section 3.5, and the mostly sinusoidal light curve was ultimately fitted to two sets of models, one with a dark spot near the rotational pole (low \(\theta\)) viewed at high inclination, and a second viewed near the axis of rotation (low \(i\)), but with high colatitude; an example of the degeneracy between \(i\) and \(\theta\). In the case that GD 356 has antipodal spots, in the former scenario the secondary spot can remain hidden from the observer at all spin phases, and in the latter scenario, it is possible for both spots to be partly visible at all times. If the prototype does indeed have two spots, the previous photometric modeling would disfavor the latter orientation, as it would result in some light curve impact from both spots. As with the DAHe prototype, it is tempting to co-identify the sizable spots with their magnetic and chromospherically active regions (Ferrario et al., 1997; Brinkworth et al., 2004). In one such picture, the spots are dark and magnetic regions underlying the chromospheric activity, so that the Balmer emission lines are at maximum brightness when the stellar continuum yields photometric minimum (Walters et al., 2021). This behavior may also be seen in SDSS J1252 and SDSS J1219 (Reding et al., 2020; Gansicke et al., 2020), but insufficient phase coverage and sampling prevent any certainty at present, and equivalent widths have not been determined for those stars. The results here for LP 705-64 and WD J1430 are currently ambiguous for similar reasons, and owing to additional complications. Interestingly, time-series spectroscopy for SDSS J1252, WD J1430, and WD J1616 suggests that their emission lines may effectively disappear at some phases (Reding et al., 2020, 2023; Manser et al., 2023), presumably when a spot or spots (and associated magnetic region) are out of view or have minimum visibility. However, as discussed in Section 3.3, this is likely an oversimplified picture; with the exception of GD 356, there is a distinct lack of magnetic field determinations across the full spin cycles of DAHe white dwarfs, and Zeeman splitting may be an ineffective tool for weak or transient emission features. At present, empirical metrics associated with published DAHe time-series spectroscopy are sparse, and it would be ideal for observers to provide both equivalent widths and central wavelengths for emission features over at least one full cycle with sufficient sampling.
In contrast to magnetic field strength estimates, which may not be obtainable at all spin phases via Zeeman splitting if magnetic regions rotate in and out of view, only the phase behavior of equivalent width has been robustly characterized, and only in the prototype (Walters et al., 2021). It is thus essential that full spectroscopic phase coverage of DAHe white dwarfs is carried out with these measurements in mind, and where spectropolarimetry will be more sensitive to magnetic field strength, particularly when emission or absorption features are weak. The dual-spotted nature of at least three DAHe white dwarfs has direct bearing on the hypothesis that a heated region can be caused by star-planet interactions such that a current loop is dissipated in one region of the star (e.g. the unipolar inductor; Li et al., 1998; Wickramasinghe et al., 2010). If such planetary interactions are in fact taking place within the strong magnetospheres of DAHe white dwarfs, they are unlike the interactions that lead to unipolar, Jupiter-Io footprint mechanisms (Goldreich and Lynden-Bell, 1969). Going forward, models that require the presence of closely orbiting and interacting planets, for which there is at present no empirical support in observations of DAHe stars, should require strong evidence to be re-considered. Given the lack of additional periodic signals and the compelling evidence of DAHe white dwarf clustering in the HR diagram (Walters et al., 2021; Reding et al., 2023; Manser et al., 2023), an intrinsic mechanism is the most likely source for the spotted regions and chromospheric activity. ## Acknowledgements J. Farihi is grateful to the Laboratory for Atmospheric and Space Physics at the University of Colorado Boulder, and the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara, for hosting during extended visits, and to J. S. Pineda for an illuminating discussion on the nature of chromospheres in sun-like and low-mass stars. The authors acknowledge the European Southern Observatory for the award of telescope time via program 150.2091. J. Farihi acknowledges support from STFC grant ST/R000476/1 and National Science Foundation Grant No. NSF PHY-1748958. S. P. Littlefair acknowledges the support of the STFC grant ST/V000853/1. N. Walters has been supported by a UK STFC studentship hosted by the UCL Centre for Doctoral Training in Data Intensive Science. S. G. Parsons acknowledges the support of a STFC Ernest Rutherford Fellowship. For the purpose of open access, the authors have applied a creative commons attribution (CC BY) license to any author accepted manuscript version arising. This paper includes data collected by the _TESS_ mission, which is funded by the NASA Explorer Program. ## Data Availability ULTRACAM data are available on reasonable request to the instrument team, while _TESS_ data are available through the Mikulski Archive for Space Telescopes.
2305.03024
A Lattice Chiral Boson Theory in $1+1$d
Chiral field theories describe large classes of matter, from the edges of Quantum Hall systems to the electroweak sector of the Standard Model, but defining them on the lattice has been an ongoing challenge due to a no-go theorem precluding free local models, the potential of symmetry anomalies, and sign problems. Some approaches define a $1+1$d chiral field theory as the edge of a $2+1$d system and argue that the edge decouples from the bulk, but this can be difficult to verify due to finite size effects and strong interactions. On the other hand, recent work has shown how to define the $2+1$d bulk theory as an exactly solvable model with zero correlation length, in which case the edge theory may be extracted exactly. We use these techniques to derive a lattice field theory on a $1+1$d spacetime lattice which carries an anomalous chiral $U(1)$ symmetry with zero chiral central charge. The lattice theory with anomalous chiral $U(1)$ symmetry is always gapless, regardless of lattice interactions. We demonstrate the chiral anomaly by coupling to a background gauge field, develop a field theory which demonstrates the chiral behavior, and show how to assemble a chiral, anomaly-free theory where the gauge field may be taken to be dynamical.
Michael DeMarco, Ethan Lake, Xiao-Gang Wen
2023-05-04T17:49:00Z
http://arxiv.org/abs/2305.03024v1
# A Lattice Chiral Boson Theory in \(1+1\)d ###### Abstract Chiral field theories describe large classes of matter, from the edges of Quantum Hall systems to the electroweak sector of the Standard Model, but defining them on the lattice has been an ongoing challenge due to a no-go theorem precluding free local models, the potential of symmetry anomalies, and sign problems. Some approaches define a \(1+1\)d chiral field theory as the edge of a \(2+1\)d system and argue that the edge decouples from the bulk, but this can be difficult to verify due to finite size effects and strong interactions. On the other hand, recent work has shown how to define the \(2+1\)d bulk theory as an exactly solvable model with zero correlation length, in which case the edge theory may be extracted exactly. We use these techniques to derive a lattice field theory on a \(1+1\)d spacetime lattice which carries an anomalous chiral \(U(1)\) symmetry with zero chiral central charge. The lattice theory with anomalous chiral \(U(1)\) symmetry is always gapless, regardless of lattice interactions. We demonstrate the chiral anomaly by coupling to a background gauge field, develop a field theory which demonstrates the chiral behavior, and show how to assemble a chiral, anomaly-free theory where the gauge field may be taken to be dynamical. Between the fact that the weak interaction couples left-handed and right-handed fermions differently and the appearance of some of the most striking quantum anomalies in chiral models, chiral quantum field theories (QFTs) have been the subject of enormous interest. An essential tool in the study of quantum field theories has been to regularize them on a lattice and to simulate their behavior [1]. Lattice QFT has proven enormously successful in providing insight into the non-perturbative dynamics of quantum field theories. However, the lattice unfortunately does not mix easily with chirality, and defining a chiral field theory on a lattice has remained a challenge. The first glimpse of the chiral problem was Nielsen and Ninomiya's theorem [2; 3; 4], which precludes the appearance of non-interacting chiral fermions on the lattice for a large class of theories. Instead, unwanted 'doubling' modes appear, rendering the theory non-chiral. A number of ingenious approaches have sidestepped this no-go theorem for anomalous or anomaly-free chiral models [5], including the overlap-fermion approach [5; 6; 7; 8], which computes correlation functions as the overlaps of successive ground states [9], and the related domain wall approach [10; 11; 12]. However, each of these comes with its own drawback. In the domain wall approach, the applied gauge field propagates in one higher dimension even for an anomaly-free theory, while the partition function in the overlap-fermion approach may not have an expression as a path integral of a local theory. Nonetheless, continued studies from both the lattice QFT and condensed matter communities then led to a new class of theories similar to the domain wall theory. In this 'mirror fermion' approach [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23], the \(1+1\)d lattice is understood as the edge of a \(2+1\)d manifold. The chiral theory appears as a gapless theory on one edge, while its mirror conjugate gapless theory resides on the other edge, with the bulk being gapped.
Taken together, the two gapless theories are non-chiral, but one seeks to introduce interactions [24; 25; 26; 27; 28] that gap out only the mirror edge, which is always possible for an anomaly-free chiral theory (an insight from topological order and symmetry protected topological (SPT) order in one higher dimension [29]). This approach introduces a compelling physical picture, but comes with its own restrictions: it can only regularize anomaly-free theories, and, more importantly, it relies on interactions to do so and its validity is hard to confirm. For both computational efficiency and insight into the underlying physics, we seek local lattice models of chiral QFTs that are regularized in the same dimension, whose chiral properties may be determined analytically, and that may be coupled to a gauge field. However, so far such theories have remained elusive. Recently, a whole new exact approach to gapped \(2+1\)d \(U(1)\) SPT phases on the lattice has been found [30; 31], and in this paper we exploit these results to define a chiral boson theory in \(1+1\)d. Our approach is most similar to the mirror-fermion approach above, but it exhibits a critical feature: despite containing strong interactions, the \(2+1\)d bulk theory is exactly solvable with zero correlation length and so _the edge may be explicitly decoupled from the bulk_. This exact solubility of the bulk model leads to a \(1+1\)d edge theory, which is local, well-defined, and contains a \(U(1)\) 't Hooft anomaly that can be seen analytically. This makes it easy to verify that the theory has the correct chiral behavior, and makes our model of considerable use for the study and simulation of chiral QFTs. Because of the \(U(1)\) 't Hooft anomaly, we conjecture that our \(1+1\)d model is always gapless regardless of lattice interactions, as long as the \(U(1)\) symmetry is not explicitly broken on the lattice. It is quite striking to see a lattice model which remains gapless for any \(U(1)\)-symmetry-preserving local interaction. The \(1+1\)d chiral boson theory we present also paves the way for simulation of more complicated chiral QFTs, including in higher dimensions and for nonabelian symmetries. The simplest extension of this theory is a chiral fermion theory that may be obtained by introducing a spin structure. Most importantly, we hope that this result will spur continued collaboration between the condensed matter and lattice QFT communities on these fascinating theories. The extremely useful properties of the theory we study here follow from the fact that it is a fixed-point theory. The key to writing a fixed-point theory on the lattice was first discovered in [32] but not implemented until recently [30; 31]: in order to write down the topological action which produces a fixed-point theory, we must allow discontinuous functions of the field variables. In turn, once we have a fixed-point topological action in \(2+1\)d, we have a gapped bulk and a gapless \(1+1\)d edge which decouple exactly, since the penetration of the gapless mode into the bulk must be zero at the fixed point. This is what enables us to write down the purely local \(1+1\)d model and study its properties. That we encounter physical quantities that are not continuous functions of the field variables in fixed-point theories follows from a simple argument. Consider, as we will shortly, a lattice QFT consisting of \(U(1)\) variables on the sites of a lattice. The space of field configurations is \(U(1)^{n_{\text{sites}}}\).
We will need a function \(\rho_{v}\) which indicates the vortex number on each plaquette. In a fixed-point theory, the output of \(\rho_{v}\) should be an integer for each plaquette, i.e. an element of \(\mathbb{Z}^{n_{\text{plaquettes}}}\). As a function from a connected manifold, \(U(1)^{n_{\text{sites}}}\), to a discrete space, \(\mathbb{Z}^{n_{\text{plaquettes}}}\), \(\rho_{v}\) must be either discontinuous or constant, and constant would be useless. Hence when describing vortices in a fixed-point theory, we should allow for discontinuous physical quantities; this holds for topological defects in many theories. Now let us turn to the model. We consider a spacetime lattice with sites labeled by \(i\) and a \(U(1)\) variable \(\phi_{i}\) on each site. To save many factors of \(2\pi\), we work with the \(\phi_{i}\) quantized to unity, not \(2\pi\), so that all functions of \(\phi_{i}\) must be invariant under \(\phi_{i}\to\phi_{i}+n_{i}\), with \(n_{i}\in\mathbb{Z}\). We implement this as a gauge "rotor redundancy": \[\phi_{i}\to\phi_{i}+n_{i} \tag{1}\] and will ensure that all physical quantities are invariant and the path integral measure is gauge-fixed. The simplest way to define the theories we describe is to imbue all lattices with a branching structure and use the differential \(d\) and cup product \(\cup\) from algebraic topology. We will not review the details of the formalism here (see the supplemental material of [30] for full details); instead we need only the following: A field which assigns a variable to lattice sites, like \(\phi_{i}\), is a 0-cochain. A field which assigns a variable to all the \(m\)-dimensional cells of the lattice is an \(m\)-cochain. An action must assign a real number to the three-dimensional cells of our lattice and so is a 3-cochain. Our task is to construct a 3-cochain from the 0-cochain \(\phi_{i}\). We use the cup product, which takes an \(m\)-cochain \(a_{m}\) and an \(n\)-cochain \(b_{n}\) to an \((m+n)\)-cochain \(a_{m}\cup b_{n}\) (we will abbreviate \(a_{m}\cup b_{n}\) as \(a_{m}b_{n}\)), and the lattice differential \(d\), which takes an \(m\)-cochain \(a_{m}\) to an \((m+1)\)-cochain \(\mathrm{d}a_{m}\) and satisfies \(\mathrm{d}^{2}=0\). On the lattice, \((\mathrm{d}\phi)_{ij}=\phi_{i}-\phi_{j}\). Furthermore, we will write out the most important equations, including the definition of the chiral \(1+1\)d model, with explicit lattice indices. We now return to the definition of a vortex referenced earlier. A branch cut in the field \(\phi_{i}\) on the link \(\langle i,j\rangle\) is labeled by: \[b_{ij}=\lfloor(\mathrm{d}\phi)_{ij}\rceil=\lfloor\phi_{i}-\phi_{j}\rceil \tag{2}\] where \(\lfloor x\rceil\) denotes the nearest integer to \(x\). Here, \(b\) takes a non-zero value only on the links which cross a branch cut. Now, \(\mathrm{d}b=\mathrm{d}\lfloor\mathrm{d}\phi\rceil\) takes non-zero values only at the end of a branch cut, i.e. a vortex. Including a minus sign by convention, we define the vortex density in two dimensions as: \[\rho_{v}=-\mathrm{d}\lfloor\mathrm{d}\phi\rceil =-\Big{(}\lfloor\phi_{i}-\phi_{i+x}\rceil+\lfloor\phi_{i+x}- \phi_{i+x+y}\rceil\] \[-\lfloor\phi_{i+y}-\phi_{i+x+y}\rceil-\lfloor\phi_{i}-\phi_{i+y} \rceil\Big{)} \tag{3}\] where we have written the term out explicitly on a square plaquette (see Figure 1 and Appendix C).
One can think of \(\rho_{v}\) as counting the branch cuts around the plaquette; it will be non-zero if a branch cut ends within the plaquette, which is when there is a vortex in the plaquette. Note that, while \(b\) is not invariant under (1), \(\rho_{v}\) is, as \(\rho_{v}\to-\mathrm{d}\lfloor\mathrm{d}\phi+\mathrm{d}n\rceil=\rho_{v}-\mathrm{d}^{2}n=\rho_{v}\). In three dimensions, we can define a vortex current in the same way: \[j_{v}=\star(-\mathrm{d}\lfloor\mathrm{d}\phi\rceil) \tag{4}\] where we have introduced the lattice Hodge star operator. \(j_{v}\) is also invariant under (1). The starting point of our \(1+1\)d model is a \(2+1\)d fixed-point model with Hall conductance of \(2k\), \(k\in\mathbb{Z}\), derived in [31]. Let \(\mathcal{N}^{3}\) be a three-dimensional space-time lattice. The action is: \[S_{k}[\phi]=-2\pi\mathrm{i}k\int_{\mathcal{N}^{3}}(\mathrm{d} \phi-\lfloor\mathrm{d}\phi\rceil)\cup\mathrm{d}(\mathrm{d}\phi-\lfloor\mathrm{d }\phi\rceil)\\ =2\pi\mathrm{i}k\int_{\mathcal{N}^{3}}\mathrm{d}\phi\cup\mathrm{d }\lfloor\mathrm{d}\phi\rceil=-2\pi\mathrm{i}k\int_{\mathcal{N}^{3}}\mathrm{d} \phi\cup\rho_{v} \tag{5}\] where we used the fact that \(\mathrm{d}^{2}=0\), and that the action appears as \(e^{iS}\), to simplify the action. Here "\(\int_{\mathcal{N}^{3}}\)" means Figure 1: The model we study works on any lattice with a branching structure, but we will write down explicit expressions on this square lattice. evaluation against a generator of the top cohomology of the lattice \(\mathcal{N}^{3}\). The full path integral is: \[Z=\int D\phi e^{\mathrm{i}S_{k}[\phi]}\qquad\quad\int D\phi=\prod_{i}\int_{- \frac{1}{2}}^{\frac{1}{2}}\mathrm{d}\phi_{i} \tag{6}\] where the integral measure is gauge-fixed under (1). The most important aspect of the action (5) is that it is a total derivative, i.e. a surface term. It vanishes (mod \(2\pi\mathrm{i}\)) on a closed manifold. Hence we may evaluate it on a manifold \(\mathcal{N}^{3}\) with boundary \(\mathcal{M}^{2}=\partial\mathcal{N}^{3}\) to obtain a theory solely on \(\mathcal{M}^{2}\): \[S_{k}=\int_{\mathcal{M}^{2}}2\pi\mathrm{i}k\phi\mathrm{d}\lfloor\mathrm{d} \phi\rceil-L(\phi)=\int-2\pi\mathrm{i}k\phi\rho_{v}-L(\phi) \tag{7}\] where we have added a possible additional non-topological term \(L(\phi)\). This is the model which we wish to present and, together with its gauged version we will see later, is the _main result of this paper_. Writing out the indices for a square lattice, the action is: \[S_{k}[\phi]=\sum_{i\in\mathcal{M}^{2}}-2\pi\mathrm{i}k\phi_{i} \Big{(}\lfloor\phi_{i}-\phi_{i+x}\rceil+\lfloor\phi_{i+x}-\phi_{i+x+y}\rceil\\ -\lfloor\phi_{i+y}-\phi_{i+x+y}\rceil-\lfloor\phi_{i}-\phi_{i+y} \rceil\Big{)}-L(\phi_{i}) \tag{8}\] where \(i\) sums over the sites of the lattice. Let us understand the properties of (7) when \(L(\phi)=0\). Under the redundancy (1), the action is invariant, since \(\rho_{v}\) is invariant and \(n\rho_{v}\) is integer-valued and so does not affect the exponential. The action also has particle-hole symmetry \(\phi\to-\phi\) and inherits the translation and rotation symmetry that the underlying lattice has. The term \(2\pi\mathrm{i}k\phi\mathrm{d}\lfloor\mathrm{d}\phi\rceil\) has an unusual \(U(1)\) symmetry (any additional terms \(L(\phi)\) should have the usual symmetry \(L(\phi+\theta)=L(\phi)\)).
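As a concrete numerical check of eq. (3) and of the invariance of \(\rho_{v}\) under the rotor redundancy (1), the following sketch evaluates the vortex density on a periodic square lattice. This is an illustrative script, not code from the paper; the lattice size and field configuration are arbitrary.

```python
import numpy as np

def vortex_density(phi):
    """rho_v = -d⌊d phi⌉ evaluated on every square plaquette of a periodic lattice,
    following eq. (3); ⌊x⌉ denotes nearest-integer rounding."""
    bx = np.round(phi - np.roll(phi, -1, axis=0))   # ⌊phi_i - phi_{i+x}⌉
    by = np.round(phi - np.roll(phi, -1, axis=1))   # ⌊phi_i - phi_{i+y}⌉
    return -(bx + np.roll(by, -1, axis=0) - np.roll(bx, -1, axis=1) - by)

rng = np.random.default_rng(1)
phi = rng.random((16, 16))                           # U(1) variables in [0, 1)

rho = vortex_density(phi)
assert rho.sum() == 0                                # closed surface: zero net vorticity

n = rng.integers(-3, 4, size=phi.shape)              # rotor redundancy phi -> phi + n
assert np.array_equal(vortex_density(phi + n), rho)  # rho_v is invariant under (1)
```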
Under \(\phi\to\phi+\theta\), with \(\theta\) a global constant, the action transforms as: \[S_{k}[\phi]\to S_{k}[\phi]-2\pi\mathrm{i}k\theta\int_{\mathcal{M}^{2}}\rho_{v} \tag{9}\] If \(\mathcal{M}^{2}\) is closed, then the total vortex number is zero, as \(\int_{\mathcal{M}^{2}}\rho_{v}=-\int_{\mathcal{M}^{2}}\mathrm{d}\lfloor \mathrm{d}\phi\rceil=0\) by summation by parts, and so the theory is \(U(1)\) symmetric. If \(\mathcal{M}^{2}\) has a boundary \(\mathcal{B}^{1}=\partial\mathcal{M}^{2}\), then the action is not \(U(1)\) invariant. Due to this unusual (anomalous) \(U(1)\) symmetry, adding any local \(L(\phi)\) term cannot cancel the \(2\pi\mathrm{i}k\phi\mathrm{d}\lfloor\mathrm{d}\phi\rceil\) term. The breaking of \(U(1)\) symmetry in the presence of a boundary is our first glimpse of the \(U(1)\) anomaly. This perspective on anomalies, as symmetries that break on the boundary, is more familiar to the condensed matter community. It should be considered a consequence of the familiar expression of the 't Hooft anomaly, in which the anomalous symmetry cannot be gauged (which we investigate next), since a background gauge field could impose an electric potential that would create a boundary. Let us gauge the theory by coupling it to a background gauge field to see the anomaly in a more usual light. At first glance, it is not clear how to gauge the action (7), as it contains a term \(\phi\) without a derivative. We could proceed by integrating by parts to rewrite the action in terms of "\(\mathrm{d}\phi\lfloor\mathrm{d}\phi\rceil\)," but this term is not manifestly invariant under (1) and gauging it will break invariance under (1). The obstruction to gauging is a common indication of a 't Hooft anomaly. One way to avoid this obstruction is to return to the \(2+1\)d model (5). Gauging such a model has been done in [31] (see also Appendix C of [33]). The resulting action is: \[S_{k}[\phi;A]=-2\pi\mathrm{i}k\int_{\mathcal{N}^{3}}\Big{\{}( \mathrm{d}\phi-A)(\mathrm{d}A-\lfloor\mathrm{d}A\rceil)\\ -\lfloor\mathrm{d}A\rceil(\mathrm{d}\phi-A)-\mathrm{d}\Big{[}( \mathrm{d}\phi-A)(\mathrm{d}\phi-A-\lfloor\mathrm{d}\phi-A\rceil)\Big{]}\Big{\}} \tag{10}\] Note that we also take \(A\) to be periodic modulo unity. We also assume \(A\) to be weak, in the sense that \(\mathrm{d}A-\lfloor\mathrm{d}A\rceil\approx 0\). This implies that \(\mathrm{d}\lfloor\mathrm{d}A\rceil=0\), i.e. that field configurations are free of monopoles. This action has three gauge redundancies. Two are of the same "rotor redundancy" form as before: \[\phi_{i}\to\phi_{i}+n_{i} \tag{11}\] \[A_{ij}\to A_{ij}+m_{ij} \tag{12}\] for \(n_{i},m_{ij}\in\mathbb{Z}\). The third is the typical gauge invariance: \[\phi\to\phi+\theta\qquad\quad A\to A+\mathrm{d}\theta \tag{13}\] for \(\theta\) an \(\mathbb{R}/\mathbb{Z}\)-valued field. Now we separate (10) into boundary terms and bulk terms. We can rewrite the action as: \[2\pi\mathrm{i}k\int_{\mathcal{N}^{3}}\Big{\{}-A(\mathrm{d}A- \lfloor\mathrm{d}A\rceil)+\lfloor\mathrm{d}A\rceil A-\mathrm{d}(A(A-\lfloor A \rceil))\Big{\}}\\ +2\pi\mathrm{i}k\int_{\mathcal{M}^{2}}\Big{\{}\phi(\mathrm{d}A- \lfloor\mathrm{d}A\rceil)-\lfloor\mathrm{d}A\rceil\phi\\ -(\mathrm{d}\phi-A)(\mathrm{d}\phi-A-\lfloor\mathrm{d}\phi-A \rceil)+A(A-\lfloor A\rceil)\Big{\}} \tag{14}\] We have now split the action into 'bulk terms' consisting only of \(A\) and 'boundary terms' which contain all the \(\phi\).
Each of them is separately invariant under \(\phi\to\phi+n\), and we have added and subtracted terms to ensure that they are invariant under \(A\to A+m\). They are not separately invariant under the gauge symmetry (13). The boundary integral is the proper gauged action for the edge mode, i.e. our \(1+1\)d chiral boson model. Specifically, it is: \[S=2\pi\mathrm{i}k\int_{\mathcal{M}^{2}}\Big{\{}\phi(\mathrm{d}A- \lfloor\mathrm{d}A\rceil)-\lfloor\mathrm{d}A\rceil\phi\\ -(\mathrm{d}\phi-A)(\mathrm{d}\phi-A-\lfloor\mathrm{d}\phi-A \rceil)+A(A-\lfloor A\rceil)\Big{\}} \tag{15}\] This action is written on a square lattice in Appendix D. Together with the ungauged (\(A=0\)) model (7), eq. (15) is the main result of this paper. One can check that the action is invariant under both of the symmetries (11) and (12). However, it is not invariant under (13), i.e. \(\phi\rightarrow\phi+\theta\), \(A\to A+\mathrm{d}\theta\). The anomaly structure is in general complicated, but takes a simple form when we set \(\mathrm{d}\theta=0\). In that case, the action changes by a term: \[-2\pi\mathrm{i}(2k)\theta\int_{\mathcal{M}^{2}}\lfloor\mathrm{d}A\rceil \tag{16}\] Now, \(\int_{\mathcal{M}^{2}}(\mathrm{d}A-\lfloor\mathrm{d}A\rceil)=-\int_{\mathcal{M}^{2}}\lfloor\mathrm{d}A\rceil\) is the total flux of the gauge field over \(\mathcal{M}^{2}\), and so this is precisely \(2\pi\mathrm{i}(2k)\int\theta F\), i.e. the anomaly required by the Hall conductance. We should recognize this as the expected Adler-Bell-Jackiw anomaly, i.e. as a lattice, discrete generalization of \(4\pi\mathrm{i}k\int\theta\mathrm{d}A\). As is usual, this failure of gauge invariance can be cancelled by an equal and opposite contribution from the bulk terms in eq. (14). We have now examined the \(1+1\)d lattice model in detail and seen the \(U(1)\) anomaly through both symmetry breaking in the presence of a boundary and through direct coupling to a background gauge field. Now we write down a continuum model for the edge theory and explain how it creates a chiral representation of \(U(1)\) and a nonzero quantized Hall conductance. The first step is to see that the topological term imparts a charge to vortices. We do this by examining the parent \(2+1\)d theory. Because vortices are proliferated in the model (10) and therefore are not well-defined excitations, we first confine them by adding in a term \(\frac{1}{g}\sum_{\langle i,j\rangle}\cos 2\pi(\mathrm{d}\phi-A)\). This term sets the vortices to be nearly zero, i.e. \(\lfloor\star j\rceil=0\), in which case the action (10) can be written as a minimally coupled form of (5): \[S=2\pi\mathrm{i}k\int_{\mathcal{N}^{3}}(\mathrm{d}\phi-A-\lfloor\mathrm{d} \phi-A\rceil)\mathrm{d}(\mathrm{d}\phi-A-\lfloor\mathrm{d}\phi-A\rceil) \tag{17}\] where we have ignored 1-cup products that encode framing (see [33], Appendix C). Coupling in the gauge field modifies the vortex current (4) to: \[\star j=-\mathrm{d}(\mathrm{d}\phi-A-\lfloor\mathrm{d}\phi-A\rceil) \tag{18}\] On a closed manifold \(N^{3}\), the gauged bulk action (17) can be rewritten as: \[2\pi\mathrm{i}k\int_{N^{3}}(A\mathrm{d}A+A\star j+j\star A) \tag{19}\] Hence the topological term leads to a charge-\(2k\) vortex. The ungauged action (7) describes a bosonic field \(\phi\) coupled to its vortices.
To develop a continuum description, we recall the usual description of a compact field \(\phi\) with its vortex field \(\theta\): \[S\sim 2\pi\mathrm{i}\int\left[\phi\partial_{x}\partial_{t}\theta+\frac{v}{2}( \partial_{x}\phi)^{2}+\frac{v}{2}(\partial_{x}\theta)^{2}\right] \tag{20}\] Here \(e^{2\pi\mathrm{i}\hat{\theta}}\) creates a vortex in \(e^{2\pi\mathrm{i}\hat{\phi}}\), as can be seen from the commutation relations \([\hat{\phi}(x),\partial_{x^{\prime}}\hat{\theta}(x^{\prime})]=\frac{i}{2\pi} \delta(x-x^{\prime})\). We have included velocity terms with speed \(v>0\) that may be induced by a term \(\sum_{\mathrm{links}}\cos 2\pi\mathrm{d}\phi\) in the lattice model, and have set \(v_{\theta}=v_{\phi}=v\) for convenience. From the lattice model, we know that \(\phi\) has charge 1 and the vortex field \(\theta\) has charge 2\(k\). We define the composite fields \(\phi_{R}=\frac{1}{2}(\phi+\theta)\), \(\phi_{L}=\frac{1}{2}(\phi-\theta)\) to get: \[2\pi\mathrm{i}k\int(\phi_{R}\partial_{x}\partial_{t}\phi_{R}-\phi_{L}\partial _{x}\partial_{t}\phi_{L}+v(\partial_{x}\phi_{R})^{2}+v(\partial_{x}\phi_{L})^ {2}) \tag{21}\] Thus the chiral model consists of a right-moving mode \(\phi_{R}\) and a left-moving mode \(\phi_{L}\), which have respective equations of motion \((\partial_{x}\pm\partial_{t})\partial_{x}\phi_{L/R}=0\), where \(\rho_{L/R}=\partial_{x}\phi_{L/R}\) is the usual bosonized excitation density. As there are equal numbers of left- and right-moving modes, there is no gravitational anomaly or, equivalently, thermal Hall conductance. On the other hand, there is a \(U(1)\) anomaly which arises because \(\phi_{L}\) and \(\phi_{R}\) have differing charges. Denote the charge of a field \(\varphi\) by \(C[\varphi]\), so that \(C[\phi]=1\). We have seen that vortices have charge \(2k\) and so \(C[\theta]=2k\). Hence the anomaly has coefficient: \[C[\phi_{R}]-C[\phi_{L}]=C[\theta]=2k \tag{22}\] This is consistent with the Hall conductance of the bulk system, which is \(2k\frac{e^{2}}{h}\) [31]. We have seen how to create a \(1+1\)d lattice theory which realizes an anomalous chiral gapless field theory with a background gauge field. In order to make the gauge field dynamical, we would like to create an anomaly-free, but chiral, gapless field theory. The solution is to layer multiple copies (say \(N\)) of the system with differing levels \(k_{I}\) and charges \(q_{I}\), \(I=1,\ldots,N\). Each layer contributes an anomaly factor of \(k_{I}q_{I}^{2}\mathcal{A}\), where \(\mathcal{A}\) is the anomaly factor defined in Appendix B. This leads to a familiar anomaly cancellation condition: \[\sum_{I=1}^{N}k_{I}q_{I}^{2}=0 \tag{23}\] If \(k_{I}\) and \(q_{I}\) are chosen to satisfy this, then an anomaly-free chiral lattice field theory is: \[S =2\pi i\sum_{I=1}^{N}k_{I}\int_{\mathcal{M}^{2}}\left\{\phi_{I}q_{ I}(dA-\lfloor dA\rceil)-q_{I}\lfloor dA\rceil\phi_{I}\right.\] \[\left.-(d\phi_{I}-q_{I}A)(d\phi_{I}-q_{I}A-\lfloor d\phi_{I}-q_{ I}A\rceil)\right\} \tag{24}\]
We then demonstrated the \(U(1)\) anomaly by both inspection of the ungauged theory and by explicitly coupling in a background gauge field and calculating the variation of the edge action. Finally, we wrote down a continuum theory for our model and showed that it carries chiral \(U(1)\) charge as expected. The key to our model is the expression for the vortex density (3) which allows for a fixed-point description of the topological defects of the field. The generalization to other models with more complicated target spaces (e.g. \(SO(3)\)) will make use of similar discontinuous functions. _Acknowledgments_ This research was partially supported by NSF DMR-2022428, the NSF Graduate Research Fellowship under Grant No. 1745302, by the Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (651440), and by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA) under contract number DE-SC0012704. MD and EL acknowledge useful discussions with H. Goldman, J.Y. Chen, J. Wang, and J. Wen. MD acknowledges useful discussions with V.V. Albert, and is grateful to E. Witten for comments on generalizing to \(SU(2)\).
2301.09798
Thermal capillary waves on bounded nanoscale thin films
The effect of confining walls on the fluctuation of a nanoscale thin film's free surface is studied using stochastic thin-film equations (STFEs). Two canonical boundary conditions are employed to reveal the influence of the confinement: (1) an imposed contact angle and (2) a pinned contact line. A linear stability analysis provides the wave eigenmodes, after which thermal-capillary-wave theory predicts the wave fluctuation amplitudes. Molecular dynamics (MD) simulations are performed to test the predictions, and a Langevin diffusion model is proposed to capture oscillations of the contact lines observed in MD simulations. Good agreement between the theoretical predictions and the MD simulation results is recovered, and it is discovered that confinement can influence the entire film. Notably, a constraint on the length scale of wave modes is found to affect fluctuation amplitudes from our theoretical model, especially for 3D films. This opens up challenges and future lines of inquiry.
Jingbang Liu, Chengxi Zhao, Duncan A. Lockerby, James E. Sprittles
2023-01-24T03:24:45Z
http://arxiv.org/abs/2301.09798v1
# Thermal capillary waves on bounded nanoscale thin films ###### Abstract The effect of confining walls on the fluctuation of a nanoscale thin film's free surface is studied using the stochastic thin-film equations (STFE). Two canonical boundary conditions are employed to reveal the influence of the confinement: (i) an imposed contact angle and (ii) a pinned contact line. A linear stability analysis provides the wave eigenmodes, after which thermal-capillary-wave theory predicts the wave fluctuation amplitudes. Molecular dynamics (MD) simulations are performed to test the predictions and a Langevin diffusion model is proposed to capture oscillations of the contact-lines observed in MD. Good agreement between the theoretical predictions and the MD simulation results is recovered, and it is discovered that confinement can influence the entire film. Notably, a constraint on the length scale of wave modes is found to affect fluctuation amplitudes from our theoretical model, especially for 3D films. This opens up new challenges and future lines of inquiry. ## I Introduction The behavior of fluids at the nanoscale attracts increasing attention as fluid-based technologies continue to miniaturize [1], for example, in: lab-on-a-chip devices [2], nanofluidic transistors [3], ink-jet printing [4] and osmotic transport [5]. The dynamics at such scales are challenging, if not impossible, to observe experimentally, making modeling and simulation a vital component of continued technological progress. However, due to the additional physical phenomena that appear when going from traditional engineering scales to the nanoscale [6], conventional fluid dynamical modeling approaches are often inaccurate. A canonical nanoscale flow topic that underpins many applications is the behavior and stability of thin liquid films on rigid solid surfaces. Here, stability is crucial to coating technologies [7; 8] whilst instability can be harnessed to create pre-determined patterns [9]. Driven by technological demands and fundamental interest, there is a huge body of research in this field, see for example review articles [9; 10; 11; 12]. It is well established that at the nanoscale disjoining pressure becomes important, competing with surface tension for the stability of the film and driving rupture; via the so-called spinodal mechanism [13; 14; 15]. Notably, though, in order for theoretical predictions of rupture timescales to agree with those from experiment, thermal fluctuations, which drive free-surface nanowires, need to be incorporated in the physical model [16]. The dynamics of these nanowires on thin films form the basis of this work, where we consider, for the first time, their behavior within a confined environment, i.e. bounded by surfaces. It has long been expected that the chaotic thermal motion of molecules in a liquid would generate so-called 'thermal capillary waves' at liquid-fluid interfaces [17; 18; 19]. One of the earliest experimental confirmations of the existence of such waves was obtained using a light-scattering technique at the liquid-vapour interface of carbon dioxide [20]. More recently, experiments have been conducted to observe and measure thermal capillary waves by exploiting ultra-low surface tension fluids that generate micron-scale waves [21; 22], using various optical scattering techniques [23; 24; 25], and, with simple fluids, using an atomic force microscope cantilever placed on a micro hemispherical bubble [26]. 
An alternative tool for probing the physics of the nanoscale is molecular dynamics (MD) simulations, providing an environment for conducting 'virtual experiments' [27; 28] that complement traditional methods and yield additional understanding. MD simulations have observed thermal capillary waves in the context of: nanoscale thin films [29; 30; 31; 32; 33], the instability and breakup of liquid jets [34; 35; 36], the coalescence of nanodroplets [37; 38; 39] and films on fibers [40]. Whilst MD contains the necessary nanoscale physics to capture thermal capillary waves, it is both very computationally expensive and requires interpretation that is arguably best provided by macroscopic theories. For illustration, in this article, a 51.4 ns simulation of a thin film containing 32883 Lennard-Jones particles took 11 hours to run on a 28 core CPU; and to obtain statistically reliable averages, multiple realizations are needed. Clearly, there is a need for a complementary modeling approach that is more computationally tractable. To go beyond conventional fluid mechanics and include thermal fluctuations, Landau and Lifshitz introduced the equations of fluctuating hydrodynamics (FH) [41] by adding a random stress tensor satisfying the fluctuation dissipation theorem into the Navier-Stokes equations. For thin liquid films, the stochastic thin-film equation (STFE), accurate in the lubrication approximation, has been derived for planar films [42; 43]; a similar stochastic equation has been obtained for jets [34]. Extensions of the STFE have also been derived, for example, for different slip conditions [30; 40] and with an elastic plate on top of the film [44]. A linear stability analysis can be applied to the STFE to obtain a power spectrum for the thermal capillary waves [45] that can be compared with experiment [16]. The power spectrum of the free-surface waves has also been shown to agree with MD [29; 40], exhibiting unconventional effects like an evolving wave number associated with fastest growth. Attempts have also been made to solve the full nonlinear STFE, and its variants, numerically [46; 47; 48; 36] which, although requiring more complex formulations, still generate large computational savings compared with MD. In summary, the STFE is a remarkably powerful and efficient tool for studying the dynamics of ultra-thin films whose potential is yet to be fully exploited (e.g., thus far most analyses are confined to 2D). Notably, previous studies of the STFE either assume the films are unbounded, that the dynamics are periodic on some length scale (essentially, to enable a simple Fourier analysis), or that the boundaries are sufficiently far away that they have no effect other than to potentially regularize the solution at some upper scale. How then, does confinement, i.e. the effects of nearby boundaries, affect the dynamics of nanoscale films? This will be our focus, beginning by considering the properties of nanowaves in thermal equilibrium. In this work, we examine, in both quasi-2D and 3D, the effect of the two typical boundary conditions where a free surface meets a wall: (i) an imposed contact angle and (ii) a (partially) pinned contact line. A linear stability analysis is performed and the waves modes are calculated by solving the eigenvalue problem for each boundary condition. Thermal-capillary-wave theory is used to predict the fluctuation amplitude and then validated against MD simulations. The paper is organized as follows. 
In Section II we consider quasi-2D bounded films with the two different boundary conditions and for each provide two ways to derive a theoretical prediction for the fluctuation amplitude of the free surface: (i) from thermal-capillary-wave theory; and (ii) directly from the STFE. Details of MD simulations are provided and results are compared with the theory. In Section III we extend our study to 3D circular bounded films with two different boundary conditions; theories and fluctuation amplitudes are derived. MD simulations are performed and results are compared. In Section IV the role of a cut-off length scale is then discussed. In Section V, future research directions are outlined. ## II Quasi-2D bounded thin films In this section, we present the modeling and MD simulation results of 2D bounded thin films on a solid that is in the \((x,y)\)-plane at \(z=0\), as shown in Fig. 1. The MD simulations are inherently 3D, and to approximate a 2D flow the thickness of the film \(L_{y}\) in the \(y\)-direction is set to be much smaller than the length of the film \(L_{x}\), making it 'quasi-2D'. To compare the theory to MD results, we consider quantities which are averaged 'into the page', over \(L_{y}\) in the \(y\)-direction, resulting in all quantities depending only on \((x,t)\), see [29]. Assuming that \((H/L_{x})^{2}Re\ll 1\), where \(H\) is the characteristic height of the free surface and \(Re\) is the Reynolds number, which is expected to be true for nanoscale thin films at thermal equilibrium, we can apply the lubrication approximation [10] to the Navier-Stokes equations and find that inertial effects are negligible. Then, in the absence of disjoining pressure, whose influence we also assume to be negligible in thermal equilibrium for the film heights we consider, we arrive at the thin-film equation (TFE) to provide a description of the free surface \(z=h(x,t)\) given by \[\frac{\partial h}{\partial t}=-\frac{\gamma}{3\mu}\frac{\partial}{\partial x}\left(h^{3}\frac{\partial^{3}h}{\partial x^{3}}\right), \tag{1}\] where \(\gamma\) is the surface tension and \(\mu\) is the dynamic viscosity. When thermal fluctuations are included, the stochastic thin-film equation (STFE) [42; 43; 45] can be derived from fluctuating hydrodynamics: \[\frac{\partial h}{\partial t}=-\frac{\gamma}{3\mu}\frac{\partial}{\partial x}\left(h^{3}\frac{\partial^{3}h}{\partial x^{3}}\right)+\sqrt{\frac{2k_{B}T}{3\mu L_{y}}}\frac{\partial}{\partial x}\left(h^{3/2}\mathcal{N}\right), \tag{2}\] where \(k_{B}\) is the Boltzmann constant and \(T\) is the temperature. Thermal noise \(\mathcal{N}(x,t)\) has zero mean and covariance \[\langle\mathcal{N}(x,t)\mathcal{N}(x^{\prime},t^{\prime})\rangle=\delta(x-x^{\prime})\delta(t-t^{\prime}), \tag{3}\] which means that the noise is uncorrelated in both time and space. Figure 1: An illustration of the geometry of the quasi-2D thin-film problem (top) and a snapshot of a representative MD simulation for a thin film with \(90^{\circ}\) contact angle (below); yellow particles denote liquid Argon and red particles denote Platinum solid. Note, the \(\sqrt{1/L_{y}}\) factor in the noise term of Eq. (2) comes from averaging in the \(y\)-direction. One can easily see that a flat free surface \(h(x,t)=h_{0}\) is a steady solution to Eq. (1). However, thermal fluctuations, modeled by the noise term in Eq. (2), drive the free surface away from the steady solution, creating thermal capillary waves [33; 34; 28; 9; 45].
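To make the structure of Eq. (2) concrete, the linearized equation (writing \(h=h_{0}+\delta h\) and keeping terms linear in \(\delta h\) and \(\mathcal{N}\)) can be integrated with an explicit Euler-Maruyama scheme. The following minimal Python sketch is illustrative only: it is not code from this work, and the Argon-like parameter values are assumed purely for demonstration. Its measured roughness can be compared with the periodic-film estimate quoted below in Eq. (4).

```python
# Minimal sketch (not from the paper): Euler-Maruyama integration of the *linearized*
# STFE, Eq. (2), on a periodic domain.  Parameter values are rough, Argon-like numbers
# assumed for illustration.
import numpy as np

kB, T = 1.38e-23, 85.0                 # Boltzmann constant [J/K], temperature [K]
gamma, mu = 1.5e-2, 2.4e-4             # surface tension [N/m], shear viscosity [Pa s]
h0, Lx, Ly = 4.85e-9, 26e-9, 2.94e-9   # film height, length, width [m]

N = 32
dx = Lx / N
C = gamma * h0**3 / (3 * mu)                     # coefficient of the linearized deterministic term
B = np.sqrt(2 * kB * T * h0**3 / (3 * mu * Ly))  # noise prefactor, linearized about h = h0
dt = dx**4 / (16 * C)                            # safely below the explicit stability limit

def biharmonic(f):
    """Periodic fourth derivative (5-point stencil)."""
    return (np.roll(f, -2) - 4*np.roll(f, -1) + 6*f - 4*np.roll(f, 1) + np.roll(f, 2)) / dx**4

rng = np.random.default_rng(0)
dh = np.zeros(N)                                 # perturbation about the flat film
samples = []
for step in range(500_000):
    xi = rng.standard_normal(N)                  # white noise on the cell faces
    noise_div = (xi - np.roll(xi, 1)) / dx       # discrete d(noise)/dx -> conserves volume
    dh += -C * biharmonic(dh) * dt + B * np.sqrt(dt / dx) * noise_div
    if step > 50_000 and step % 100 == 0:
        samples.append(np.mean(dh**2))

lT = np.sqrt(kB * T / gamma)
print("measured roughness     :", np.sqrt(np.mean(samples)))
print("periodic-film estimate :", np.sqrt(lT**2 / 12 * Lx / Ly))   # cf. Eq. (4) below
```

The slowest mode relaxes on a time scale of order \(3\mu L_{x}^{4}/(\gamma h_{0}^{3}(2\pi)^{4})\), so the burn-in period must comfortably exceed this before the roughness is sampled.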
Note, these 'waves' are viscous damped response to fluctuations arising from within the bulk liquid of the film (i.e. they are inertia free). In the case of fluctuation-driven films, \(\langle h(x,t)\rangle=h_{0}\), where \(\langle\rangle\) represents ensemble average. To understand the properties of the nanoscale waves, we consider a linearized setup with \(h(x,t)=h_{0}+\delta h(x,t)\) and \(\delta h\ll h_{0}\). Then, as is conventionally done, if we assume the domain is periodic on a length \(L_{x}\), the perturbation can be decomposed into Fourier modes and the fluctuation amplitude (or surface roughness) can be estimated by [28; 19] \[\langle\delta h^{2}\rangle=\frac{l_{T}^{2}}{12}\frac{L_{x}}{L_{y}}, \tag{4}\] where \(l_{T}=\sqrt{k_{B}T/\gamma}\) is the 'thermal length scale' characterizing the approximate amplitude of these waves. Interestingly, the dynamic growth of these nanoscale waves from an initially flat interface has been shown in [31] to fall into a specific universality class. Here, we consider a different, practically more realistic setup, with solid walls at \(x=0\), \(L_{x}\) and two different physically inspired boundary conditions: (i) a prescribed \(90^{\circ}\) contact angle; and (ii) (partially) pinned contact lines. ### Prescribed \(90^{\circ}\) contact angle As a starting point, we consider a \(90^{\circ}\) contact angle, for which the equilibrium state is on average a flat film. In this case \[\left.\frac{\partial h}{\partial x}\right|_{x=0}=\left.\frac{\partial h}{ \partial x}\right|_{x=L_{x}}=0. \tag{5}\] It is worth noting that here we have assumed that the contact angle is \(90^{\circ}\) at every instant in time. Assuming also that the walls are impermeable, we have \[\left.\frac{\partial^{3}h}{\partial x^{3}}\right|_{x=0}=\left.\frac{\partial ^{3}h}{\partial x^{3}}\right|_{x=L_{x}}=0. \tag{6}\] Since the boundary conditions are not periodic, we can no longer assume the wave modes to be Fourier. Instead, linearizing the TFE and solving the corresponding eigenvalue problem, we can show that the appropriate wave modes (see Appendix A.1) are as follows: \[\phi_{n}(x)=\cos\left(\frac{n\pi x}{L_{x}}\right),\,n=1,2,\ldots \tag{7}\] Given this information, we can proceed with the classical 'thermal-capillary-wave theory' approach [49]. #### ii.1.1 Thermal-capillary-wave theory The free surface can be written as the superposition of the average film thickness \(h_{0}\) and a perturbation \(h_{1}(x,t)\): \[h(x,t)=h_{0}+h_{1}(x,t). \tag{8}\] Here, \(h_{1}(x,t)\) can be decomposed into the wave modes \(\phi_{n}(x)\), so that \[h_{1}(x,t)=\sum_{n=1}^{\infty}a_{n}(t)\phi_{n}(x), \tag{9}\] and it is assumed that \(a_{n}(t)\ll h_{0}\). An energetic argument, exploiting equipartition in thermal equilibrium, will then give us the statistical properties of the amplitudes (the \(a_{n}\)'s). The cost of energy for doing work against surface tension by expanding the interface's area is given by \[E=\gamma\left(L_{y}\int_{0}^{L_{x}}\sqrt{1+\left(\frac{\partial h}{\partial x }\right)^{2}}dx-L_{x}L_{y}\right), \tag{10}\] where \(L_{y}\) is the film length into the page. Taking the standard thin-film approximation that \(\partial h/\partial x\ll 1\) we have \[E\approx\gamma L_{y}\int_{0}^{L}\frac{1}{2}\left(\frac{\partial h}{\partial x }\right)^{2}dx, \tag{11}\] so that using Eq. (8) and Eq. (9) we can obtain the total energy: \[E=\sum_{n=1}^{\infty}E_{n}=\sum_{n=1}^{\infty}\frac{\gamma\pi^{2}n^{2}}{4} \frac{L_{y}}{L_{x}}a_{n}^{2}. 
\tag{12}\] According to the equipartition theorem, at thermal equilibrium the energy is shared equally among each mode, i.e. \(\langle E_{n}\rangle=k_{B}T/2\), leading to an expression for the variance of each mode's amplitude (note their means are zero by construction): \[\langle a_{n}^{2}\rangle=\frac{2}{\pi^{2}}l_{T}^{2}\frac{L_{x}}{L_{y}}\frac{1} {n^{2}}. \tag{13}\] This expression then allows us to obtain information about the nanowaves in thermal equilibrium. Using Eq. (9) and \(\langle a_{m}a_{n}\rangle=\delta_{mn}\langle a_{n}^{2}\rangle\) (see Appendix A.2), we can find the variance of the perturbation across the film as follows \[\langle h_{1}^{2}(x)\rangle =\left\langle\sum_{m=1}^{\infty}a_{m}\cos\left(\frac{m\pi x}{L_{x} }\right)\sum_{n=1}^{\infty}a_{n}\cos\left(\frac{n\pi x}{L_{x}}\right)\right\rangle\] \[=\sum_{n=1}^{\infty}\langle a_{n}^{2}\rangle\cos^{2}\left(\frac{n \pi x}{L_{x}}\right)\] \[=\frac{2l_{T}^{2}}{\pi^{2}}\frac{L_{x}}{L_{y}}\sum_{n=1}^{\infty }\frac{\cos^{2}\left(\frac{n\pi x}{L_{x}}\right)}{n^{2}}\] \[=l_{T}^{2}\frac{L_{x}}{L_{y}}\left[\frac{1}{12}+\left(\frac{1}{2 }-\frac{x}{L_{x}}\right)^{2}\right]. \tag{14}\] Notably, in contrast to the periodic spatially homogeneous case Eq. (4), expression Eq. (14) is a function of \(x\). A full discussion of this case will be provided after we have compared to MD results. There is also an alternative derivation for \(\langle a_{n}^{2}\rangle\) directly from the STFE (see Appendix A.2). Since the STFE describes the time evolution of the film height from some initial (non-equilibrium) state, the result is also time dependent: \[\langle a_{n}^{2}\rangle=\frac{2k_{B}T}{\gamma\pi^{2}}\frac{L_{x}}{L_{y}}\frac {1}{n^{2}}(1-\exp(-2An^{4}t)), \tag{15}\] where \(A=\gamma h_{0}^{3}\pi^{4}/(3\mu L_{x}^{4})\). This tells us that an initial perturbation decays exponentially with time, and that at thermal equilibrium (as \(t\rightarrow\infty\)) the results from the STFE agrees with Eq. (13) derived from thermal-capillary-wave theory, which provides a more straight forward derivation. #### ii.2.2 Molecular-dynamics simulations To verify our new theoretical prediction, we use molecular dynamics simulations (MD) as a virtual experiment to probe the behavior of quasi-2D thin films that are bounded on both sides by solid walls with \(90^{\circ}\) contact angles. The simulations are performed in the open-source software LAMMPS [50], which has been widely used to study fluid phenomena at the nanoscale, e.g. [51, 52, 53, 54, 55, 56, 57, 58] Argon is used as a fluid and Platinum is used for the solid walls. The interaction between particles are modeled using the conventional Lennard-Jones 12-6 potential \[V(r_{ij})=4\epsilon_{AB}\left[\left(\frac{\sigma_{AB}}{r_{ij}}\right)^{12}- \left(\frac{\sigma_{AB}}{r_{ij}}\right)^{6}\right], \tag{16}\] where \(r_{ij}\) is the distance between atoms \(i\) and \(j\), \(\epsilon_{AB}\) is the energy parameter representing the depth of potential wells and \(\sigma_{AB}\) is the length parameter representing the effective atomic diameter. Here, \(AB\) are different combinations of particle types; namely, fluid-fluid (ff), solid-solid (ss) and solid-fluid (sf). The simulation parameters are summarized in Table 1 with corresponding non-dimensional 'MD values' henceforth denoted with an asterisk (as one can see, energy is scaled with respect to \(\epsilon_{\it ff}\), lengths with \(\sigma_{\it ff}\) and mass with \(m_{f}\)). 
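As an aside, the closed form in Eq. (14) can be verified numerically by truncating the mode sum built from Eq. (13). The short sketch below is illustrative only; the film dimensions are representative values rather than the parameters of Table 1, and everything is expressed in units of \(l_{T}^{2}\).

```python
# Sketch (illustrative only): check that the truncated mode sum built from Eq. (13)
# reproduces the closed form of Eq. (14).
import numpy as np

Lx, Ly = 26e-9, 2.94e-9
x = np.linspace(0.0, Lx, 201)
n = np.arange(1, 20001)[:, None]                      # truncate the series at n = 20000

series = (2 / np.pi**2) * (Lx / Ly) * np.sum(np.cos(n * np.pi * x / Lx)**2 / n**2, axis=0)
closed = (Lx / Ly) * (1/12 + (0.5 - x / Lx)**2)       # right-hand side of Eq. (14)

print(np.max(np.abs(series - closed) / closed))       # -> ~1e-4, the series truncation error
```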
To obtain a \(90^{\circ}\) contact angle, we set \(\epsilon_{sf}^{*}=0.52\) and \(\sigma_{sf}^{*}=0.8\). The position of solid particles are fixed to reduce computational cost. The timestep is set to \(0.0085\) ps. Transport properties of liquid Argon are measured under MD simulations, with parameters given by Table 1. Shear viscosity \(\mu=2.44\times 10^{-4}\) kg/(ms) is calculated using the Green-Kubo method [59]: \[\mu=\frac{V}{3k_{B}T}\sum\int_{0}^{\infty}\langle J_{pq}(t)J_{pq}(0)\rangle dt, \tag{17}\] where \(V\) is the volume, \(J_{pq}\) are the components of the stress tensor and the sum accumulates three terms given by \(pq\) (\(=xy,yz,zx\)). Note, only off-diagonal terms of the stress tensor are used, as shear viscosity is measured by the transport of momentum perpendicular to velocity. Surface tension \(\gamma=1.52\times 10^{-2}\) N/m is calculated from the difference between the normal and tangential components of pressure tensor in a simple vapor-liquid-vapor system (\(z\)-direction) [60, 61]: \[\gamma=\frac{1}{2}\int_{0}^{L_{z}}\left(P_{zz}(z)-\frac{1}{2}\left(P_{xx}(z)+P _{yy}(z)\right)\right)dz, \tag{18}\] where \(L_{z}\) is the length of the simulation box in the \(z\)-direction and \(P_{xx}\), \(P_{yy}\), \(P_{zz}\), are the diagonal components of the pressure tensor. To set up the MD simulation we use the following procedure: (i) a block of liquid Argon is created in a periodic box with dimension \((L_{x},L_{y},h_{0})\), density \(\rho_{l}\) and is equilibrated for \(5\times 10^{6}\) timesteps with NVT at temperature \(T\), (ii) a block of vapor Argon is created in a periodic \begin{table} \begin{tabular}{c c c c} Property & Nondim.value & Value & Unit \\ \hline \(\epsilon_{\it ff}\) & 1 & \(1.67\times 10^{-21}\) & J \\ \(\epsilon_{\it sf}\) & 0.52 & \(0.8684\times 10^{-21}\) & J \\ \(\epsilon_{\it ss}\) & 50 & \(83.5\times 10^{-21}\) & J \\ \(\sigma_{\it ff}\) & 1 & 0.34 & nm \\ \(\sigma_{\it sf}\) & 0.8 & 0.272 & nm \\ \(\sigma_{\it ss}\) & 0.72 & 0.247 & nm \\ \(m_{f}\) & 1 & \(6.63\times 10^{-26}\) & kg \\ \(m_{s}\) & 4.8863 & \(32.4\times 10^{-26}\) & kg \\ \(T\) & 0.7 & 85 & \(K\) \\ \(\rho_{l}\) & 0.83 & \(1.4\times 10^{3}\) & kg/m\({}^{3}\) \\ \(\rho_{v}\) & 0.0025 & 3.5 & kg/m\({}^{3}\) \\ \(\rho_{s}\) & 2.6 & \(21.45\times 10^{3}\) & kg/m\({}^{3}\) \\ \(r_{c}\) & 5.5 & 1.87 & nm \\ \end{tabular} \end{table} Table 1: Simulation parameters and their non-dimensional values (reduced units based on Lennard-Jones potential parameters \(\epsilon_{\it ff}\), \(\sigma_{\it ff}\), \(m_{f}\)) for a \(90^{\circ}\) contact angle. box with dimension \((L_{x},L_{y},3h_{0})\), density \(\rho_{v}\) then equilibrated for \(5\times 10^{6}\) timesteps with NVT at temperature \(T\), (iii) Platinum walls are created with a face centered cubic structure (fcc) of density \(\rho_{s}\), each wall has 5 layers of Platinum atoms with thickness 0.872 nm, the bottom wall then has dimension \((L_{x},L_{y},0.872)\), (iv) the equilibrated liquid Argon is then place onto the bottom wall with a 0.17 nm gap between the solid and the liquid (the gap results from the repulsive force in the Lennard-Jones potential and its thickness is found after equilibration) [40], (v) equilibrated vapor Argon is placed on the top. The MD simulation is then run with NVT at temperature \(T\). Fig. 1 shows a snapshot of the MD simulation. Periodic boundary conditions are applied only in the \(y\)-direction. A reflective wall is applied at the top boundary. 
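In practice, the Green-Kubo estimate of Eq. (17) is a post-processing step applied to a time series of the off-diagonal pressure-tensor components recorded during the run. A minimal sketch follows; the file name, column layout and truncation of the correlation integral are hypothetical choices made for illustration, not part of the paper's workflow.

```python
# Sketch: Green-Kubo shear viscosity, Eq. (17), as a post-processing step.  The input
# file "stress.txt" (columns: time [s], Pxy, Pxz, Pyz [Pa]) is hypothetical -- it stands
# for whatever off-diagonal pressure-tensor time series the MD run has dumped.
import numpy as np

kB = 1.38e-23
T = 85.0                                   # temperature [K]
V = 26e-9 * 2.94e-9 * 4.85e-9              # liquid volume [m^3] (illustrative)

data = np.loadtxt("stress.txt")
t, stresses = data[:, 0], data[:, 1:4]
dt = t[1] - t[0]

def autocorr(J, max_lag):
    """One-sided autocorrelation <J(t)J(0)>, averaged over time origins."""
    return np.array([np.mean(J[: len(J) - lag] * J[lag:]) for lag in range(max_lag)])

max_lag = min(len(t) // 10, 2000)          # truncate once the correlation has decayed
acf = sum(autocorr(stresses[:, k], max_lag) for k in range(3))
mu = V / (3 * kB * T) * np.sum(acf) * dt   # rectangle-rule integral of the ACF
print(f"Green-Kubo shear viscosity ~ {mu:.3e} Pa s")
```

The surface-tension estimate of Eq. (18) is an analogous post-processing step, with the profile of the pressure-tensor anisotropy replacing the stress autocorrelation.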
In our simulations, the position of the liquid-vapor interface is determined using the number density and a binning technique see [31]. We first calculate the number density of each Argon particle using a cut-off radius of \(3.5\sigma_{\mathit{ff}}\). Particles with number density above \(0.5n^{*}\) are then defined as liquid particles and particles with number density below \(0.5n^{*}\) are identified as vapor particles, where \(n^{*}=0.83\) is the non-dimensional number density of a liquid Argon particle in the bulk. The simulation domain is uniformly divided into vertical bins and the position of the free surface in each bin is determined by taking the maximum of the heights of all liquid particles inside the bin. Here, we use bins with side length \(1.5\sigma_{\mathit{ff}}\) in \(x\) and \(1.4\sigma_{\mathit{ff}}\) in \(y\). As a result the free surface position is projected onto a \(x-y\) mesh and expressed as a 2D array. Three different film lengths are tested: Film 1 (\(L_{x}=13.04\) nm), Film 2 (\(L_{x}=25.99\) nm) and Film 3 (\(L_{x}=51.29\) nm). The film width \(L_{y}=2.94\) nm is chosen so that the MD simulation can be consider quasi-2D. The initial film height \(h_{0}=4.85\) nm is chosen so that the film is relatively thin, but yet does not breakup due to disjoining pressure [9; 10; 29]. The equilibriation time \(t_{e}\), i.e. the time taken for all the waves to fully develop from an initially flat interface, is estimated by Eq. (10), which is the characteristic time for the mode with the longest wavelength (and thus slowest growth) to develop; this varies with film length \(L_{x}\). Multiple independent MD simulations (realizations) (Film 1: 10, Film 2: 10, Film 3: 20) are performed in parallel to reduce wall-clock simulation time. Data is gathered after \(t_{e}\) every 4000 timesteps and the free surface position is averaged in the \(y\)-direction, to provide \(h=h(x,t)\) at each snapshot. Fig. 2 shows the standard deviation of the free surface fluctuations, normalized by the thermal length scale \(l_{T}\), obtained from MD simulations and compared to our theoretical predictions, Eq. (14); the agreement is excellent. The fluctuation amplitudes of an unbounded film (i.e. adopting a periodic boundary condition, Eq. (4)) are also provided. The relative strength of thermal fluctuations of the film interface increase with film length, as expected [19; 28]. However, comparing to Eq. (4) we can see that our expression predicts an enhanced fluctuation amplitude to that of a periodic film everywhere except at the center, \(x=L_{x}/2\), where they coincide. Physically, this is because the replacement of periodicity with a fixed contact angle permits additional ('half') wave modes, i.e. of the form \(\cos((2n-1)\pi x/L_{x})\) for \(n=1,2,...\), which contribute to a larger amplitude everywhere except at \(x=L_{x}/2\), where they are zero. Another interesting observation is that the effects of boundaries propagate across the whole film, regardless of the film length. ### Partially pinned contact lines Now we turn our attention to the case where the contact lines are pinned onto the walls. The position of contact lines can be restrained by chemical heterogeneity [56] or physical defects [62; 63]. However, in MD it is not possible to perfectly pin the interface at a height \(h=h_{0}\), as thermal fluctuations cause it to fluctuate, even if just mildly around the target pinning height. 
Therefore, to compare MD and theory, we must account for this and do so by modeling the contact line as a Langevin diffusion process; as done in [51]. Then, the 'partially' pinned boundary condition can be written as \[h(0,t)=N_{1}(t)+h_{0},\qquad h(L_{x},t)=N_{2}(t)+h_{0}. \tag{19}\] Figure 2: Standard deviation of the fluctuations of films with \(90^{\circ}\) contact angle (black: \(L_{x}=13.04\) nm, blue: \(L_{x}=25.99\), green: \(L_{x}=51.29\) nm). MD results (dashed lines with circles) are compared to our theory, Eq. (14) (solid lines). Results are normalized by the thermal length scale \(l_{T}\). The dashed-and-dotted horizontal lines are fluctuation amplitudes predicted by Eq. (4) for a periodic (unbounded) film. Here \(N_{1}(t)\) and \(N_{2}(t)\) are Langevin diffusion processes governed by \[\xi\frac{dN_{1}}{dt} =-kN_{1}(t)+f_{1}(t), \tag{20}\] \[\xi\frac{dN_{2}}{dt} =-kN_{2}(t)+f_{2}(t), \tag{21}\] where \(\xi\) is the so-called coefficient of friction, \(k\) is the harmonic constant, and \(f_{1}(t)\) and \(f_{2}(t)\) are Gaussian noise functions that satisfy \(\langle f_{1}(s)f_{1}(\tau)\rangle=2\xi k_{B}T\delta(s-\tau)\) and \(\langle f_{2}(s)f_{2}(\tau)\rangle=2\xi k_{B}T\delta(s-\tau)\). From this model, the correlation of \(N\) has the form [64] \[\langle N(s)N(\tau)\rangle=\frac{k_{B}T}{k}e^{-\frac{k}{\xi}|s-\tau|}, \tag{22}\] and when \(s=\tau\) Eq. (22) simply gives the variance of \(N\) as \[\langle N^{2}\rangle=\frac{k_{B}T}{k}. \tag{23}\] By fitting the exponential curve of Eq. (22) and the variance Eq. (23) to MD simulations data we can calculate \(k\) and \(\xi\). Our problem in this case is then solving the STFE Eq. (2) with the partially-pinned-contact-line condition Eq. (19) and the impermeable side-wall condition Eq. (6). #### ii.2.1 Bulk modes For a _perfectly_ pinned contact line, the appropriate wave modes (see Appendix B.1) are \[\varphi_{n}(x) =\sinh(\lambda_{n}^{1/4}x)+\sin(\lambda_{n}^{1/4}x) \tag{24}\] \[+K\big{(}\cosh(\lambda_{n}^{1/4}x)-\cos(\lambda_{n}^{1/4}x)\big{)}\] with eigenvalues \[\lambda_{n}\approx\left(\frac{\pi/2+n\pi}{L_{x}}\right)^{4},\qquad n=1,2,\ldots \tag{25}\] As distinct from the \(90^{\circ}\)-contact-angle case, the mode corresponding to the \(\lambda_{0}=0\) case also exists: \[\varphi_{0}(x)=x\left(1-\frac{x}{L_{x}}\right). \tag{26}\] Although \(\varphi_{n}(x)\) are not orthogonal, for odd \(n\) they are odd functions around \(x=L_{x}/2\) and for even \(n\) they are even functions around \(x=L_{x}/2\). We will exploit this property to simplify the calculation for fluctuation amplitudes later on. Figure 3 provides an illustration of \(\varphi_{n}(x)\). #### ii.2.2 Decomposition of fluctuations The partially-pinned-contact-line boundary condition is a linear combination of the perfectly pinned condition and the Langevin diffusion condition. This suggests that under linearization the free surface can be decomposed as \[h(x,t)=h_{0}+h_{2}(x,t)+h_{3}(x,t), \tag{27}\] where \(h_{0}\) is the initial position of the contact line and \(h_{2}(x,t)\), \(h_{3}(x,t)\) are small perturbations. Applying this to Eq. (2), at the leading order (\(h_{2}\sim h_{3}\sim\mathcal{N}\ll h_{0}\)) we obtain \[\frac{\partial h_{2}}{\partial t}+\frac{\partial h_{3}}{\partial t}=-\frac{ \gamma h_{0}^{3}}{3\mu}\left(\frac{\partial^{4}h_{2}}{\partial x^{4}}+\frac{ \partial^{4}h_{3}}{\partial x^{4}}\right)+\sqrt{\frac{2k_{B}Th_{0}^{3}}{3\mu L_ {y}}}\frac{\partial\mathcal{N}}{\partial x} \tag{28}\] and the boundary conditions in Eq. (6) and Eq. 
(19) become \[h_{2}(0,t)+h_{3}(0,t)=N_{1}(t), \tag{29}\] \[h_{2}(L_{x},t)+h_{3}(L_{x},t)=N_{2}(t),\] (30) \[\frac{\partial^{3}h_{2}}{\partial x^{3}}(0,t)+\frac{\partial^{3} h_{3}}{\partial x^{3}}(0,t)=0,\] (31) \[\frac{\partial^{3}h_{2}}{\partial x^{3}}(L_{x},t)+\frac{\partial ^{3}h_{3}}{\partial x^{3}}(L_{x},t)=0. \tag{32}\] This is actually a linear combination of two smaller problems, one with a noise-driven bulk and pinned contact Figure 3: Wave modes \(\varphi_{n}(x)\) for a film with perfectly pinned contact lines. lines \[\frac{\partial h_{2}}{\partial t}=-\frac{\gamma h_{0}^{3}}{3\mu} \frac{\partial^{4}h_{2}}{\partial x^{4}}+\sqrt{\frac{2k_{B}Th_{0}^{3}}{3\mu L_{ y}}}\frac{\partial\mathcal{N}}{\partial x}, \tag{33a}\] \[h_{2}(0,t)=h_{2}(L_{x},t)=0,\] (33b) \[\frac{\partial^{3}h_{2}}{\partial x^{3}}(0,t)=\frac{\partial^{3}h _{2}}{\partial x^{3}}(L_{x},t)=0, \tag{33c}\] and the other with deterministic equations in the bulk and noise-driven contact lines \[\frac{\partial h_{3}}{\partial t}=-\frac{\gamma h_{0}^{3}}{3\mu} \frac{\partial^{4}h_{3}}{\partial x^{4}}, \tag{34a}\] \[h_{3}(0,t)=N_{1}(t),\,h_{3}(L_{x},t)=N_{2}(t),\] (34b) \[\frac{\partial^{3}h_{3}}{\partial x^{3}}(0,t)=\frac{\partial^{3}h _{3}}{\partial x^{3}}(L_{x},t)=0. \tag{34c}\] We can then solve Eqs. (33) for \(h_{2}(x,t)\) by decomposing it into wave modes \(\varphi_{n}(x)\) \[h_{2}=\sum_{n=1}^{\infty}c_{n}(t)\varphi_{n}(x), \tag{35}\] where \(c_{n}(t)\) are wave amplitudes that can be expressed explicitly (see Appendix B.2). Similarly \(h_{3}(x,t)\) can also be decomposed into wave modes \(\varphi_{n}(x)\) and boundary modes, \[h_{3}(x,t) =\sum_{n=1}^{N}e_{n}(t)\varphi_{n}(x)\] \[+N_{1}(t)\left(1-\frac{x}{L_{x}}\right)\left(1-\frac{x}{L_{x}}-u _{10}\frac{x}{L_{x}}\right)\] \[+N_{2}(t)\frac{x}{L_{x}}\left[\frac{x}{L_{x}}-u_{20}\left(1- \frac{x}{L_{x}}\right)\right]. \tag{36}\] where \(e_{n}(t)\) are wave amplitudes with explicit expressions, and \(u_{10}\) and \(u_{20}\) are constants that are given in Appendix B.3. Note, only \(N\) wave modes are considered for \(h_{3}(x,t)\), opposed to infinitely many wave modes considered for \(h_{2}(x,t)\). This is because the wave modes are not orthogonal to each other, so when solving the linear system for \(e_{n}(t)\), matrix \(G\) is non-diagonal and it would be impossible to take its inverse if the dimension is infinite (see Appendix B.3). We can confirm numerically that this does not affect our results (the fluctuation amplitudes converge) and in Section IV we show that a cut-off on the number of wave modes is actually preferable. #### iii.1.3 Thermal-capillary-wave theory We can obtain the fluctuation amplitude for \(h_{2}\) using thermal-capillary-wave theory. Similar to the \(90^{\circ}\)-contact-angle case, we substitute Eq. (27) into Eq. (11) and use the fact that the \(\partial\varphi_{n}/\partial x\) are orthogonal (see Appendix B.1) to obtain \[E=\frac{\gamma L_{x}L_{y}}{2}\sum_{n=1}^{\infty}\lambda_{n}^{1/2}c_{n}^{2}+ \text{other terms}, \tag{37}\] where the first term on the right hand is the change of surface area due to \(h_{2}\) and the other terms are the change of surface area due to \(h_{3}\) and cross terms of \(h_{2}\) and \(h_{3}\). Applying the equipartition theorem to the \(h_{2}\) only terms we find \[\langle c_{n}^{2}\rangle=l_{T}^{2}\frac{1}{L_{x}L_{y}}\frac{1}{\lambda_{n}^{1/ 2}}. 
\tag{38}\] Using the fact that \(\langle c_{m}c_{n}\rangle=\delta_{mn}\langle c_{n}^{2}\rangle\) (see Appendix B.2), finally, we have \[\langle h_{2}^{2}(x)\rangle=l_{T}^{2}\frac{1}{L_{x}L_{y}}\sum_{n=1}^{\infty} \frac{\varphi_{n}^{2}(x)}{\lambda_{n}^{1/2}}. \tag{39}\] Note, alternatively one can derive \(\langle c_{n}^{2}\rangle\) directly from the STFE, which includes time dependence, \[\langle c_{n}^{2}\rangle=\frac{k_{B}T}{\gamma}\frac{1}{L_{x}L_{y}}\frac{1}{ \lambda^{1/2}}(1-\exp(-2C\lambda_{n}t)), \tag{40}\] where \(C=\gamma h_{0}^{3}/(3\mu)\) (see Appendix B.2). At thermal equilibrium this agrees with Eq. (38). #### iii.1.4 Combined fluctuation amplitude The fluctuations combine to give a total variance of \[\langle(h_{2}+h_{3})^{2}\rangle=\langle h_{2}^{2}\rangle+2\langle h_{2}h_{3} \rangle+\langle h_{3}^{2}\rangle. \tag{41}\] where \(\langle h_{2}^{2}\rangle\) is the fluctuation of the bulk, already calculated via thermal-capillary-wave theory, and \(\langle h_{3}^{3}\rangle\) is the fluctuation of the film originating from fluctuations of the contact lines. Notably, since the random variables \(\mathcal{N}(x,t)\), \(f_{1}(t)\) and \(f_{2}(t)\) are uncorrelated, \(\langle h_{2}h_{3}\rangle=0\) (see Appendix B.4). An expression for \(\langle h_{3}^{2}(x)\rangle\), obtained from Eq. (100), can be found in Appendix B.4. #### iii.1.5 Molecular-dynamics simulations The MD simulations are the same as in Section II.1.2 with the exception that we need to pin the contact line. There are several ways to achieve this, for example, by using topographical defects on the solid substrate [65], but here we use the technique described by Kusudo _et. al_[66] using chemical heterogeneity. As shown in Fig. 4, this is achieved by using a hydrophilic wall (blue) beneath the film's equilibrium height (\(h_{0}\)) and a hydrophobic one (red), which is less wettable, above it. The wettability of the walls are tuned by changing the interaction parameters between solid and liquid, \(\epsilon_{sf1}\) and \(\epsilon_{sf2}\)[51]. This results in the position of the contact line following a Gaussian distribution with mean \(h_{0}\), when in thermal equilibrium, as can be seen from Fig. 4 (d). The variance depends on the equilibrium contact angles (i.e. on the \(\epsilon_{sf}\)'s) of the walls and a small variance is preferable to mimic perfect pinning. Our choice of parameters are shown in Table 2. Four different film lengths are tested: Film 4 (\(L_{x}=13.04\) nm), Film 5 (\(L_{x}=25.99\) nm), Film 6 (\(L_{x}=51.29\) nm) and Film 7 (\(L_{x}=102.30\) nm). The film width \(L_{y}=2.94\) nm and the initial film height \(h_{0}=4.85\) nm are the same as in the \(90^{\circ}\)-contact-angle case. The equilibration time \(t_{c}\) can be estimated from Eq. (111). Multiple independent MD simulations are performed (Film 4: 18, Film 5: 10, Film 6: 10 and Film 7: 20). Fig. 5(a) shows the fluctuation amplitudes of the free surface obtained from MD simulations using bins with side length \(1.5\sigma_{ff}\) in the \(x\)-direction and \(1.4\sigma_{ff}\) in the \(y\)-direction, which agree well with the theoretical predictions, Eq. (41). Notably, the largest difference between the MD and theory occurs for the shortest films; an effect we will revisit in Section IV. One can see that the fluctuation amplitudes of films with partially pinned contact lines have a saddle shape with one trough and two crests, symmetric about \(x=L_{x}/2\) as we would expect. 
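The theoretical curves above require the contact-line parameters \(k\) and \(\xi\), which, as described after Eq. (23), are obtained by fitting the contact-line variance and the exponential decay of its autocorrelation. The sketch below illustrates that fit on synthetic data generated from the Langevin model itself; the values of \(k\) and \(\xi\) used to generate the data are invented for illustration and are not the fitted values of this work.

```python
# Sketch: the contact-line model of Eqs. (20)-(23) as an Ornstein-Uhlenbeck process,
# and the fit used to extract k and xi.  The "data" are synthetic; k_true and xi_true
# are invented, illustrative values.
import numpy as np

kB, T = 1.38e-23, 85.0
k_true, xi_true = 0.05, 2.0e-12           # harmonic constant [N/m], friction [kg/s]
dt, nsteps = 1e-13, 400_000

rng = np.random.default_rng(1)
N = np.zeros(nsteps)
for i in range(1, nsteps):                # Euler-Maruyama for  xi dN/dt = -k N + f(t)
    N[i] = N[i-1] - (k_true / xi_true) * N[i-1] * dt \
           + np.sqrt(2 * kB * T * dt / xi_true) * rng.standard_normal()

k_fit = kB * T / np.var(N)                # Eq. (23): <N^2> = kB T / k

lags = np.arange(1, 400)                  # fit the early exponential decay of Eq. (22)
acf = np.array([np.mean(N[:-lag] * N[lag:]) for lag in lags]) / np.var(N)
tau_c = -1.0 / np.polyfit(lags * dt, np.log(acf), 1)[0]
xi_fit = k_fit * tau_c                    # Eq. (22): decay rate is k/xi, so xi = k * tau_c

print(k_fit, xi_fit)                      # should recover k_true and xi_true approximately
```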
In contrast to the previous case of a fixed contact angle, here the fluctuations are almost everywhere lower than those for a periodic film. The amplitudes dip significantly at the wall, due to the pinning effect, but do not reach zero due to the oscillations around the pinning position. Moreover, the positions of the trough and the crests relative to the length of the film are fixed, showing again that the boundary effects propagate across the film. Lastly, as suggested by our theory Eq. (41), the fluctuation amplitudes of the free surface can be attributed to the thermal noise in the bulk \(\sqrt{\langle h_{2}^{3}\rangle}\) and the fluctuations of the contact line \(\sqrt{\langle h_{3}^{3}\rangle}\). From the decomposition of fluctuation amplitudes shown in Fig. 5 (b) one can see that the effect of contact line fluctuations is limited to the region near the boundaries and clearly in this regime the bulk fluctuations are stronger than the contact-line-driven motions. ## III 3D circular bounded thin films Let us now extend our investigation to three-dimensional bounded nanofilms. In 3D, the position of the free surface \(h(\mathbf{x},t)\) is given by the thin-film equation (TFE) [67] as \[\frac{\partial h}{\partial t}=-\frac{\gamma}{3\mu}\nabla\cdot\left(h^{3}\nabla \nabla^{2}h\right), \tag{42}\] and its stochastic version [40; 42; 43] is \[\frac{\partial h}{\partial t}=-\frac{\gamma}{3\mu}\nabla\cdot\left(h^{3} \nabla\nabla^{2}h\right)+\sqrt{\frac{2k_{B}T}{3\mu}}\nabla\cdot\left(h^{3/2} \boldsymbol{\mathcal{N}}\right), \tag{43}\] with thermal noise uncorrelated in space and time: \[\langle\mathcal{N}_{i}(\mathbf{x},t)\mathcal{N}_{j}(\mathbf{x}^{\prime},t^{ \prime})\rangle=\delta_{ij}\delta(\mathbf{x}-\mathbf{x}^{\prime})\delta(t-t^ {\prime}). \tag{44}\] As in quasi-2D, thermal fluctuations drive nanowaves on the free surface. When periodic boundary conditions (i.e. an unconfined film) are considered on a square domain of length \(L\) the perturbation \(\delta h\) to the average film \begin{table} \begin{tabular}{c c c c} Property & Nondim.value & Value & Unit \\ \hline \(\epsilon_{sf1}\) & 0.05 & \(0.0835\times 10^{-21}\) & J \\ \(\epsilon_{sf2}\) & 0.62 & \(1.0354\times 10^{-21}\) & J \\ \(\sigma_{sf1}\) & 0.8 & 0.272 & nm \\ \(\sigma_{sf2}\) & 0.8 & 0.272 & nm \\ \end{tabular} \end{table} Table 2: Simulation parameters and their non-dimensional values (reduced units based on Lennard-Jones potential parameters) for pinned contact lines. Figure 4: (a), (b) and (c) are MD snapshots in a region near the contact line and the chemical heterogeneity of the solid wall, showing the position of the contact line below, on and above the proposed pinning point respectively. Blue particles denote hydrophilic wall atoms and red particles denote the hydrophobic wall atoms. Liquid Argon atoms are in yellow. (d) shows a histogram of contact line position extracted from a single realization of MD simulations. height can be decomposed into Fourier modes and the fluctuation amplitude is given by \[\langle\delta h^{2}\rangle=\frac{l_{T}^{2}}{\pi^{2}}\sum_{m=1}^{\infty}\sum_{n=1}^ {\infty}\frac{1}{m^{2}+n^{2}}, \tag{45}\] where \(m\) and \(n\) are wavenumbers in the \(x\)-direction and the \(y\)-direction. Unlike quasi-2D, this summation is unbounded and therefore an upper limit to the wavenumbers \(m\) and \(n\) is required. 
A natural choice is to consider a 'cut-off' length scale \(\ell_{c}\), such that wave modes with length scale less than \(\ell_{c}\) are ignored [49; 68; 69] \[\langle\delta h^{2}\rangle=\frac{l_{T}^{2}}{\pi^{2}}\sum_{m=1}^{m<L/\ell_{c}} \sum_{n=1}^{n<L/\ell_{c}}\frac{1}{m^{2}+n^{2}}, \tag{46}\] A discussion on the significance of the requirement for a cut-off length scale is provided in Section IV. If \(L/\ell_{c}\gg 1\), which is not unreasonable given \(\ell_{c}\) will be on the molecular scale, the summation Eq. (46) can be approximated by \[\langle\delta h^{2}\rangle\approx l_{T}^{2}\ln\frac{L}{\ell_{c}}, \tag{47}\] showing a logarithmic growth of the fluctuation amplitude with the length of the domain \(L\). This growth, although much slower than the linear growth in Eq. (4) for the quasi-2D periodic boundary case, is nevertheless unbounded as \(L\to\infty\). Let us now consider how the analysis is modified when we have confined 3D films. In particular, we choose to confine liquid films in circular domains by solid walls as illustrated in Fig. 6. As in quasi-2D, we apply two different boundary conditions: (i) \(90^{\circ}\) contact angle and (ii) pinned contact line. ### Prescribed Angle at \(90^{\circ}\) It is natural to conduct the analysis in cylindrical coordinates \((r,\theta,z)\), with a thin film \(h=h(r,\theta,t)\) of equilibrium height \(h_{0}\) confined by an impermeable wall at \(r=a\). Then, a prescribed \(90^{\circ}\) contact angle corresponds to \[\left.\frac{\partial h}{\partial r}\right|_{r=a}=0,\qquad\theta\in[0,2\pi). \tag{48}\] Figure 6: An illustration of the geometry of the 3D circular thin films (half to show cross section). Figure 5: Standard deviation of fluctuations for the films with partially pinned contact lines (black: \(L_{x}=13.04\) nm, blue: \(L_{x}=25.99\) nm, green: \(L_{x}=51.29\) nm, red: \(L_{x}=102.30\) nm): (a) shows a comparison of MD (dashed lines with circles), theory via Eq. (41) (solid lines) and the classic thermal-capillary-wave theory for periodic films (dashed lines) predicted by Eq. (4); (b) shows the decomposition of theory Eq. (41), where the dashed lines represent the amplitudes of fluctuations caused by thermal noise in bulk, the dotted lines represent the amplitudes of fluctuations caused by noise on the contact lines and the solid lines represent the full fluctuation amplitudes. and impermeability of the wall is \[\nabla\nabla^{2}h\big{|}_{r=a}\cdot\hat{\mathbf{r}}=0,\qquad\theta\in[0,2\pi). \tag{49}\] The impermeability condition corresponds to a projection of the flux onto the direction normal to the wall, given by \(\hat{\mathbf{r}}\), which is a unit vector in \(r\). Linearizing the 3D TFE, Eq. (42), and solving the eigenvalue problem with the above boundary conditions (Eq. (48) and Eq. (49)), we obtain the following wave modes: (see Appendix C.1) \[\Upsilon^{1}_{n,\alpha}(r,\theta) =\cos(n\theta)\chi_{n,\alpha}(r), \tag{50}\] \[\Upsilon^{2}_{n,\alpha}(r,\theta) =\sin(n\theta)\chi_{n,\alpha}(r). \tag{51}\] Here \[\chi_{n,\alpha}(r)=J_{n}(\omega_{n,\alpha}r),\qquad n=0,1,\ldots, \tag{52}\] are the wave modes in \(r\). \(J_{n}\) is the \(n\)th Bessel function of the first kind. The dispersion relation \[J_{n}^{\prime}(\omega a)=0,\qquad n=0,1,\ldots \tag{53}\] where \({}^{\prime}\) denotes a derivative, is derived from solving the eigenvalue problem; from the dispersion relation we obtain the frequencies \(\{\omega_{n,\alpha}:\alpha=1,2,\ldots\}\). Fig. 
7 gives an illustration of \(\chi_{n,\alpha}(r)\) with different \(n\) and \(\alpha\). One can see that the \(90^{\circ}\)-contact-angle condition is satisfied. As \(n\) increases the position of the first crest gets further away from the origin and there is an expanded region in which the wave mode's amplitude is negligible. #### iii.3.1 Thermal-capillary-wave theory The free surface can be written in terms of the wave modes: \[h(r,\theta,t)=h_{0}+h_{4}(r,\theta,t), \tag{54}\] where the perturbation \(h_{4}\) is \[h_{4}(r,\theta,t)=\sum_{n=0}^{\infty}\sum_{\alpha=1}^{\infty} \Bigl{[}A_{n,\alpha}(t)\Upsilon^{1}_{n,\alpha}(r,\theta)\] \[\qquad+B_{n,\alpha}(t)\Upsilon^{2}_{n,\alpha}(r,\theta)\Bigr{]}. \tag{55}\] Then we can calculate the energy of a perturbed surface (see Appendix C.2) \[E =\gamma\Biggl{(}\int_{0}^{2\pi}\int_{0}^{a}\sqrt{1+\left(\frac{ \partial h}{\partial r}\right)^{2}+\frac{1}{r^{2}}\left(\frac{\partial h}{ \partial\theta}\right)^{2}}rdrd\theta-\pi a^{2}\Biggr{)}\] \[\approx\frac{\gamma}{2}\int_{0}^{2\pi}\int_{0}^{a}\left(\left( \frac{\partial h}{\partial r}\right)^{2}+\frac{1}{r^{2}}\left(\frac{\partial h }{\partial\theta}\right)^{2}\right)rdrd\theta\] \[=\gamma\pi\sum_{\alpha=1}^{\infty}A_{0,\alpha}^{2}S_{0,\alpha}+ \frac{\gamma\pi}{2}\sum_{n=1}^{\infty}\sum_{\alpha=1}^{\infty}(A_{n,\alpha}^ {2}+B_{n,\alpha}^{2})S_{n,\alpha}, \tag{56}\] where \[S_{n,\alpha}=\frac{1}{2}(\omega_{n,\alpha}^{2}a^{2}-n^{2})J_{n}^{2}(\omega_{n,\alpha}a). \tag{57}\] Since the wave modes are orthogonal to each other (see Appendix C.2) and the energy is quadratic in the amplitudes, we can use the equipartition theorem to give \[\frac{k_{B}T}{2}=\gamma S_{0,\alpha}\langle A_{0,\alpha}^{2}\rangle \tag{58}\] and \[\frac{k_{B}T}{2}=\frac{\gamma\pi}{2}S_{n,\alpha}\langle A_{n,\alpha}^{2} \rangle=\frac{\gamma\pi}{2}S_{n,\alpha}\langle B_{n,\alpha}^{2}\rangle, \tag{59}\] so that the (position-dependent) variance of fluctuations is given by \[\langle h_{4}^{2}(r,\theta)\rangle =\frac{k_{B}T}{\gamma}\frac{1}{\pi}\Bigl{[}\sum_{\alpha=1}^{ \infty}\frac{1}{2S_{0,\alpha}}\chi_{0,\alpha}^{2}(r)\] \[+\sum_{n=1}^{\infty}\sum_{\alpha=1}^{\infty}\frac{1}{S_{n,\alpha }}\chi_{n,\alpha}^{2}(r)\Bigr{]}. \tag{60}\] Note that the variance is only \(r\) dependent, since periodicity in \(\theta\) eliminates variations. Asymptotic analysis shows that \(S_{n,\alpha}\) increases with \(n\) linearly (see Appendix C.2). So the summation over \(n\) in Eq. (60) diverges as \(n\to\infty\) and a cut-off for smallest length scale \(\ell_{c}\) should be introduced, as we've already seen for the periodic (i.e. unbounded) 3D film Eq. (46). Applying a cut-off length scale in polar coordinates is non-trivial, as the modes in the \(r\) and the \(\theta\) directions differ, in contrast to the Cartesian case, and the radial wavemodes have non-trivial form. To do so, we define the length scale of a wave mode \(f_{n,\alpha}(r,\theta)\) as \[\mathcal{L}(f_{n,\alpha})=\frac{\max_{r\in[0,a],\theta\in[0,2\pi]}|f_{n,\alpha} (r,\theta)|}{\max_{r\in[0,a],\theta\in[0,2\pi]}|\nabla f_{n,\alpha}(r,\theta)|}, \tag{61}\] where \(||\) is the absolute value and \(\nabla\) is the gradient operator. For simplicity, we denote the length scale of the wave mode for the \(90^{\circ}\)-contact-angle case as \(L_{n,\alpha}^{90}=\mathcal{L}(\Upsilon_{n,\alpha}^{1})\), and one can easily check that \(\mathcal{L}(\Upsilon_{n,\alpha}^{1})=\mathcal{L}(\Upsilon_{n,\alpha}^{2})\). 
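For concreteness, the frequencies \(\omega_{n,\alpha}\) follow directly from the zeros of \(J_{n}^{\prime}\), and the length scale \(\mathcal{L}\) of Eq. (61) can then be evaluated on a radial grid. The sketch below does both for a few modes; it assumes SciPy's standard Bessel routines and a representative film radius, and is not code from the paper.

```python
# Sketch: frequencies from the dispersion relation Eq. (53) and the mode length scale
# of Eq. (61), evaluated numerically for the 90-degree case.
import numpy as np
from scipy.special import jnp_zeros, jv, jvp

a = 23.39e-9                               # film radius [m] (representative value)
r = np.linspace(a / 2000, a, 2000)         # avoid r = 0 in the 1/r term

def mode_length_scale(n, alpha):
    """L(Upsilon_{n,alpha}) = max|f| / max|grad f| for f = cos(n theta) J_n(omega r)."""
    omega = jnp_zeros(n, alpha)[-1] / a    # alpha-th root of J_n'(omega a) = 0
    f = jv(n, omega * r)
    dfdr = omega * jvp(n, omega * r)       # radial component of the gradient
    dfdth = n * f / r                      # azimuthal component (1/r) df/dtheta, theta-envelope
    grad_max = np.max(np.maximum(np.abs(dfdr), np.abs(dfdth)))
    return np.max(np.abs(f)) / grad_max

for n in (0, 1, 5):
    for alpha in (1, 2, 5):
        print(n, alpha, mode_length_scale(n, alpha))   # shrinks for higher modes
```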
We then introduce a threshold function \[Z^{90}(\ell_{c},n,\alpha)=\begin{cases}1,&\text{ if }L_{n,\alpha}^{90}\geq \ell_{c}\\ 0,&\text{ if }L_{n,\alpha}^{90}<\ell_{c}\end{cases}, \tag{62}\] to identify wave modes with length scale greater than a chosen \(\ell_{c}\). Finally, we apply the cut-off to Eq. (60) \[\langle h_{4}^{2}(r,\theta)\rangle =\frac{k_{B}T}{\gamma}\frac{1}{\pi}\Big{[}\sum_{\alpha=1}^{ \infty}\frac{Z^{90}(\ell_{c},0,\alpha)}{2S_{0,\alpha}}\chi_{0,\alpha}^{2}(r)\] \[+\sum_{n=1}^{\infty}\sum_{\alpha=1}^{\infty}\frac{Z^{90}(\ell_{c },n,\alpha)}{S_{n,\alpha}}\chi_{n,\alpha}^{2}(r)\Big{]} \tag{63}\] which regularizes the unbounded sum. The effect of cut-offs will be further discussed in Section IV. #### ii.2.2 Molecular-dynamics simulations The setup of the MD simulations is very similar to before, except for geometry. The cylindrical side wall is joined by a circular base, both 5 layers of Platinum atoms in fcc, to form a 'cup'. An equilibrated \(h_{0}=2.5\) nm thick Argon liquid film is then placed on top of the base with a 0.17 nm gap, as in Section II.1.2. Equilibrated Argon vapor is then placed on top of the liquid. Non-periodic boundary conditions with reflective walls are applied at the top, as a result the vapor can not escape the cup. Fig. 8 (a) gives a snapshot of the MD simulation. The position of the free surface is measured using the number density and binning technique, with a circular mesh used to reduce errors near the wall. Calculation of number density and identification of liquid Argon particles are the same as for quasi-2D. To define the vertical bins, we: (i) choose the number of layers \(N_{l}\), (ii) create a center bin as a circle with radius \(r_{m}=a/(2N_{l}-1)\) and area \(\pi r_{m}^{2}\), (iii) ensure bins in the other layers are rings with width of \(2r_{m}\), (iv) divide each ring equally into tiles such that the area of each tile is also \(\pi r_{m}^{2}\). Fig. 8 (b) shows an illustration of the circular mesh with \(N_{L}=12\). The characteristic length scale of the mesh is given by the square root of the area of the tiles \(L_{m}=\sqrt{\pi r_{m}^{2}}\). The parameters for the MD simulations are still given in Table 1 and we confirm that the average contact angle remains at 90 degrees. Films with two different radii are tested: Film 8 (\(a=11.81\) nm) and Film 9 (\(a=23.39\) nm). The average height \(h_{0}\) of the free surface is measured to be 2.47 nm. The fluctuation amplitudes extracted from MD simulations are averaged over \(\theta\) si Figure 8: Snapshots of MD simulations for a 3D circular film: (a) shows the cross section, (b) shows the top view with the circular mesh used for extracting the free surface position. Figure 9: Standard deviation of fluctuations for \(90^{\circ}\)-contact-angle 3D circular films with two different radii (black: \(a=11.81\) nm, red: \(a=23.39\) nm). MD simulation results (dashed lines with circles) and theory Eq. (63) (solid lines) are normalized by the thermal scale \(l_{T}\). pendent. Fig. 9 shows the fluctuation amplitudes of 3D circular bounded films with a \(90^{\circ}\) contact angle (normalized by the thermal length scale \(l_{T}\)) obtained from MD simulations and compared with the theoretical prediction Eq. (63). The smallest length scale allowed in the theory is chosen to be \(\ell_{c}=\sigma_{\mathit{ff}}\) and the length scale of the circular mesh for MD is chosen proportionally \(L_{m}=1.77\ell_{c}\) (i.e. \(r_{m}=\sigma_{\mathit{ff}}\)). One can see that the MD results agree well with Eq. 
(63) for both films, while the agreement improves as the film gets larger. The MD results indicate that similar to the quasi-2D films with \(90^{\circ}\) contact angle, the minimum of the fluctuation amplitude is found at the center of the film. This is because the first crest of wave modes for \(n\geq 1\) get pushed further away from the origin as \(n\) increases, shown in Fig. (7), distributing less energy to the center and more energy towards the boundary. One can also observe oscillations in the theory near the origin, which is absent in MD simulation results. This could indicate that a better cut-off mechanism is needed near the singularity (\(r=0\)), or MD resolutions may need to be increased to capture the oscillation; both worthy of future investigation. ### Pinned contact lines Next, we consider the case where the contact line is pinned onto the wall. As mentioned in Section II.2, in practice the contact line will always oscillate in MD simulations. However, due to the complexity in 3D, and the relatively less prominent influence of contact line fluctuations previously observed, the theory we develop will only consider the contact line being pinned perfectly onto the wall. It will be interesting in future work to explore the oscillation of the contact line in 3D. The pinned-contact-line boundary condition for the 3D circular film is given by \[h(a,\theta)=h_{0},\qquad\theta\in[0,2\pi]. \tag{64}\] Together with Eq. (42) and Eq. (49) we can obtain the appropriate wave modes (see Appendix D.1) \[\Psi^{1}_{n,\alpha}(r,\theta) =\cos(n\theta)\psi_{n,\alpha}(r), \tag{65}\] \[\Psi^{2}_{n,\alpha}(r,\theta) =\sin(n\theta)\psi_{n,\alpha}(r). \tag{66}\] Here \[\psi_{n,\alpha}(r)=J_{n}(\zeta_{n,\alpha}r)-\frac{J_{n}(\zeta_{n,\alpha}a)}{I_{n}(\zeta_{n,\alpha}a)}I_{n}(\zeta_{n,\alpha}r),\] \[n=0,1,\ldots \tag{67}\] are the wave modes in \(r\). \(I_{n}\) is the \(n\)th modified Bessel function of first kind. The frequencies \(\{\zeta_{n,\alpha}:\alpha=1,2,\ldots\}\) are obtained from a dispersion relation \[2nJ_{n}(\zeta a)I_{n}(\zeta a) +\zeta a\Big{[}J_{n}(\zeta a)I_{n+1}(\zeta a)\] \[-J_{n+1}(\zeta a)I_{n}(\zeta a)\Big{]}=0, \tag{68}\] derived from the eigenfunction problem. Fig. 10 gives an illustration of \(\psi_{n,\alpha}(r)\) with different \(n\) and \(\alpha\). The pinned boundary condition is satisfied and the distance between the origin and the first crest still increases with \(n\). #### iii.2.1 Thermal-capillary-wave theory If we perturb the free surface from the mean profile \(h_{0}\) we obtain \[h(r,\theta,t)=h_{0}+h_{5}(r,\theta,t) \tag{69}\] where the perturbation \(h_{5}(r,\theta,t)\) can be decomposed into wave modes: \[h_{5}(r,\theta,t)=\sum_{n=0}^{\infty}\sum_{\alpha=1}^{\infty} \Big{[}C_{n,\alpha}(t)\Psi^{1}_{n,\alpha}(r,\theta)\] \[+D_{n,\alpha}(t)\Psi^{2}_{n,\alpha}(r,\theta)\Big{]}. \tag{70}\] Following the same procedure as before, we find that the energy required to perturb the surface is given by (details see Appendix D.2) \[E=\gamma\pi\sum_{\alpha=1}^{\infty}A_{0,\alpha}^{2}K_{0,\alpha}+\frac{\gamma \pi}{2}\sum_{n=1}^{\infty}\sum_{\alpha=1}^{\infty}(C_{n,\alpha}^{2}+D_{n, \alpha}^{2})K_{n,\alpha}, \tag{71}\] Figure 10: Wave modes \(\psi_{n,\alpha}(r)\) for the circular bounded 3D film with a pinned contact line. \(n\) is the wave number in \(\theta\) and \(\alpha\) is the wave number in \(r\). \(\alpha=2\) for black lines and \(\alpha=5\) for red lines. 
where \[K_{n,\alpha} =\frac{1}{2}\omega_{n,\alpha}^{2}a^{2}\Bigg{(}-J_{n-1}(\omega_{n, \alpha}a)J_{n+1}(\omega_{n,\alpha}a)\] \[+I_{n-1}(\omega_{n,\alpha}a)I_{n+1}(\omega_{n,\alpha}a)\frac{J_{n} ^{2}(\omega_{n,\alpha}a)}{I_{n}^{2}(\omega_{n,\alpha}a)}\Bigg{)} \tag{72}\] Assuming the wave modes are uncorrelated, we can apply the equipartition theorem: \[\frac{k_{B}T}{2}=\gamma K_{0,\alpha}\langle C_{0,\alpha}^{2}\rangle, \tag{73}\] \[\frac{k_{B}T}{2}=\frac{\gamma\pi}{2}K_{n,\alpha}\langle C_{n,\alpha}^{2} \rangle=\frac{\gamma\pi}{2}K_{n,\alpha}\langle D_{n,\alpha}^{2}\rangle, \tag{74}\] to find the variance of the fluctuations: \[\langle h_{5}^{2}(r,\theta)\rangle =\frac{k_{B}T}{\gamma}\frac{1}{\pi}\Big{[}\sum_{\alpha=1}^{ \infty}\frac{1}{2K_{0,\alpha}}\psi_{0,\alpha}^{2}(r) \tag{75}\] \[+\sum_{n=1}^{\infty}\sum_{\alpha=1}^{\infty}\frac{1}{K_{n,\alpha }}\psi_{n,\alpha}^{2}(r)\Big{]}. \tag{76}\] The value of \(K_{n,\alpha}\) increases with \(n\) linearly (see Appendix D.2) so that, as expected, the fluctuation amplitude diverges and a cut-off for length scale \(\ell_{c}\) should be introduced. The length scales of the wave modes for the pinned case are again calculated by Eq. (61) and now denoted by \(L_{n,\alpha}^{p}=\mathcal{L}(\Psi_{n,\alpha}^{1})=\mathcal{L}(\Psi_{n,\alpha}^ {2})\). Introducing the threshold function \[Z^{p}(\ell_{c},n,\alpha)=\begin{cases}1,&\text{ if }L_{n,\alpha}^{p}\geq\ell_{c} \\ 0,&\text{ if }L_{n,\alpha}^{p}<\ell_{c}\end{cases}, \tag{77}\] and we can write the regularized sum as \[\langle h_{5}^{2}(r,\theta)\rangle =\frac{k_{B}T}{\gamma}\frac{1}{\pi}\Big{[}\sum_{\alpha=1}^{ \infty}\frac{Z^{p}(\ell_{c},0,\alpha)}{2K_{0,\alpha}}\psi_{0,\alpha}^{2}(r)\] \[+\sum_{n=1}^{\infty}\sum_{\alpha=1}^{\infty}\frac{Z^{p}(\ell_{c},n,\alpha)}{K_{n,\alpha}}\psi_{n,\alpha}^{2}(r)\Big{]}. \tag{78}\] The choice of cut-offs will be further discussed in Section IV. #### iii.2.2 Molecular-dynamics simulations The geometry of the MD simulations is set and the position of the free surface is measured using the same methods described in Section III.1.2. MD parameters from Table 2 are used. The rest of the MD settings are the same as in Section II.1.2. Films with two different radii are tested: Film 10 (\(a=11.81\) nm) and Film 11 (\(a=23.39\) nm). Fig. 11 shows the spatial variation of fluctuation amplitudes of the 3D circular films with a pinned contact line. The figure compares the theoretical predictions of the fluctuation amplitudes Eq. (78) (by setting cut off length scale \(\ell_{c}=\sigma_{f\!f}\)) with the MD results (obtained with a circular binning mesh of characteristic length scale \(L_{m}=1.77\ell_{c}\)) ; again the overall agreement is very good, with improvements for larger films. The theoretical predictions also exhibit oscillations near the origin, whereas the MD results do not. This might suggest a singularity in the solution is requiring better cut-off mechanism, or a low MD resolution failed to detect the oscillations, as stated in Section III.1.2. Similar to the quasi-2D films with partially pinned contact lines, the MD results exhibit a trough at the center (\(r=0\) and \(x=L_{x}/2\)) and a crest before reaching the boundary. ## IV Discussion on minimum length scales We have seen that our theoretical models for 3D films require a minimum length scale to be defined in order for predictions to be made; a length-scale 'cut-off' is needed. 
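The need for such a cut-off is already visible in the simplest, periodic estimate of Eq. (46): the truncated double sum keeps growing as the smallest retained length scale is reduced, roughly logarithmically as anticipated by Eq. (47). A small numerical illustration (a sketch, not from the paper):

```python
# Sketch: cut-off dependence of the periodic 3D estimate, Eq. (46).
import numpy as np

def truncated_sum(M):                      # M ~ L / ell_c
    m = np.arange(1, M)[:, None]
    n = np.arange(1, M)[None, :]
    return np.sum(1.0 / (m**2 + n**2)) / np.pi**2   # <dh^2> / l_T^2 in Eq. (46)

for M in (10, 100, 1000):
    print(M, truncated_sum(M))             # increments are near-constant per decade of M
```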
For the results presented, where we have compared to Molecular Dynamics, this cut-off was chosen on a physical basis, coinciding with the Lennard-Jones length-scale parameter \(\sigma_{f\!f}\). Willis _et. al_[33] observed, in MD simulation, rapid attenuation of the fluctuation strength of thermal capillary waves (on a 2D film) at scales beneath \(\sigma_{f\!f}\), and so this was a natural first choice. However, in the case of MD, there is another length scale that might influence the results: the bin size over which measurements are spatially averaged. A more so Figure 11: Standard deviation of fluctuations for pinned-contact-line 3D circular films with two different radii (black: \(a=11.81\) nm, red: \(a=23.39\) nm). MD simulation results (dashed lines with circles) and theory Eq. (78) (solid lines) are normalized by thermal scale \(l_{T}\). phisticated choice of cut-off for our theoretical model should, then, be either bin size or \(\sigma_{\mathit{ff}}\), whichever is larger. Up to now, coincidentally, they have been equal. To test if predictions from MD are indeed modified by the bin size when it is larger than \(\sigma_{\mathit{ff}}\), and that our theoretical model with an appropriately adjusted cut-off predicts this, we have performed the data processing presented in Figure 12. It shows that, indeed, the overall fluctuation strength of the film in MD is influenced by the choice of bin size, and that this is well captured by the theory with a bin-size cut-off. Unfortunately, we were unable to perform simulations with bin sizes smaller than \(\sigma_{\mathit{ff}}\), as done in Willis _et. al_ Willis et al. (2013), where we might expect the effect of bin size to disappear, revealing a minimum scale comparable with \(\sigma_{\mathit{ff}}\). We leave this for clarification in follow-on work. As these results indicate, be it physical (\(\sigma_{\mathit{ff}}\)) or numerical (bin size), the results from MD are affected by a minimum length scale. While we originally introduced the 'cut-off' out of necessity for a bounded sum in our theoretical model for 3D films, this discussion implies that introducing a cut-off in the theoretical model for quasi-2D films should still improve the comparison with MD. This comparison is made in Figure 13, and indeed there is a small but noticeable improvement in agreement. ## V Conclusion & future directions In this article, we have uncovered the behavior of confined nanoscale films in thermal equilibrium. These results, in particular the spatial dependence of the fluctuation amplitude, could be validated experimentally, using either scattering techniques Kravchenko et al. (2013); Kravchenko et al. (2013); Kravchenko et al. (2013) or colloid-polymer mixtures to enable optical measurement Kravchenko et al. (2013) - quasi-2D results could be approximated using Hele-Shaw type geometries whilst 3D domains are the norm. Furthermore, the techniques used could be extended to tackle a range of other nanoscale flows including free films, drops or bubbles. Our findings serve to further highlight the accuracy of fluctuating hydrodynamics to describe nanoscale fluid phenomena, or, put another way, to reproduce effects seen in molecular simulations at a fraction of the computational cost. 
Moreover, the results presented could provide useful benchmarks for computational schemes intended at describing nanoscale flows and give insight into the choice of cut-off used to regularize the singular noise terms in the stochastic partial differential equations; which can be achieved either using projections onto regular bases Kravchenko et al. (2013); Kravchenko et al. (2013) or just be crudely based on the numerical grid size. Notably, in many cases one is interested primarily in the stability of nanovolumes. For thin films Willis et al. (2013), the importance of thermal fluctuations has been established Kravchenko et al. (2013) but the relation to nano-confinement is yet to be determined; this could also be a direction of future research. ###### Acknowledgements. This work was supported by the EPSRC grants EP/W031426/1, EP/S029966/1, EP/S022848/1, EP/P031684/1, EP/V01207X/1, EP/N016602/1 and the NSFC grant 12202437. Jingbang Liu is supported by a studentship within the UK Engineering and Physical Sciences Research Council-supported Centre for Doctoral Training in modeling of Heterogeneous Systems, Grant No. EP/S022848/1. The authors are grateful to Figure 12: Standard deviation of fluctuation of 3D circular film with radius \(a=23.39\) nm and different boundary conditions: (a) \(90^{\circ}\) contact angle and (b) a pinned contact line. Theoretical predictions Eq. (63) Eq. (78) with different cut-off length scales \(\ell_{c}=\sigma_{\mathit{ff}}\), \(2\sigma_{\mathit{ff}}\), \(3\sigma_{\mathit{ff}}\) are given by solid lines. The circled lines are MD results obtained using different bin sizes \(L_{m}=1.77\ell_{c}\). Dr Ed Brambley for discussions on the orthogonality of eigenfunctions. ## Appendix A Quasi-2D thin film with \(90^{\circ}\) contact angle In this section we layout the technical details for the quasi-2D thin film with \(90^{\circ}\) contact angle. ### Derivation of the wave modes The surface wave can no longer be decomposed into Fourier modes since the boundary conditions are not periodic. To find appropriate wave modes, we first linearize the thin-film equation Eq. (1) and then solve the eigenvalue problem. Consider a perturbation to the free surface of the form \[h(x,t)=h_{0}+\epsilon h_{1}(x)T(t), \tag{10}\] where we anticipate that the perturbation is separable in time \(T(t)\) and space \(h_{1}(x)\), the steady state is a flat free surface \(h=h_{0}\) and \(\epsilon\ll 1\). Apply this to the thin-film equation (1), at the leading order we obtain a linear problem \[\frac{1}{T}\frac{dT}{dt}=-\frac{\gamma h_{0}^{3}}{3\mu h_{1}}\frac{d^{4}h_{1} }{dx^{4}}=-\omega, \tag{11}\] where \(\omega\) is a constant and must be positive for stability. This gives an eigenvalue problem for \(h_{1}(x)\) with corresponding boundary conditions \[\frac{d^{4}h_{1}}{dx^{4}}=\sigma h_{1},\qquad\sigma=\frac{3\mu\omega}{\gamma h _{0}^{3}}\geq 0 \tag{12}\] \[\left.\frac{dh_{1}}{dx}\right|_{x=0}=\left.\frac{dh_{1}}{dx}\right|_{x=L_{x}}= \left.\frac{d^{3}h_{1}}{dx^{3}}\right|_{x=0}=\left.\frac{d^{3}h_{1}}{dx^{3}} \right|_{x=L_{x}}. \tag{13}\] Solving the eigenvalue problem gives the appropriate wave modes \[\phi_{n}(x)=\cos\left((\sigma_{n})^{1/4}x\right),\;n=1,2,\ldots, \tag{14}\] where the associated eigenvalues are \[\sigma_{n}=\left(\frac{n\pi}{L_{x}}\right)^{4},\;n=1,2,\ldots \tag{15}\] Solving Eq. 
(11) for \(T(t)\) we get \[T=T_{0}\exp(-\omega t)=T_{0}\exp\left(-\sigma\frac{\gamma h_{0}^{3}}{3\mu}t \right), \tag{16}\] which gives us an estimate of how fast the perturbation decay and how long it takes for the wave modes to equilibrate. The wave mode with longest length scale takes longest to equilibrate \[t_{e}\approx\frac{3\mu}{\gamma h_{0}^{3}\sigma_{1}}. \tag{17}\] In this subsection we (i) derived the wave modes for quasi-2D \(90^{\circ}\)-contact-angle case and (ii) evaluated the time for the wave modes to equilibrate, which guides us to the runtime of MD simulations. Figure 13: Comparison of fluctuation amplitudes for quasi-2D thin films extracted from MD (circled lines), theoretical predictions considering a cut-off (crossed lines) and without a cut-off (solid lines). (a) \(90^{\circ}\) contact angle, (b) partially pinned contact lines. Black: \(L_{x}=13.04\) nm, blue: \(L_{x}=25.99\) nm, green: \(L_{x}=51.29\) nm, red: \(L_{x}=102.30\) nm. ### Fluctuation amplitude from the STFE Applying Eq. (8) to the STFE Eq. (2), at the leading order we obtain \[\sum_{n=1}^{\infty}\phi_{n}\frac{da_{n}}{dt}=-\frac{\gamma h_{0}^{3}\pi^{4}}{3 \mu L_{x}^{4}}\sum_{n=1}^{\infty}n^{4}a_{n}\phi_{n}+\sqrt{\frac{2k_{B}Th_{0}^{3 }}{3\mu L_{y}}}\frac{\partial\mathcal{N}}{\partial x}. \tag{10}\] The noise is then expanded in the wave modes \(\bar{\phi}_{n}=\sin(n\pi x/L_{x})\), so that \[\mathcal{N}(x,t)=\sum_{m=1}^{\infty}b_{m}(t)\bar{\phi}_{m}(x) \tag{11}\] and using the orthogonality of the \(\bar{\phi}\)'s and noting that \(\int_{0}^{L}\bar{\phi}_{m}^{2}dx=L_{x}/2\) we find \[b_{m}=\frac{2}{L_{x}}\int_{0}^{L_{x}}\bar{\phi}_{m}\mathcal{N}dx. \tag{12}\] This allows us to write an equation for each mode \[\phi_{n}\frac{da_{n}}{dt}=-Cn^{4}a_{n}\phi_{n}+Db_{n}\frac{d\bar{\phi}_{n}}{dx}, \tag{13}\] where we have introduced constants \(A=\frac{\gamma h_{0}^{3}\pi^{4}}{3\mu L_{x}^{2}}\) and \(B=\sqrt{\frac{2k_{B}Th_{0}^{3}}{3\mu L_{y}}}\). We can then rewrite Eq. (13) using an integrating factor to find \[\phi_{n}\frac{d}{dt}\left(a_{n}e^{An^{4}t}\right)=Be^{An^{4}t}b_{n}\frac{d\bar {\phi}_{n}}{dx}. \tag{14}\] Integrating both sides with time and assuming that the initial film is flat, i.e. \(a_{n}(0)=0\), we have \[\phi_{n}a_{n}=Be^{-An^{4}t}\frac{d\bar{\phi}_{n}}{dx}\int_{0}^{t}e^{An^{4}\tau }b_{n}(\tau)d\tau. \tag{15}\] and noting \(\frac{d\bar{\phi}_{n}}{dx}=\frac{n\pi}{L_{x}}\phi_{n}\), we find \[a_{n}(t)=\frac{Bn\pi}{L_{x}}e^{-An^{4}t}\int_{0}^{t}e^{An^{4}\tau}b_{n}(\tau)d\tau. \tag{16}\] Next, using Eq. (3) and Eq. (12) we determine the properties of the noise coefficients \[\langle b_{n}(\tau)b_{n}(s)\rangle = \langle\frac{4}{L_{x}^{2}}\left(\int_{0}^{L_{x}}\bar{\phi}_{n}(x) \mathcal{N}(x,\tau)dx\right) \tag{17}\] \[\times \left(\int_{0}^{L_{x}}\bar{\phi}_{n}(x^{\prime})\mathcal{N}(x^{ \prime},s)dx^{\prime}\right)\rangle\] \[= \frac{4}{L_{x}^{2}}\int_{0}^{L_{x}}\int_{0}^{L_{x}}\bar{\phi}_{n} (x)\bar{\phi}_{n}(x^{\prime})\] \[\times \delta(x-x^{\prime})\delta(\tau-s)dxdx^{\prime}\] \[= \frac{2}{L_{x}}\delta(\tau-s),\] from which we finally obtain \[\langle a_{n}^{2}\rangle = \frac{2B^{2}n^{2}\pi^{2}e^{-2An^{4}t}}{L_{x}^{3}}\int_{0}^{t}e^{2 An^{4}\tau}d\tau \tag{18}\] \[= \frac{B^{2}\pi^{2}(1-e^{-2An^{4}t})}{An^{2}L_{x}^{3}}.\] This gives a time dependent version of \(\langle a_{n}^{2}\rangle\), and as \(t\rightarrow\infty\) (i.e. 
we approach thermal equilibrium) we have \[\langle a_{n}^{2}\rangle=\frac{2k_{B}T}{\gamma\pi^{2}}\frac{L_{x}}{L_{y}}\frac {1}{n^{2}}, \tag{19}\] which agrees with the result from thermal-capillary-wave theory Eq. (13). We can also show that \(\langle a_{m}a_{n}\rangle=\delta_{mn}\langle a_{n}^{2}\rangle\), \[\langle a_{m}(t)a_{n}(t)\rangle=De^{-C(m^{4}+n^{4})t}\] \[\qquad\times\int_{0}^{t}\int_{0}^{t}e^{Cm^{4}s+Cn^{4}\tau}\langle b _{m}(s)b_{n}(\tau)\rangle dsd\tau \tag{20}\] where \[\langle b_{m}(s)b_{n}(\tau)\rangle = \langle\frac{4}{L_{x}^{2}}\left(\int_{0}^{L_{x}}\bar{\phi}_{m}(x) \mathcal{N}(x,s)dx\right)\left(\int_{0}^{L_{x}}\bar{\phi}_{n}(x^{\prime}) \mathcal{N}(x^{\prime},\tau)dx^{\prime}\right)\rangle \tag{102}\] \[= \langle\frac{4}{L_{x}^{4}}\int_{0}^{L_{x}}\int_{0}^{L_{x}} \mathcal{N}(x,s)\mathcal{N}(x^{\prime},\tau)\bar{\phi}_{m}(x)\bar{\phi}_{n}(x^ {\prime})dxdx^{\prime}\rangle\] \[= \frac{4}{L_{x}^{2}}\int_{0}^{L_{x}}\int_{0}^{L_{x}}\langle \mathcal{N}(x,s)\mathcal{N}(x^{\prime},\tau)\rangle\bar{\phi}_{m}(x)\bar{\phi}_ {n}(x^{\prime})dxdx^{\prime}\] \[= \frac{2\delta(s-\tau)}{L_{x}^{2}}\int_{0}^{L}\bar{\phi}_{m}(x^{ \prime})\bar{\phi}_{n}(x^{\prime})dx^{\prime}\] \[= \frac{2\delta(s-\tau)}{L_{x}}\delta_{mn}.\] This shows that the wave modes are uncorrelated and is required in the calculation of Eq. (14). In this subsection we showed that (i) the amplitudes of the wave mode \(\langle a_{n}^{2}\rangle\) can be derived directly from the STFE as a function of time, which agrees with thermal-capillary-wave theory at thermal equilibrium (as \(t\to\infty\)) and (ii) the wave mode are uncorrelated, which is an requirement for the calculation in Eq. (14). ## Appendix B Quasi-2D thin film with partially pinned contact lines In this section we layout the technical details for quasi-2D thin film with partially pinned contact lines. ### Derivation of wave modes The partially pinned boundary condition given by Eq. (19) states that the contact-lines oscillates around the pinned point \(h_{0}\). However, first we derive the wave modes with perfectly pinned boundary condition with the contact-lines pinned at \(h_{0}\). After the same linearization as in Appendix A.1 we arrive at the eigenvalue problem with pinned and no-flux boundary conditions \[\frac{d^{4}h_{2}}{dx^{4}}=\lambda h_{2}, \tag{103}\] \[h_{2}(0)=h_{2}(L_{x})=\frac{d^{3}h_{2}}{dx^{3}}(0)=\frac{d^{3}h_{2}}{dx^{3}}( L_{x})=0. \tag{104}\] The eigenvalue problem gives a general solution [70] \[h_{2}(x) = C_{1}\cosh(\lambda^{1/4}x)+C_{2}\sinh(\lambda^{1/4}x) \tag{105}\] \[+C_{3}\cos(\lambda^{1/4}x)+C_{4}\sin(\lambda^{1/4}x),\] where \(C_{1}\)-\(C_{4}\) are constants to be determined. Substituting in the boundary conditions gives the appropriate wave modes \[\varphi_{n}(x) =\sinh(\lambda_{n}^{1/4}x)+\sin(\lambda_{n}^{1/4}x) \tag{106}\] \[+K\big{(}\cosh(\lambda_{n}^{1/4}x)-\cos(\lambda_{n}^{1/4}x)\big{)}\] where \[K=\frac{\bigg{(}\sinh(\lambda_{n}^{1/4}L_{x})+\sin(\lambda_{n}^{1/4}L_{x}) \bigg{)}}{\bigg{(}\cos(\lambda_{n}^{1/4}L_{x})-\cosh(\lambda_{n}^{1/4}L_{x}) \bigg{)}}. \tag{107}\] The eigenvalues \(\lambda_{n}\) must satisfy \[\cosh(\lambda_{n}^{1/4}L_{x})\cos(\lambda_{n}^{1/4}L_{x})=1, \tag{108}\] which gives us an estimate \[\lambda_{n}\approx\bigg{(}\frac{\pi/2+n\pi}{L_{x}}\bigg{)}^{4},\qquad n=1,2,\ldots \tag{109}\] And for \(\lambda_{0}=0\), we have \[\varphi_{0}(x)=\frac{x}{L_{x}}\left(1-\frac{x}{L_{x}}\right). 
\tag{110}\] Similar to Appendix (A.1) we can estimate the equilibration time \(t_{c}\) by looking at the wave mode with longest length scale apart from \(0\) \[t_{c}=\frac{3\mu}{\gamma h_{0}^{3}\lambda_{1}}. \tag{111}\] Although the wave modes are not orthogonal, their first derivatives are, so that we are able to make analytic progress. To show this, let \({}^{\prime}\) denote \(\partial/\partial x\), then using integration by parts repeatedly we can show \[\lambda_{m}\int_{0}^{L_{x}} \varphi^{\prime}_{m}\varphi^{\prime}_{n}dx=\int_{0}^{L_{x}}\varphi^ {\prime\prime\prime\prime\prime}_{m}\varphi^{\prime}_{n}dx\] \[=[\varphi^{\prime\prime\prime\prime}_{m}\varphi^{\prime}_{n}]_{0}^ {L_{x}}+\int_{0}^{L_{x}}\varphi^{\prime\prime\prime\prime}_{m}\varphi^{\prime \prime}_{n}dx\] \[=[\varphi^{\prime\prime\prime\prime}_{m}\varphi^{\prime}_{n}- \varphi^{\prime\prime\prime}_{m}\varphi^{\prime\prime}_{n}]_{0}^{L_{x}}+\int_{0 }^{L}\varphi^{\prime\prime\prime}_{m}\varphi^{\prime\prime\prime}_{n}dx\] \[=[\varphi^{\prime\prime\prime\prime}_{m}\varphi^{\prime}_{n}- \varphi^{\prime\prime\prime\prime}_{m}\varphi^{\prime\prime}_{n}+\varphi^{ \prime\prime}_{m}\varphi^{\prime\prime\prime}_{n}]_{0}^{L_{x}}\] \[-\int_{0}^{L_{x}}\varphi^{\prime\prime}_{m}\varphi^{\prime\prime \prime\prime}_{n}dx\] \[=[\varphi^{\prime\prime\prime\prime}_{m}\varphi^{\prime}_{n}- \varphi^{\prime\prime\prime\prime}_{m}\varphi^{\prime\prime}_{n}+\varphi^{ \prime\prime}_{m}\varphi^{\prime\prime\prime}_{n}-\varphi^{\prime}_{m}\varphi^ {\prime\prime\prime\prime}_{n}]_{0}^{L_{x}}\] \[+\int_{0}^{L_{x}}\varphi^{\prime}_{m}\varphi^{\prime\prime\prime \prime\prime}_{n}dx\] \[=\lambda_{n}\int_{0}^{L_{x}}\varphi^{\prime}_{m}\varphi^{\prime \prime}_{n}dx, \tag{101}\] where all the boundary terms vanish due to boundary conditions. Since \(\lambda_{m}\neq\lambda_{n}\), we have shown that \(\varphi^{\prime}\) are orthogonal. In this subsection we (i) derived the wave modes \(\varphi_{n}\) for quasi-2D thin films with partially pinned contact lines, (ii) derived the time for the wave modes to reach equilibrium, and (iii) showed that \(\varphi^{\prime}_{n}\) are orthogonal, which is used in derivation of Eq. (37). ### Fluctuation amplitude from the STFE Similar to before, we would like to derive the mean square displacement directly from the stochastic thin-film equation for \(h_{2}\). Considering a perturbation and expanding it in the derived wave modes gives \[h=h_{0}+\sum_{n=0}^{\infty}c_{n}(t)\varphi_{n}(x). \tag{102}\] Using similar arguments and noting that \(\varphi_{n}^{(4)}=\lambda_{n}\varphi_{n}\) we find \[\sum_{n=0}^{\infty}\varphi_{n}\frac{dc_{n}}{dt}=-\frac{\gamma h_{0}^{3}}{3\mu} \sum_{n=1}^{\infty}\lambda_{n}c_{n}\varphi_{n}+\sqrt{\frac{2k_{B}Th_{0}^{3}}{3 \mu L_{y}}}\frac{\partial\mathcal{N}}{\partial x}, \tag{103}\] Note that \(\lambda_{0}=0\), this is why the second term begins with \(n=1\). To continue we need to expand the noise in some basis as well. Since there is already a first derivative in space on \(\mathcal{N}\) we expand the noise with \(\bar{\varphi}_{n}=\varphi^{\prime\prime\prime}_{n}\). Note that \(\varphi^{\prime\prime\prime}_{0}=0\), so \(\bar{\varphi}_{0}=0\). So we can write the noise in terms of the basis \[\mathcal{N}(x,t)=\sum_{m=1}^{\infty}d_{m}(t)\bar{\varphi}_{m}(x) \tag{104}\] and using the orthogonality of \(\bar{\varphi}_{m}\) and noting that \(\int_{0}^{L_{x}}\bar{\varphi}_{m}^{2}dx=L_{x}\) we have \[d_{m}=\frac{1}{L_{x}}\int_{0}^{L_{x}}\bar{\varphi}_{m}\mathcal{N}dx. 
\tag{105}\] We can then rewrite equation (103) in each mode as \[\varphi_{n}\frac{dc_{n}}{dt}=-C\lambda_{n}c_{n}\varphi_{n}+Dd_{n}\frac{d\bar{ \varphi}_{n}}{dx}, \tag{106}\] where we have introduced new constants \(C=\frac{\gamma h_{0}^{3}}{3\mu}\) and \(D=\sqrt{\frac{2k_{B}Th_{0}^{3}}{3\mu L_{y}}}\). For \(\varphi_{0}\) we have \[\frac{dc_{0}}{dt}=0, \tag{107}\] and since the average free surface is flat we have \(c_{0}=0\). We can then solve the ordinary differential equations for \(n\neq 0\) as before to get \[\varphi_{n}c_{n}=De^{-C\lambda_{n}t}\frac{d\bar{\varphi}_{n}}{dx}\int_{0}^{t}e ^{C\lambda_{n}\tau}d_{n}(\tau)d\tau. \tag{108}\] Noting \(\frac{d\bar{\varphi}_{n}}{dx}=\lambda^{1/4}\varphi_{n}\), we have \[c_{n}=D\lambda_{n}^{1/4}e^{-C\lambda_{n}t}\int_{0}^{t}e^{C\lambda_{n}\tau}d_{n} (\tau)d\tau. \tag{109}\] Following equation (3) and (105) we find that \[\langle d_{n}(\tau)d_{n}(s)\rangle =\langle\frac{1}{L_{x}^{2}}\left(\int_{0}^{L_{x}}\bar{\varphi}_{n} (x)\mathcal{N}(x,\tau)dx\right)\] \[\times\left(\int_{0}^{L}\bar{\varphi}_{n}(x^{\prime})\mathcal{N} (x^{\prime},s)dx^{\prime}\right)\rangle\] \[=\frac{1}{L_{x}^{2}}\int_{0}^{L_{x}}\int_{0}^{L_{x}}\bar{\varphi}_ {n}(x)\bar{\varphi}_{n}(x^{\prime})\] \[\times\delta(x-x^{\prime})\delta(\tau-s)dxdx^{\prime}\] \[=\frac{1}{L_{x}}\delta(\tau-s). \tag{110}\] So we can write \[\langle c_{n}^{2}\rangle =\langle D^{2}\lambda_{n}^{1/2}e^{-2C\lambda_{n}t}\int_{0}^{t}e^{C \lambda_{n}\tau}d_{n}(\tau)d\tau\] \[\times\int_{0}^{t}e^{C\lambda_{n}s}d_{n}(s)ds\rangle\] \[=\frac{D^{2}(1-e^{-2C\lambda_{n}t})}{2L_{x}C\lambda_{n}^{1/2}}\] \[=\frac{k_{B}T}{\gamma}\frac{1}{L_{x}L_{y}}\frac{1}{\lambda_{n}^{1/ 2}}(1-e^{-2C\lambda_{n}t}), \tag{111}\] and as \(t\to\infty\) \[\langle c_{n}^{2}\rangle=\frac{k_{B}T}{\gamma}\frac{1}{L_{x}L_{y}}\frac{1}{ \lambda_{n}^{1/2}} \tag{112}\] Similarly we can also show that \[\langle d_{m}(s)d_{n}(\tau)\rangle=\frac{\delta(s-\tau)}{L_{x}}\delta_{mn}, \tag{101}\] and so \[\langle c_{m}(t)c_{n}(t)\rangle =\langle D^{2}\lambda_{m}^{1/4}\lambda_{n}^{1/4}e^{-C(\lambda_{m} +\lambda_{n})t}\] \[\times\int_{0}^{t}\int_{0}^{t}e^{C\lambda_{m}s+C\lambda_{n}\tau} d_{m}(s)d_{n}(\tau)dsd\tau\rangle\] \[=\frac{D^{2}\lambda_{m}^{1/4}\lambda_{n}^{1/4}}{L_{x}}e^{-C( \lambda_{m}+\lambda_{n})t}\] \[\times\int_{0}^{t}e^{C(\lambda_{m}+\lambda_{n})\tau}\delta_{mn}d\tau\] \[=\frac{D^{2}\lambda_{m}^{1/4}\lambda_{n}^{1/4}(1-e^{-C(\lambda_{m }+\lambda_{n})t})}{L_{x}C(\lambda_{m}+\lambda_{n})}\delta_{mn}, \tag{102}\] as expected. In this subsection we (i) derived the amplitude of the wave modes \(\langle c_{n}^{2}\rangle\) directly from the STFE as a function of time, which confirms the result from thermal-capillary-wave theory Eq. (38) at thermal equilibrium (as \(t\rightarrow\infty\) and (ii) showed that the wave modes are uncorrelated, which is used in the derivation Eq. (39) of the fluctuation amplitude \(\langle h_{2}^{2}\rangle\). ### Solving the linearized problem with Langevin motions on the boundaries The problem we are looking to solve is given by \[\frac{\partial h_{3}}{\partial t}=-\frac{\gamma h_{0}^{3}}{3\mu} \frac{\partial^{4}h_{3}}{\partial x^{4}}, \tag{103}\] \[h_{3}(0,t)=N_{1}(t),\,h_{3}(L_{x},t)=N_{2}(t),\] (104) \[\frac{\partial^{3}h_{3}}{\partial x^{3}}(0,t)=\frac{\partial^{3} h_{3}}{\partial x^{3}}(L_{x},t)=0, \tag{105}\] where \(N_{1}(t)\) and \(N_{2}(t)\) are Langevin diffusion process described as \[\xi\frac{dN_{1}}{dt}=-kN_{1}(t)+f_{1}(t), \tag{106}\] \[\xi\frac{dN_{2}}{dt}=-kN_{2}(t)+f_{2}(t). 
\tag{107}\] Here \(\xi\) is the coefficient of friction, \(k\) is the harmonic constant. \(f_{1}(t)\) and \(f_{2}(t)\) are Gaussian noises that satisfy \(\langle f_{1}(s)f_{1}(\tau)\rangle=2\xi k_{B}T\delta(s-\tau)\) and \(\langle f_{2}(s)f_{2}(\tau)\rangle=2\xi k_{B}T\delta(s-\tau)\). This problem can be further divided into two sub-problems \[\frac{\partial h_{31}}{\partial t}=-\frac{\gamma h_{0}^{3}}{3\mu }\frac{\partial^{4}h_{31}}{\partial x^{4}}, \tag{108a}\] \[h_{31}(0,t)=N_{1}(t),\,h_{31}(L_{x},t)=0,\] (108b) \[\frac{\partial^{3}h_{31}}{\partial x^{3}}(0,t)=\frac{\partial^{3} h_{31}}{\partial x^{3}}(L_{x},t)=0, \tag{108c}\] and \[\frac{\partial h_{32}}{\partial t}=-\frac{\gamma h_{0}^{3}}{3\mu }\frac{\partial^{4}h_{32}}{\partial x^{4}}, \tag{109a}\] \[h_{32}(0,t)=0,\,h_{32}(L_{x},t)=N_{2}(t),\] (109b) \[\frac{\partial^{3}h_{32}}{\partial x^{3}}(0,t)=\frac{\partial^{3} h_{32}}{\partial x^{3}}(L_{x},t)=0. \tag{109c}\] Since Eq. (103) is linear it is easy to see that \(h_{3}(x,t)=h_{31}(x,t)+h_{32}(x,t)\) is a solution. Now, \(h_{31}(x,t)\) can be found with the following procedure. Let \(h_{31}(x,t)=w_{1}(x,t)+v_{1}(x,t)\), where \(w_{1}(x,t)=N_{1}(t)(1-x/L_{x})^{2}\). This choice of \(w_{1}(x,t)\) ensures that \[w_{1}(0,t)=N_{1}(t),\,w_{1}(L_{x},t)=0, \tag{110}\] and \[\frac{\partial^{3}w_{1}}{\partial x^{3}}=0. \tag{111}\] So after substituting \(h_{31}(x,t)=w_{1}(x,t)+v_{1}(x,t)\) and Langevin diffusion to Eq. (103), we have \[\frac{\partial v_{1}}{\partial t}+\frac{\partial w_{1}}{\partial t}=-\frac{ \gamma}{3\mu}h_{0}^{3}\left(\frac{\partial^{4}v_{1}}{\partial x^{4}}\right) \tag{112}\] \[v_{1}(0,t)=v_{1}(L_{x},t)=0, \tag{113}\] \[\frac{\partial^{3}v_{1}}{\partial x^{3}}(0,t)=\frac{\partial^{3}v_{1}}{ \partial x^{3}}(L,t)=0, \tag{114}\] \[N_{1}(0)\left(1-\frac{x}{L_{x}}\right)^{2}+v_{1}(x,0)=0. \tag{115}\] We then expand \(v_{1}(x,t)\) and \(\partial w_{1}(x,t)/\partial t\) in wave modes \(\varphi_{n}(x)\) corresponding to the pinned contact line problem since it is a Dirichlet type boundary condition \[v_{1}(x,t)=\alpha_{10}(t)\varphi_{0}(x)+\sum_{i=1}^{\infty}\alpha_{1i}(t) \varphi_{i}(x), \tag{116}\] \[\frac{\partial w}{\partial t}(x,t)=\beta_{10}(t)\varphi_{0}(x)+\sum_{i=1}^{ \infty}\beta_{1i}(t)\varphi_{i}(x). \tag{117}\] Since the expression for \(\frac{\partial w}{\partial t}\) is known, we should be able to calculate \(\beta_{1i}(t)\) explicitly. However, since \(\varphi_{i}(x)\) are not orthogonal we can only expand it with finitely many wave modes and calculate \(\beta_{1i}(t)\) by solving the linear system of finite unknowns. So \[v_{1}(x,t)=\sum_{i=1}^{N}\alpha_{1i}(t)\varphi_{i}(x), \tag{118}\] \[\frac{\partial w}{\partial t}(x,t)=\sum_{i=1}^{N}\beta_{1i}(t)\varphi_{i}(x). \tag{119}\] We then multiply both sides of Eq. (B40) with \(\varphi_{i}(x)\) and integrate w.r.t. 
\(x\) from \([0,L_{x}]\) to get \[\int_{0}^{L_{x}}\frac{\partial w}{\partial t}(x,t)\varphi_{i}(x)dx\] (B41) \[=\int_{0}^{L_{x}}\frac{dN_{1}}{dt}(t)(1-\frac{x}{L})^{2}\varphi_{ i}(x)dx\] (B42) \[=\begin{cases}\frac{dN_{1}}{dt}(t)\frac{L_{x}^{2}}{20},\text{ for }i=0 \\ -\frac{dN_{1}}{dt}(t)\frac{L_{x}^{1/2}}{L_{x}\lambda_{i}^{1/4}}\tan\left(\frac {L_{x}\lambda_{i}^{1/4}}{2}\right),\text{ for odd }i\\ \frac{dN_{1}}{dt}(t)\frac{4}{L_{x}^{2}\lambda_{i}^{3/4}}\left[-2+L_{x}\lambda_ {i}^{1/4}\cot\left(\frac{L_{x}\lambda_{i}^{1/4}}{2}\right)\right],\text{ for even }i \end{cases}\] (B43) \[=\sum_{j=0}^{N}\beta_{1j}(t)\int_{0}^{L_{x}}\phi_{j}(x)\phi_{i}(x )dx\] (B44) \[=\sum_{j=0}^{N}\beta_{1j}(t)G_{ij},\] (B45) where \(G\) is the matrix with \[G_{ij} =\int_{0}^{L_{x}}\varphi_{i}(x)\varphi_{j}(x)dx\] (B46) \[=\begin{cases}\frac{L_{x}^{3}}{30},\text{ for }i=j=0\\ 4\frac{2-L_{x}\lambda_{i}^{1/4}\cot(L_{x}\lambda_{i}^{1/4}/2)}{L_{x}\lambda_{i }^{3/4}},\text{ for }i=0,\text{ }j\text{ is even or }j=0,\text{ }i\text{ is even}\\ \frac{\tan(\lambda_{i}^{1/4}L_{x}/2)(\lambda_{i}^{1/4}L_{x}\tan(\lambda_{i}^{ 1/4}L_{x}/2)+2)}{\lambda_{i}^{1/4}},\text{ for }i=j\text{ and }i\text{ is odd},\\ \frac{\tan(\lambda_{i}^{1/4}L_{x}/2)(\lambda_{i}^{1/4}L_{x}\cot(\lambda_{i}^{ 1/4}L_{x}/2)-2)}{\lambda_{i}^{1/4}},\text{ for }i=j\text{ and }i\text{ is even},\\ \frac{8\lambda_{i}^{1/4}\lambda_{i}^{1/4}(\lambda_{i}^{1/4}\tan(\lambda_{i}^{ 1/4}L_{x}/2)-\lambda_{i}^{1/4}\tan(\lambda_{i}^{1/4}L_{x}/2))}{\lambda_{i}- \lambda_{j}},\text{ for }i\neq j\text{ and both odd}\\ \frac{8\lambda_{i}^{1/4}\lambda_{i}^{1/4}(\lambda_{i}^{1/4}\cot(\lambda_{i}^{ 1/4}L_{x}/2)-\lambda_{i}^{1/4}\cot(\lambda_{i}^{1/4}L_{x}/2))}{\lambda_{i}- \lambda_{j}},\text{ for }i\neq j\text{ and both even}\\ 0,\text{ otherwise}.\end{cases}\] (B47) Then \(\beta_{1i}(t)\) can be solved explicitly in the form of \(u_{1i}\) \[\beta_{1i}(t)=\frac{dN_{1}}{dt}(t)u_{1i}.\] (B48) Substituting in Eq. (B27) we have \[\beta_{1i}(t)=u_{1i}(-\frac{k}{\xi}N_{1}(t)+\frac{1}{\xi}f_{1}(t)).\] (B49) Now substitute Eq. (B39), Eq. (B40) and Eq. (B49) into Eq. (B33) we get \[\sum_{i=0}^{N}\left(\frac{d\alpha_{1i}}{dt}(t)\varphi_{i}(x)+(- \frac{k}{\xi}N_{1}(t)+\frac{1}{\xi}f_{1}(t))u_{1i}\varphi_{i}(x)\right)\] \[=\sum_{i=1}^{N}-\frac{\gamma}{3\mu}h_{0}^{3}\alpha_{1i}(t)\frac{ d^{4}\varphi_{i}}{dx^{4}}(x).\] (B50) Recalling \[\frac{d^{4}\varphi_{i}}{dx^{4}}(x)=\lambda_{i}\varphi_{i}(x),\] (B51) we have for \(\lambda_{0}=0\) \[\left(\frac{d\alpha_{10}}{dt}(t)+u_{10}\frac{dN_{1}}{dt}(t)\right)\varphi_{0}(x)=0. \tag{108}\] and for \(i=1,2,\ldots,N\) \[\frac{d\alpha_{1i}}{dt}+\frac{\gamma}{3\mu}h_{0}^{3}\lambda_{i}\alpha_{1i}=( \frac{k}{\xi}N_{1}(t)-\frac{1}{\xi}f_{1}(t))u_{1i}. \tag{109}\] Denote \(C=\frac{\gamma h_{0}^{3}}{3\mu}\) and multiply both side with \(\exp(C\lambda_{i}t)\) we have \[\frac{d}{dt} (\exp(C\lambda_{i}t)\alpha_{1i}(t))\] \[=u_{1i}(\frac{k}{\xi}N_{1}(t)-\frac{1}{\xi}f_{1}(t))\exp(C \lambda_{i}t), \tag{110}\] integrate both side we have \[\exp(C\lambda_{i}t) \alpha_{1i}(t)-\alpha_{1i}(0)\] \[=u_{1i}\int_{0}^{t}(\frac{k}{\xi}N_{1}(\tau)-\frac{1}{\xi}f_{1}( \tau))\exp(C\lambda_{i}\tau)d\tau. \tag{111}\] Since the initial shape of the free surface is flat we know \(\alpha_{1i}(0)=0\). Then for \(i=0\) we have \[\alpha_{10}(t)=-u_{10}N_{1}(t), \tag{112}\] and for \(i=1,2,\ldots,N\) \[\alpha_{1i}(t) =u_{1i}\exp(-C\lambda_{i}t)\times\] \[\int_{0}^{t}(\frac{k}{\xi}N_{1}(\tau)-\frac{1}{\xi}f_{1}(\tau)) \exp(C\lambda_{i}\tau)d\tau. 
\tag{113}\] Thus we have an expression for \(h_{31}(x,t)\) \[h_{31}(x,t) =\sum_{i=1}^{N}\alpha_{1i}(t)\varphi_{i}(x)\] \[+N_{1}(t)\left(1-\frac{x}{L_{x}}\right)\left(1-\frac{x}{L_{x}}-u_ {10}\frac{x}{L_{x}}\right). \tag{114}\] With the same procedure we can solve for \(h_{32}\) as well \[h_{32}(x,t) =\sum_{i=1}^{N}\alpha_{2i}(t)\varphi_{i}(x)\] \[+N_{2}(t)\frac{x}{L_{x}}\left(\frac{x}{L_{x}}-u_{20}(1-\frac{x}{L _{x}})\right), \tag{115}\] where for \(i=1,2,\ldots,N\) \[\alpha_{2i}(t) =u_{2i}\exp(-C\lambda_{i}t)\times\] \[\int_{0}^{t}(\frac{k}{\xi}N_{2}(\tau)-\frac{1}{\xi}f_{2}(\tau)) \exp(C\lambda_{i}\tau)d\tau, \tag{116}\] and \(u_{2i}\) is obtained similar to Eq. (102). And we can write out \(h_{3}(x,t)\) in the required form \[h_{3}(x,t) =\sum_{i=1}^{N}e_{i}(t)\varphi_{i}(x)\] \[+N_{1}(t)\left(1-\frac{x}{L_{x}}\right)\left(1-\frac{x}{L_{x}}-u_ {10}\frac{x}{L_{x}}\right)\] \[+N_{2}(t)\frac{x}{L_{x}}\left(\frac{x}{L_{x}}-u_{20}(1-\frac{x}{L _{x}})\right), \tag{117}\] where \(e_{i}(t)=\alpha_{1i}(t)+\alpha_{2i}(t)\). In this subsection we solved the linearized TFE with Langevin diffusion motion on the boundaries analytically and give the details of the derivation of Eq. (36). ### Combined fluctuation amplitude We first show that \(\langle h_{2}h_{3}\rangle=0\). From Appendix B.3 we know \(h_{3}(x,t)=h_{31}(x,t)+h_{32}(x,t)\), so \[\langle h_{2}h_{3}\rangle=\langle h_{2}h_{31}\rangle+\langle h_{2}h_{32}\rangle. \tag{118}\] By Eq. (35) and Eq. (114) we have \[\langle h_{2}h_{31}\rangle =\sum_{i=1}^{\infty}\sum_{j=1}^{N}\langle c_{i}(t)\alpha_{1j}(t) \rangle\varphi_{i}(x)\varphi_{j}(x)\] \[+\sum_{i=1}^{\infty}\langle N_{1}(t)c_{i}(t)\rangle\varphi_{i}(x) \left(1-\frac{x}{L_{x}}\right)\] \[\times\left(1-\frac{x}{L_{x}}-u_{10}\frac{x}{L_{x}}\right). \tag{119}\] By Eq. (100) and Eq. (113) we have \[\langle c_{i}(t)\alpha_{1j}(t)\rangle =Du_{1j}\lambda_{i}^{1/4}e^{-C(\lambda_{i}+\lambda_{j})t}\] \[\times\int_{0}^{t}\int_{0}^{t}e^{C(\lambda_{i}\tau+\lambda_{j}s) }\frac{1}{\xi}(k\langle N_{1}(s)d_{i}(\tau)\rangle\] \[-\langle f_{1}(s)d_{i}(\tau)\rangle)d\tau ds \tag{120}\] where \(N_{1}(s)\) is the position of the fluctuating contact-line driven by random force \(f_{1}(s)\) and \(d_{i}(\tau)\) is random flux. By Eq. (100) we have \[\langle N_{1}(t)c_{i}(t)\rangle=D\lambda_{i}^{1/4}e^{-C\lambda_{i}t}\int_{0}^{ t}e^{C\lambda_{i}\tau}\langle d_{i}(\tau)N_{1}(t)\rangle d\tau. \tag{121}\] If we consider that the random force \(f_{1}(s)\) is uncorrelated to random flux \(d_{i}(\tau)\), then by Eq. (121) we have \(\langle c_{i}(t)a_{1j}(t)\rangle=0\) and by Eq. (121) we have \(\langle N_{1}(t)c_{i}(t)\rangle=0\) and thus \(\langle h_{2}h_{31}\rangle=0\). Applying the same argument one can derive that \(\langle h_{2}h_{32}\rangle=0\), and thus \(\langle h_{2}h_{3}\rangle=0\). Now we consider \(\langle h_{3}^{2}\rangle=\langle h_{31}^{2}\rangle+2\langle h_{31}h_{32} \rangle+\langle h_{32}^{2}\rangle\). We first show that \(\langle h_{31}h_{32}\rangle=0\). If we assume that the random forces driving the fluctuations of the contact-lines are uncorrelated \(\langle f_{1}(t)f_{2}(t)\rangle=0\) then the positions of the contact-lines are also uncorrelated \(\langle N_{1}(t)N_{2}(t)\rangle=0\). By Eq. (100) and Eq. 
(101) we have \[\langle h_{31}h_{32}\rangle =\sum_{i=1}^{N}\sum_{j=1}^{N}\langle\alpha_{1i}(t)\alpha_{2j}(t) \rangle\varphi_{i}(x)\varphi_{j}(x)\] \[\quad+\sum_{i=1}^{N}\langle\alpha_{1i}(t)N_{2}(t)\rangle\varphi_{ i}(x)\frac{x}{L_{x}}\left(\frac{x}{L_{x}}-u_{20}(1-\frac{x}{L_{x}})\right)\] \[\quad+\sum_{j=1}^{N}\langle\alpha_{2j}(t)N_{1}(t)\rangle\left(1- \frac{x}{L_{x}}\right)\left(1-\frac{x}{L_{x}}-u_{10}\frac{x}{L_{x}}\right)\] \[\quad+\langle N_{1}(t)N_{2}(t)\rangle\left(1-\frac{x}{L_{x}} \right)\left(1-\frac{x}{L_{x}}-u_{10}\frac{x}{L_{x}}\right)\] \[\quad\times\frac{x}{L_{x}}\left(\frac{x}{L_{x}}-u_{20}(1-\frac{x }{L_{x}})\right) \tag{102}\] \[=0 \tag{103}\] We then consider \(\langle h_{31}^{2}\rangle\). By Eq. (100) we can calculate \[\langle h_{31}^{2}\rangle =\sum_{i=1}^{N}\sum_{j=1}^{N}\langle\alpha_{1i}(t)\alpha_{1j}(t) \rangle\varphi_{i}(x)\varphi_{j}(x)\] \[\quad+2\sum_{i=1}^{N}\langle\alpha_{1i}(t)N_{1}(t)\rangle\varphi_ {i}(x)\left(1-\frac{x}{L_{x}}\right)\] \[\quad\times\left(1-\frac{x}{L_{x}}-u_{10}\frac{x}{L_{x}}\right)\] \[\quad+\langle N_{1}^{2}(t)\rangle\left(1-\frac{x}{L_{x}}\right)^ {2}\left(1-\frac{x}{L_{x}}-u_{10}\frac{x}{L_{x}}\right)^{2}. \tag{104}\] By Eq. (100) we have \[\langle\alpha_{1i}(t)\alpha_{1j}(t)\rangle=\langle\frac{u_{1i}u_ {1j}k^{2}}{\xi^{2}}e^{-A(\sigma_{i}+\sigma_{j})t}\] \[\quad\times\int_{0}^{t}\int_{0}^{t}N_{1}(\tau)N_{1}(s)e^{A(\sigma _{i}\tau+\sigma_{j}s)}d\tau ds\rangle\] \[\quad-\langle\frac{2u_{1i}u_{1j}k}{\xi^{2}}e^{-A(\sigma_{i}+ \sigma_{j})t}\] \[\quad\times\int_{0}^{t}\int_{0}^{t}N_{1}(\tau)f(s)e^{A(\sigma_{i} \tau+\sigma_{j}s)}d\tau ds\rangle\] \[\quad+\langle\frac{u_{1i}u_{1j}}{\xi^{2}}e^{-A(\sigma_{i}+\sigma_ {j})t}\] \[\quad\times\int_{0}^{t}\int_{0}^{t}f(\tau)f(s)e^{A(\sigma_{i}\tau +\sigma_{j}s)}d\tau ds\rangle \tag{105}\] and \[2\langle\alpha_{1i}(t)N_{1}(t)\rangle=\langle\frac{2u_{1i}k}{\xi }e^{-C\sigma_{i}t}\] \[\quad\times\int_{0}^{t}N_{1}(\tau)N_{1}(t)e^{C\sigma_{i}\tau}d\tau\rangle\] \[\quad-\langle\frac{2u_{1i}}{\xi}e^{-C\sigma_{i}t}\int_{0}^{t}f( \tau)N_{1}(t)e^{C\sigma_{i}\tau}d\tau\rangle \tag{106}\] It is well known that for Langevin process \[\langle N_{1}(\tau)N_{1}(s)\rangle=\frac{k_{B}T}{k}e^{-\frac{k}{\xi}|\tau-s|}, \tag{107}\] and \[\langle f(\tau)f(s)\rangle=2\xi k_{B}T\delta(\tau-s). \tag{108}\] But we don't know what \(\langle N_{1}(\tau)f(s)\rangle\) is. By Langevin diffusion equation we know \[\frac{dN_{1}}{dt}=-\frac{k}{\xi}N_{1}(t)+\frac{1}{\xi}f(t), \tag{109}\] so \[\frac{d}{dt}\left(e^{k/\xi t}N_{1}(t)\right)=\frac{1}{\xi}e^{k/\xi t}f(t), \tag{110}\] then \[e^{k/\xi t}N_{1}(t)-N_{1}(0)=\frac{1}{\xi}\int_{0}^{t}e^{k/\xi\tau}f(\tau)d\tau. \tag{111}\] If we let \(N_{1}(0)=0\), we have \[N_{1}(t)=\frac{1}{\xi}e^{-k/\xi t}\int_{0}^{t}e^{k/\xi\tau}f(\tau)d\tau. 
\tag{112}\] Then we have \[\langle N_{1}(\tau)f(s)\rangle =\langle\frac{1}{\xi}e^{-k/\xi\tau}\int_{0}^{\tau}e^{k/\xi\tau}f( )drf(s)\rangle\] \[=\frac{1}{\xi}e^{-k/\xi\tau}\int_{0}^{\tau}e^{k/\xi\tau}\langle f (r)f(s)\rangle dr\] \[=2k_{B}Te^{-k/\xi\tau}\int_{0}^{\tau}e^{k/\xi\tau}\delta(\tau-s)dr\] \[=\begin{cases}2k_{B}Te^{-k/\xi(\tau-s)},\text{ if }\tau>s\\ 0,\text{ if }\tau\leq s\end{cases} \tag{113}\] So with careful evaluation we come to \[\langle h_{31}^{2}(x,t)\rangle=\sum_{k=1}^{9}\mathcal{S}_{k}(x,t), \tag{114}\] where \[\mathcal{S}_{1}(x,t) =\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{k_{B}Tu_{1i}u_{1j}k}{\xi^{2}} \phi_{i}(x)\phi_{j}(x)\] \[\quad\times\left(\frac{1-\exp(-C(\sigma_{i}+\sigma_{j})t)}{(C \sigma_{i}+\frac{k}{\xi})(C(\sigma_{i}+\sigma_{j}))}\right), \tag{115}\] \[\mathcal{S}_{2}(x,t) =-\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{k_{B}Tu_{1i}u_{1j}k}{\xi^{2}} \phi_{i}(x)\phi_{j}(x)\] \[\quad\times\left(\frac{\exp(-(C\sigma_{i}+\frac{k}{\xi})t)-\exp(-C( \sigma_{i}+\sigma_{j})t)}{(C\sigma_{i}+\frac{k}{\xi})(C\sigma_{j}-\frac{k}{\xi})} \right), \tag{116}\] \[\mathcal{S}_{3}(x,t) = \sum_{i=1}^{N}\sum_{j=1}^{N}\frac{k_{B}Tu_{1i}u_{1j}k}{\xi^{2}}\phi _{i}(x)\phi_{j}(x) \tag{111}\] \[\times \left(\frac{1-\exp(-(C\sigma_{j}+\frac{k}{\xi})t)}{(C\sigma_{i}- \frac{k}{\xi})(C\sigma_{j}+\frac{k}{\xi})}\right),\] \[\mathcal{S}_{4}(x,t) = -\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{k_{B}Tu_{1i}u_{1j}k}{\xi^{2}} \phi_{i}(x)\phi_{j}(x) \tag{112}\] \[\times \left(\frac{1-\exp(-C(\sigma_{i}+\sigma_{j})t)}{(C\sigma_{i}- \frac{k}{\xi})(C(\sigma_{i}+\sigma_{j}))}\right),\] \[\mathcal{S}_{5}(x,t) = \sum_{i=1}^{N}\sum_{j=1}^{N}\frac{2k_{B}Tu_{1i}u_{1j}k}{\xi^{2}} \phi_{i}(x)\phi_{j}(x) \tag{113}\] \[\times \left(\frac{1-\exp(-C(\sigma_{i}+\sigma_{j})t)}{(C(\sigma_{i}+ \sigma_{j}))}\right),\] \[\mathcal{S}_{6}(x,t) = \frac{k_{B}T}{k}\left(1-\frac{x}{L}\right)^{2}\left(1-\frac{x}{L }-u_{0}x\right)^{2}, \tag{114}\] \[\mathcal{S}_{7}(x,t) = -\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{4k_{B}Tu_{1i}u_{1j}k}{\xi^{2}} \phi_{i}(x)\phi_{j}(x) \tag{115}\] \[\times \left(\frac{1-\exp(-(C\sigma_{j}+\frac{k}{\xi})t)}{(C\sigma_{i}- \frac{k}{\xi})(C\sigma_{j}+\frac{k}{\xi})}\right),\] \[\mathcal{S}_{8}(x,t) = +\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{4k_{B}Tu_{1i}u_{1j}k}{\xi^{2}} \phi_{i}(x)\phi_{j}(x) \tag{116}\] \[\times \left(\frac{1-\exp(-C(\sigma_{i}+\sigma_{j})t)}{(C\sigma_{i}- \frac{k}{\xi})(C(\sigma_{i}+\sigma_{j}))}\right),\] \[\mathcal{S}_{9}(x,t) = -\sum_{i=1}^{N}\left(\frac{1-\exp(-(C\sigma_{i}+\frac{k}{\xi})t) }{(C\sigma_{i}+\frac{k}{\xi})}\right) \tag{117}\] \[\times \frac{2k_{B}Tu_{1i}}{\xi}\phi_{i}(x)\left(1-\frac{x}{L}\right) \left(1-\frac{x}{L}-u_{0}x\right).\] \(k\) and \(\xi\) can be extracted from MD simulations via Eq. 
(112), and we found that \(C\sigma_{i}+k/\xi\) and \(C(\sigma_{i}+\sigma_{j})\) are always positive for any \(i\) and \(j\), so as \(t\rightarrow\infty\), \(\mathcal{S}_{2}=0\), \(\mathcal{S}_{3}\) merges with \(\mathcal{S}_{7}\), \(\mathcal{S}_{4}\) merges with \(\mathcal{S}_{8}\), and we have \[\langle h_{31}^{2}(x)\rangle = \sum_{i=1}^{N}\sum_{j=1}^{N}\frac{k_{B}Tu_{1i}u_{1j}k}{\xi^{2}(A \sigma_{i}+\frac{k}{\xi})(A(\sigma_{i}+\sigma_{j}))}\phi_{i}(x)\phi_{j}(x) \tag{118}\] \[+ \sum_{i=1}^{N}\sum_{j=1}^{N}\frac{2k_{B}Tu_{1i}u_{1j}}{\xi(A( \sigma_{i}+\sigma_{j}))}\phi_{i}(x)\phi_{j}(x)\] \[+ \frac{k_{B}T}{k}\left(1-\frac{x}{L}\right)^{2}\left(1-\frac{x}{L }-u_{0}x\right)^{2}\] \[- \sum_{i=1}^{N}\sum_{j=1}^{N}\frac{3k_{B}Tu_{1i}u_{1j}k}{\xi^{2}(A \sigma_{i}-\frac{k}{\xi})(A\sigma_{j}+\frac{k}{\xi})}\phi_{i}(x)\phi_{j}(x)\] \[+ \sum_{i=1}^{N}\sum_{j=1}^{N}\frac{3k_{B}Tu_{1i}u_{1j}k}{\xi^{2}(A \sigma_{i}-\frac{k}{\xi})(A(\sigma_{i}+\sigma_{j}))}\phi_{i}(x)\phi_{j}(x)\] \[- \sum_{i=1}^{N}\frac{2k_{B}Tu_{1i}}{\xi(A\sigma_{i}+\frac{k}{\xi}) }\phi_{i}(x)\] \[\times \left(1-\frac{x}{L}\right)\left(1-\frac{x}{L}-u_{0}x\right).\] Following the same derivation one can find that \(\langle h_{32}^{2}(x)\rangle\) is symmetric to \(\langle h_{31}^{2}(x)\rangle\) around \(L_{x}/2\). And so with Eq. (114) we have calculated \[\langle h_{3}^{2}(x)\rangle=\langle h_{31}^{2}(x)\rangle+\langle h_{32}^{2}(x)\rangle. \tag{119}\] In this subsection we (i) showed that perturbation induced by thermal noise in the bulk \(h_{2}\) and perturbation induced by thermal noise on the boundary \(h_{3}\) are uncorrelated under the linearized construction and (ii) calculated the fluctuation amplitude \(\langle h_{3}^{2}(x)\rangle\) and the combined fluctuation amplitude \(\langle(h_{2}(x)+h_{3}(x))^{2}\rangle\), expression Eq. (41) in details. ## Appendix C 3D circular thin film with \(90^{\circ}\) contact angle In this section we layout the technical details for 3D circular thin film with \(90^{\circ}\) contact angle. ### Derivation of wave modes We begin with the linearization of thin-film equation in 3D. Consider a perturbation to the free surface \[h(\mathbf{x},t)=h_{0}+\epsilon h_{4}(\mathbf{x})T(t), \tag{120}\] where \(\epsilon\ll 1\). Apply the perturbation to 3D TFE Eq. (42) and match the leading order we get \[\frac{dT}{dt}h_{1}(\mathbf{x})=-\frac{\gamma h_{0}^{3}}{3\mu}\nabla^{4}h_{4}(\mathbf{x}), \tag{121}\] where \(\nabla^{4}\) is the biharmonic operator. We can then separate the variables to get \[\frac{T^{\prime}}{T}=-\frac{\gamma h_{0}^{3}}{3\mu}\frac{\nabla^{2}\nabla^{2}h_{ 4}}{h_{4}}=-\lambda^{4}, \tag{122}\] where the minus sign in front of \(\lambda^{4}\) guarantees stability, given \(\lambda\) being real number. Since we have a circular thin film it is natural to work in cylindrical coordinate and we arrive at the following eigenvalue problem \[\nabla^{2}\nabla^{2}h_{4}(r,\theta)=\omega^{4}h_{4}(r,\theta) \tag{104}\] where \(\omega^{4}=\lambda^{4}(3\mu)/(\gamma h_{0}^{3})>0\). The general solution of the eigenvalue problem of the biharmonic operator can be obtained in cylindrical coordinate as follows. For the moment let us denote \(h_{4}\) as \(H\) and the eigenvalue problem can be rewritten as \[(\nabla^{2}-\omega^{2})(\nabla^{2}+\omega^{2})H(r,\theta)=0. \tag{105}\] This tells us that \(H=C_{1}H_{1}+C_{2}H_{2}\) where \[\nabla^{2}H_{1}+\omega^{2}H_{1}=0 \tag{106}\] and \[\nabla^{2}H_{2}-\omega^{2}H_{2}=0, \tag{107}\] and \(C_{1}\)\(C_{2}\) are constants. This is easy to see: Eq. 
(105) is equivalent to \[\begin{cases}\nabla^{2}H_{1}(r,\theta)+\omega^{2}H_{1}(r,\theta)=0,\\ \nabla^{2}H(r,\theta)-\omega^{2}H(r,\theta)=H_{1}(r,\theta),\end{cases} \tag{108}\] or \[\begin{cases}\nabla^{2}H_{2}(r,\theta)-\omega^{2}H_{2}(r,\theta)=0,\\ \nabla^{2}H(r,\theta)+\omega^{2}H(r,\theta)=H_{2}(r,\theta).\end{cases} \tag{109}\] For sake of argument, continue with the first case (and later it is easy to see that the two cases are equivalent) - we have already worked out what \(H_{1}\) is. From the equation for \(H\) we can see that \(H\) is the solution of the homogeneous problem plus a special solution, and it is easy to see that \[H=H_{2}-\frac{1}{2\omega^{2}}H_{1}. \tag{110}\] To solve for \(H_{1}\) and \(H_{2}\) we use separation of variables \(H_{1}(r,\theta)=R_{1}(r)\Theta_{1}(\theta)\), \(H_{2}(r,\theta)=R_{2}(r)\Theta_{2}(\theta)\), and we have \[R_{1}^{\prime\prime}\Theta_{1}+\frac{1}{r}R_{1}^{\prime}\Theta_{1}+\frac{1}{r^ {2}}R_{1}\Theta_{1}^{\prime\prime}+\omega^{2}R_{1}\Theta_{1}=0. \tag{111}\] \[R_{2}^{\prime\prime}\Theta_{2}+\frac{1}{r}R_{2}^{\prime}\Theta_{2}+\frac{1}{r^ {2}}R_{2}\Theta_{2}^{\prime\prime}-\omega^{2}R_{2}\Theta_{2}=0. \tag{112}\] Divide by \(R_{1}\Theta_{1}\) (\(R_{2}\Theta_{2}\)) we have \[-\frac{\Theta_{1}^{\prime\prime}}{\Theta_{1}}=r^{2}\frac{R_{1}^{\prime\prime} }{R_{1}}+r\frac{R_{1}^{\prime}}{R_{1}}+\omega^{2}r^{2}=n^{2}. \tag{113}\] \[-\frac{\Theta_{2}^{\prime\prime}}{\Theta_{2}}=r^{2}\frac{R_{2}^{\prime\prime} }{R_{2}}+r\frac{R_{2}^{\prime}}{R_{2}}-\omega^{2}r^{2}=n^{2}. \tag{114}\] Since \(\Theta_{1}\) and \(\Theta_{2}\) must be periodic with \(2\pi\), we have \[\Theta_{1}(\theta)=A_{1}\cos(n\theta)+B_{1}\sin(n\theta), \tag{115}\] \[\Theta_{2}(\theta)=A_{2}\cos(n\theta)+B_{2}\sin(n\theta), \tag{116}\] where \(n\) is a positive integer. And for \(R_{1}\), \(R_{2}\), if we let \(\rho=\omega r\) we have \[R_{1}^{\prime\prime}+\frac{1}{\rho}R_{1}^{\prime}+\left(1-\frac{n^{2}}{\rho^{ 2}}\right)R_{1}=0, \tag{117}\] \[R_{2}^{\prime\prime}+\frac{1}{\rho}R_{2}^{\prime}-\left(1+\frac{n^{2}}{\rho^{ 2}}\right)R_{2}=0, \tag{118}\] and we can immediately see that these are the Bessel function of the first kind and the modified Bessel function of the first kind, so \[R_{1}(\rho)=C_{1}J_{n}(\rho)+D_{1}Y_{n}(\rho), \tag{119}\] \[R_{2}(\rho)=C_{2}I_{n}(\rho)+D_{2}K_{n}(\rho). \tag{120}\] And so \[H_{n}(r,\theta) =(A_{1}\cos(n\theta)+B_{1}\sin(n\theta))\] \[\times(C_{1}J_{n}(\omega r)+D_{1}Y_{n}(\omega r))\] \[+(A_{2}\cos(n\theta)+B_{2}\sin(n\theta))\] \[\times(C_{2}I_{n}(\omega r)+D_{2}K_{n}(\omega r)). \tag{121}\] Since height of the film at the origin is finite, the terms involving \(Y_{n}\) and \(K_{n}\) must be zero, so the general solution is \[H_{n}(r,\theta) =(A_{1}\cos(n\theta)+B_{1}\sin(n\theta))J_{n}(\omega r)\] \[+(A_{2}\cos(n\theta)+B_{2}\sin(n\theta))I_{n}(\omega r), \tag{122}\] where \(n=0,1,\ldots\). Here we don't have to consider negative \(n\) because \[J_{-n}=(-1)^{n}J_{n} \tag{123}\] and \[I_{-n}=I_{n}. \tag{124}\] Applying the \(90^{\circ}\)-contact-angle boundary condition Eq. (48) we obtain \[\Theta_{1}J_{n}^{\prime}(\omega a)+\Theta_{2}I_{n}^{\prime}(\omega a)=0, \tag{125}\] and applying the no-flux boundary condition Eq. (49) gives \[-\Theta_{1}J_{n}^{\prime}(\omega a)+\Theta_{2}I_{n}^{\prime}(\omega a)=0. \tag{126}\] These two conditions tell us the dispersion relation \[J_{n}^{\prime}(\omega a)=0, \tag{127}\] and since \(I^{\prime}_{n}\) is always positive, \[\Theta_{2}=0. 
\tag{101}\] For each \(n\), the dispersion relation gives a list of suitable frequencies \(\{\omega_{n,\alpha}:\,\alpha=1,2,\ldots\}\), and so the wave modes are \[\Upsilon^{1}_{n,\alpha}(r,\theta) =\cos(n\theta)\chi_{n,\alpha}(r), \tag{102}\] \[\Upsilon^{2}_{n,\alpha}(r,\theta) =\sin(n\theta)\chi_{n,\alpha}(r), \tag{103}\] where \[\chi_{n,\alpha}=J_{n}(\omega_{n,\alpha}r),\;n=0,1,\ldots \tag{104}\] In this subsection we (i) derived the general solution to the eigenvalue problem for 3D circular film, which will be used in Eq. (102) and (ii) derived the wave modes for 3D circular film with prescribed \(90^{\circ}\) contact angle. ### Thermal-capillary-wave theory The free energy required to perturb the free surface is given by the product of the surface tension and the surface area created: \[E=\gamma\Bigg{(}\int_{0}^{2\pi}\int_{0}^{a}\sqrt{1+\left(\frac{\partial h}{ \partial r}\right)^{2}+\frac{1}{r^{2}}\left(\frac{\partial h}{\partial\theta }\right)^{2}}rdrd\theta-\pi a^{2}\Bigg{)} \tag{105}\] and assuming small perturbations \[r^{2}\left(\frac{\partial h}{\partial r}\right)^{2}+\left(\frac{\partial h}{ \partial\theta}\right)^{2}\ll 1, \tag{106}\] so that a Taylor expansion gives \[E\approx\frac{\gamma}{2}\int_{0}^{2\pi}\int_{0}^{a}\Bigg{(}\left(\frac{ \partial h}{\partial r}\right)^{2}+\frac{1}{r^{2}}\left(\frac{\partial h}{ \partial\theta}\right)^{2}\Bigg{)}rdrd\theta. \tag{107}\] Applying Eq. (54) and denoting \[\Theta_{m,\alpha}=A_{m,\alpha}(t)\cos(m\theta)+B_{m,\alpha}\sin(m\theta) \tag{108}\] we find \[E =\frac{\gamma}{2}\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}\sum_{ \alpha=1}^{\infty}\sum_{\beta=1}^{\infty}\int_{0}^{2\pi}\int_{0}^{a}\Big{[} \Theta_{m,\alpha}(\theta)\Theta_{n,\beta}(\theta)\] \[\times\chi^{\prime}_{m,\alpha}(r)\chi^{\prime}_{n,\beta}(r)r+ \Theta^{\prime}_{m,\alpha}(\theta)\Theta^{\prime}_{n,\beta}(\theta)\] \[\times\chi_{m,\alpha}(r)\chi_{n,\beta}(r)\frac{1}{r}\Big{]}drd\theta\] \[=\gamma\pi\sum_{\alpha=1}^{\infty}\sum_{\beta=1}^{\infty}A_{0, \alpha}A_{0,\beta}\chi^{\prime}_{0,\alpha}\chi^{\prime}_{0,\alpha}rdr\] \[+\frac{\gamma\pi}{2}\sum_{m=1}^{\infty}\sum_{\alpha=1}^{\infty} \sum_{\beta=1}^{\infty}\int_{0}^{a}(A_{m,\alpha}A_{m,\beta}+B_{m,\alpha}B_{m, \beta})\] \[\times\Big{(}\chi^{\prime}_{m,\alpha}\chi^{\prime}_{m,\beta}r+m^ {2}\chi_{m,\alpha}\chi_{m,\beta}\frac{1}{r}\Big{)}dr\] \[=\gamma\pi\sum_{\alpha=1}^{\infty}\sum_{\beta=1}^{\infty}A_{0, \alpha}A_{0,\beta}\] \[\times\int_{0}^{a}\left(r\chi^{\prime}_{0,\alpha}\chi^{\prime}_{0, \beta}+\frac{0^{2}}{r}\chi_{0,\alpha}\chi_{0,\beta}\right)dr\] \[+\frac{\gamma\pi}{2}\sum_{m=1}^{\infty}\sum_{\alpha=1}^{\infty} \sum_{\beta=1}^{\infty}(A_{m,\alpha}A_{m,\beta}+B_{m,\alpha}B_{m,\beta})\] \[\times\int_{0}^{a}\left(r\chi^{\prime}_{m,\alpha}\chi^{\prime}_{m,\beta}+\frac{m^{2}}{r}\chi_{m,\alpha}\chi_{m,\beta}\right)dr. \tag{109}\] We can show that (even for \(m=0\)) \[\int_{0}^{a}\left(r\chi^{\prime}_{m,\alpha}\chi^{\prime}_{m,\beta }+\frac{m^{2}}{r}\chi_{m,\alpha}\chi_{m,\beta}\right)dr\] \[= \int_{0}^{a}\left(\frac{m^{2}}{r}\chi_{m,\alpha}-\chi^{\prime}_{m,\alpha}-ra\chi^{\prime\prime}_{m,\alpha}\right)\chi_{m,\beta}dr\] \[= S_{m,\alpha}\delta_{\alpha\beta} \tag{110}\] where \[S_{m,\alpha} =\frac{1}{2}\omega_{m,\alpha}a\Bigg{(}\omega_{m,\alpha}aJ_{m-1}^ {2}(\omega_{m,\alpha}a)\] \[-2mJ_{m-1}(\omega_{m,\alpha}a)J_{m}(\omega_{m,\alpha}a)\] \[+\omega_{m,\alpha}aJ_{m}^{2}(\omega_{m,\alpha}a)\Bigg{)}. 
\tag{111}\] \(S_{m,\alpha}\) can be further simplified using the fact that \(\chi_{m,\alpha}=J_{m}(\omega_{m,\alpha}r)\) and the property of Bessel function \(J_{m}^{\prime}(\rho)=J_{m-1}-m/\rho J_{m}(\rho)\) \[S_{m,\alpha} =\frac{1}{2}\omega_{m,\alpha}a\Bigg{(}\omega_{m,\alpha}a(J_{m}^{ \prime}(\omega_{m,\alpha}a))^{2}\] \[-\frac{m^{2}}{\omega_{m,\alpha}a}J_{m}^{2}(\omega_{m,\alpha})+ \omega_{m,\alpha}aJ_{m}^{2}(\omega_{m,\alpha}a)\Bigg{)}. \tag{112}\] The dispersion relation Eq. (53) then tells us that \[S_{m,\alpha}=\frac{1}{2}\left(\omega_{m,\alpha}^{2}a^{2}-m^{2}\right)J_{m}^{2}( \omega_{m,\alpha}a). \tag{113}\] Considering the asymptotic expansion of the Bessel function of first kind, as \(x\to\infty\), \[J_{m}(x)\sim\sqrt{\frac{2}{\pi x}}\cos\left(x-\frac{2m+1}{4}\pi\right)+\mathcal{O }(x^{-3/2}), \tag{100}\] one can easily see that since \(\omega_{m,\alpha}\) increases with \(m\) linearly, \(S_{m,\alpha}\) also increases with \(m\) linearly for large \(m\). Follow the same procedure as in Eq. (101), we can also show that the wave modes are orthogonal, \[\int_{0}^{2\pi}\int_{0}^{a}\Upsilon^{i}_{m,\alpha}(r,\theta)\Upsilon^{j}_{n, \beta}(r,\theta)rdrd\theta=\delta_{ij}\delta_{mn}\delta_{\alpha\beta}C, \tag{102}\] for some constant \(C\), which gives us confidence to apply equipartition theorem. In this subsection we (i) calculated the extra surface energy associated with the perturbations supporting Eq. (56), (ii) showed that perturbed surface area \(S_{n,\alpha}\) increases linearly with \(n\), which leads to the use of cut-off length scale and (iii) showed that the wave modes \(\Upsilon^{i}_{n,\alpha}\) are orthogonal to support Eq. (58) and Eq. (59). ## Appendix D 3D circular thin film with a pinned contact line In this section we layout the technical details for the 3D circular thin film with a pinned contact line. ### Derivation of wave modes Applying the pinned boundary condition Eq. (64) and no-flux boundary condition Eq. (49) to the general solution Eq. (100) \[H_{n}(r,\theta) =(A_{1}\cos(n\theta)+B_{1}\sin(n\theta))\] \[\times(C_{1}J_{n}(\zeta r)+D_{1}Y_{n}(\zeta r))\] \[+(A_{2}\cos(n\theta)+B_{2}\sin(n\theta))\] \[\times(C_{2}I_{n}(\zeta r)+D_{2}K_{n}(\zeta r)). \tag{103}\] we get \[\Theta_{1}J_{n}(\zeta a)+\Theta_{2}I_{n}(\zeta a)=0 \tag{104}\] and \[-\Theta_{1}J^{\prime}_{n}(\zeta a)+\Theta_{2}I^{\prime}_{n}(\zeta a)=0. \tag{105}\] This tells us that \[\Theta_{2}=-\frac{J_{n}(\zeta a)}{I_{n}(\zeta a)}\Theta_{1} \tag{106}\] and gives us the dispersion relation \[2nJ_{n}(\zeta a)I_{n}(\zeta a) +\zeta a\Big{[}J_{n}(\zeta a)I_{n+1}(\zeta a)\] \[-J_{n+1}(\zeta a)I_{n}(\zeta a)\Big{]}=0. \tag{107}\] For each \(n\) the dispersion relation gives a list of suitable frequencies \(\{\zeta_{n,\alpha}:\alpha=1,2,\ldots\}\), with wave modes given by \[\Psi^{1}_{n,\alpha}(r,\theta) =\cos(n\theta)\psi_{n,\alpha}(r), \tag{108}\] \[\Psi^{2}_{n,\alpha}(r,\theta) =\sin(n\theta)\psi_{n,\alpha}(r). 
\tag{109}\] Here \[\psi_{n,\alpha}(r)=J_{n}(\zeta_{n,\alpha}r)-\frac{J_{n}(\zeta_{n,\alpha}a)}{I_{n}(\zeta_{n,\alpha}a)}I_{n}(\zeta_{n,\alpha}r),\] \[n=0,1,\ldots \tag{110}\] ### Thermal-capillary-wave theory Similar to Appendix C.2 we have \[S =\gamma\pi\sum_{\alpha=1}^{\infty}\sum_{\beta=1}^{\infty}C_{0, \alpha}C_{0,\beta}\] \[\times\int_{0}^{a}\left(r\psi^{\prime}_{0,\alpha}\psi^{\prime}_{0,\beta}+\frac{0^{2}}{r}\psi_{0,\alpha}\psi_{0,\beta}\right)dr\] \[+\frac{\gamma\pi}{2}\sum_{m=1}^{\infty}\sum_{\alpha=1}^{\infty} \sum_{\beta=1}^{\infty}(C_{m,\alpha}C_{m,\beta}+D_{m,\alpha}D_{m,\beta})\] \[\times\int_{0}^{a}\left(r\psi^{\prime}_{m,\alpha}\psi^{\prime}_{m,\beta}+\frac{m^{2}}{r}\psi_{m,\alpha}\psi_{m,\beta}\right)dr, \tag{111}\] where \(C_{m,\alpha}\) and \(D_{m,\alpha}\) comes from the notation \[\Theta_{m,\alpha}(\theta)=C_{m,\alpha}\cos(m\theta)+D_{m,\alpha}\sin(m\theta). \tag{112}\] We now show that for any \(m\) (even when \(m=0\)) \[\int_{0}^{a}\left(r\psi^{\prime}_{m,\alpha}\psi^{\prime}_{m,\beta }+\frac{m^{2}}{r}\psi_{m,\alpha}\psi_{m,\beta}\right)dr\] \[= \int_{0}^{a}\left(\frac{m^{2}}{r}\psi_{m,\alpha}-\psi^{\prime}_{m,\alpha}-r\psi^{\prime\prime}_{m,\alpha}\right)\psi_{m,\beta}dr \tag{113}\] \[= K_{m,\alpha}\delta_{\alpha\beta}\] where the first equality used the fact that \(\psi_{m,\beta}(0)=\psi_{m,\beta}(a)=0\). Recall that \(f_{m,\alpha}(r,\theta)=\Theta_{m,\alpha}(\theta)\psi_{m,\alpha}(r)\) is one of the eigenfunctions that satisfies \[\nabla^{2}\nabla^{2}f_{m,\alpha}(r,\theta)=\omega^{4}_{m,\alpha}f_{m,\alpha}(r,\theta), \tag{114}\] and expanding this equation in polar coordinate gives us \[\psi^{\prime\prime\prime\prime}_{m,\alpha}+\frac{2}{r}\psi^{ \prime\prime\prime}_{m,\alpha}-\frac{1+2m^{2}}{r^{2}}\psi^{\prime\prime}_{m,\alpha}\] \[+\frac{1+2m^{2}}{r^{3}}\psi^{\prime}_{m,\alpha}+\frac{m^{4}-4m^{ 2}}{r^{4}}\psi_{m,\alpha}=\omega^{4}_{m,\alpha}\psi_{m,\alpha}. \tag{115}\] With the help of Mathematica [71], using a similar procedure to before, we found that \[\omega_{m,\alpha}^{4}\int_{0}^{a}\left(r\psi_{m,\alpha}^{\prime}\psi _{m,\beta}^{\prime}+\frac{m^{2}}{r}\psi_{m,\alpha}\psi_{m,\beta}\right)dr\] \[= \omega_{m,\alpha}^{4}\int_{0}^{a}\left(\frac{m^{2}}{r}\psi_{m, \alpha}-\psi_{m,\alpha}^{\prime}-r\psi_{m,\alpha}^{\prime\prime}\right)\psi_{m,\beta}dr\] \[= \omega_{m,\beta}^{4}\int_{0}^{a}\left(\frac{m^{2}}{r}\psi_{m, \beta}-\psi_{m,\beta}^{\prime}-r\psi_{m,\beta}^{\prime\prime}\right)\psi_{m, \alpha}dr\] \[= \omega_{m,\beta}^{4}\int_{0}^{a}\left(r\psi_{m,\alpha}^{\prime} \psi_{m,\beta}^{\prime}+\frac{m^{2}}{r}\psi_{m,\alpha}\psi_{m,\beta}\right)dr. \tag{101}\] If \(\omega_{m,\alpha}\neq\omega_{m,\beta}\) this implies that \[\int_{0}^{a}\left(r\psi_{m,\alpha}^{\prime}\psi_{m,\beta}^{\prime}+\frac{m^{2 }}{r}\psi_{m,\alpha}\psi_{m,\beta}\right)dr=K_{m,\alpha}\delta_{\alpha\beta} \tag{102}\] where \[K_{m,\alpha} =\frac{1}{2}\omega_{m,\alpha}^{2}a^{2}\Bigg{(}I_{m-1}(\omega_{m, \alpha}a)I_{m+1}(\omega_{m,\alpha}a)\frac{J_{m}^{2}(\omega_{m,\alpha}a)}{I_{m} ^{2}(\omega_{m,\alpha}a)}\] \[-J_{m-1}(\omega_{m,\alpha}a)J_{m+1}(\omega_{m,\alpha}a)\Bigg{)}. \tag{103}\] Considering the asymptotic expansion of the Bessel function of the first kind Eq. (100) and the asymptotic expansion of the modified Bessel function of the first kind, as \(x\rightarrow\infty\), \[I_{n}(x)=\exp(x)\sqrt{\frac{1}{2\pi x}}\left(1+\frac{1-4n^{2}}{8x}+\mathcal{O} (x^{-2})\right), \tag{104}\] one can easily see that for large \(n\), \(K_{n,\alpha}\) increases with \(n\) linearly. 
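The orthogonality relation above, and the closed form for \(K_{m,\alpha}\), can be checked numerically. The Python sketch below (with a placeholder radius and azimuthal index, chosen only for illustration) evaluates the integral \(\int_0^a\big(r\psi'_{m,\alpha}\psi'_{m,\beta}+m^{2}\psi_{m,\alpha}\psi_{m,\beta}/r\big)\,dr\) by quadrature for the first few pinned radial modes and compares the diagonal entries with the closed-form expression derived above.

```python
import numpy as np
from scipy.special import jv, iv, jvp, ivp
from scipy.optimize import brentq
from scipy.integrate import quad

a, m = 1.0, 2                                   # placeholder radius and azimuthal index

def disp(z):                                    # pinned dispersion relation (Appendix D.1)
    return 2*m*jv(m, z)*iv(m, z) + z*(jv(m, z)*iv(m+1, z) - jv(m+1, z)*iv(m, z))

# first few roots zeta_{m,alpha}, from sign changes on a grid refined by brentq
zs = np.linspace(1e-3, 40.0, 2000)
crossings = np.where(np.sign(disp(zs[:-1])) != np.sign(disp(zs[1:])))[0][:4]
roots = [brentq(disp, zs[i], zs[i+1]) / a for i in crossings]

def psi(zeta, r):
    return jv(m, zeta*r) - jv(m, zeta*a)/iv(m, zeta*a)*iv(m, zeta*r)

def dpsi(zeta, r):
    return zeta*(jvp(m, zeta*r) - jv(m, zeta*a)/iv(m, zeta*a)*ivp(m, zeta*r))

def overlap(za, zb):
    integrand = lambda r: r*dpsi(za, r)*dpsi(zb, r) + m**2/r*psi(za, r)*psi(zb, r)
    return quad(integrand, 1e-9, a)[0]

def K_closed(zeta):                             # closed form quoted above
    z = zeta*a
    return 0.5*z**2*(iv(m-1, z)*iv(m+1, z)*jv(m, z)**2/iv(m, z)**2
                     - jv(m-1, z)*jv(m+1, z))

for i, za in enumerate(roots):
    for j, zb in enumerate(roots):
        extra = f"  closed form = {K_closed(za):+.6f}" if i == j else ""
        print(i, j, f"quadrature = {overlap(za, zb):+.6f}" + extra)
```

Off-diagonal entries should be numerically negligible, while the diagonal ones give \(K_{m,\alpha}\) and display the linear growth with the mode index noted above.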
In this subsection we (i) calculated the additional surface energy associated with the perturbations used in Eq. (71) and (ii) showed that the perturbed surface area \(K_{n,\alpha}\) increases linearly with \(n\), which leads to the use of a cut-off length scale for the wave modes.
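As a closing numerical aside, the Ornstein-Uhlenbeck statistics assumed for the contact-line motion in Appendix B.3 and B.4, namely \(\langle N_{1}^{2}\rangle=k_{B}T/k\) and \(\langle N_{1}(\tau)N_{1}(s)\rangle=(k_{B}T/k)e^{-(k/\xi)|\tau-s|}\), can be verified with a direct Euler-Maruyama integration of the Langevin equation \(\xi\,dN_{1}/dt=-kN_{1}+f_{1}\). The Python sketch below uses arbitrary illustrative values of \(k\), \(\xi\) and \(k_{B}T\), not those extracted from the MD simulations.

```python
import numpy as np

rng = np.random.default_rng(1)
k, xi, kBT = 2.0, 1.0, 0.5            # illustrative values (not the MD-extracted ones)
dt, n_steps, n_traj = 1e-3, 100_000, 500
lag_steps = 1000                      # time lag of lag_steps*dt for the autocorrelation
burn = 20_000                         # discard the initial transient

N = np.zeros(n_traj)
recent = []                           # recent values needed for the lagged product
sum_sq = sum_lag = count = 0.0
for i in range(n_steps):
    noise = rng.normal(size=n_traj) * np.sqrt(2.0*xi*kBT*dt)
    N = N + (-k*N*dt + noise)/xi      # Euler-Maruyama step of  xi dN/dt = -k N + f
    recent.append(N.copy())
    if len(recent) > lag_steps:
        old = recent.pop(0)
        if i > burn:
            sum_sq += np.mean(N*N)
            sum_lag += np.mean(N*old)
            count += 1

tau = lag_steps*dt
print("simulated <N^2>          :", sum_sq/count,  "  theory:", kBT/k)
print(f"simulated <N(t)N(t+{tau})>:", sum_lag/count, "  theory:", kBT/k*np.exp(-k/xi*tau))
```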
2306.12479
Quasiperiodicity hinders ergodic Floquet eigenstates
Quasiperiodic systems in one dimension can host non-ergodic states, e.g. localized in position or momentum. Periodic quenches within localized phases yield Floquet eigenstates of the same nature, i.e. spatially localized or ballistic. However, periodic quenches across these two non-ergodic phases were thought to produce ergodic diffusive-like states even for non-interacting particles. We show that this expectation is not met at the thermodynamic limit where the system always attains a non-ergodic state. We find that ergodicity may be recovered by scaling the Floquet quenching period with system size and determine the corresponding scaling function. Our results suggest that while the fraction of spatially localized or ballistic states depends on the model's details, all Floquet eigenstates belong to one of these non-ergodic categories. Our findings demonstrate that quasiperiodicity hinders ergodicity and thermalization, even in driven systems where these phenomena are commonly expected.
Miguel Gonçalves, Pedro Ribeiro, Ivan M. Khaymovich
2023-06-21T18:00:03Z
http://arxiv.org/abs/2306.12479v1
# Quasiperiodicity hinders ergodic Floquet eigenstates ###### Abstract Quasiperiodic systems in one dimension can host non-ergodic states, e.g. localized in position or momentum. Periodic quenches within localized phases yield Floquet eigenstates of the same nature, i.e. spatially localized or ballistic. However, periodic quenches across these two non-ergodic phases were thought to produce ergodic diffusive-like states even for non-interacting particles. We show that this expectation is not met at the thermodynamic limit where the system always attains a non-ergodic state. We find that ergodicity may be recovered by scaling the Floquet quenching period with system size and determine the corresponding scaling function. Our results suggest that while the fraction of spatially localized or ballistic states depends on the model's details, Floquet eigenstates belong to one of these non-ergodic categories. Our findings demonstrate that quasiperiodicity hinders ergodicity and thermalization, even in driven systems where these phenomena are commonly expected. The study of localization and ergodicity in quantum many-body systems has long been a prominent topic of research in Condensed Matter Physics. Among these studies, the existence of many-body localization (MBL) and transitions between ergodic and MBL phases in interacting systems is a hot topic, that is currently under intense scrutiny [1; 2; 3; 4; 5; 6; 7]. A different research direction, that dates back to the paradigmatic Anderson localization [8], focuses in the non-interacting limit, where nontrivial localization properties can already occur and a considerably higher degree of understanding can be attained. Currently, the non-interacting limit is not only of fundamental theoretical interest, but also very relevant experimentally, since it can be simulated in optical lattices, where interactions can be tuned [9]. While in the absence of interactions, any finite amount of random disorder localizes the wave function in 1D short-range Hamiltonians [10; 11], non-ergodic ballistic, localized and even multifractal phases can occur in 1D quasiperiodic systems[12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24], in various long-range models [25; 26; 27; 28; 29; 30], as well as claimed in some hierarchical graphs [31; 32; 33; 34; 35; 36]. A simple but non-trivial paradigmatic model where such physics can be well understood, is the Aubry-Andre model, for which an energy-independent ballistic-to-localized transition occurs at a finite strength of the quasiperiodic potential [22; 37]. While the study of localization and ergodicity in periodically driven systems dates back to the periodically kicked quantum rotator [38; 39; 40], it has experienced a resurgence of interest [41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53] due to the possibility to emulate time-periodic Hamiltonians and quasiperiodic potentials in experiments involving ultracold atoms and trapped ions experiments [54; 55; 56]. These (non-equilibrium) Floquet systems are very appealing, because on the one hand, they provide a means to realize complex effective time-independent Hamiltonians by careful choice of the driving protocol [57; 58; 59; 60; 61; 62; 63; 64; 65], and, on the other hand, they can support novel phases of matter with no equilibrium counterpart [66; 67; 68; 69; 70; 71]. A notable example of the latter arises in interacting 1D quasiperiodic systems, where driving can induce a transition from non-ergodic many-body-localized states to ergodic states [43; 44; 47; 48; 55]. 
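The ballistic-to-localized transition of the Aubry-Andre model mentioned above can be reproduced in a few lines of numerics. The Python sketch below (open boundaries and an arbitrary potential phase, chosen purely for illustration) diagonalizes the chain on either side of the critical potential strength, equal to twice the hopping, and contrasts the mean inverse participation ratio of the eigenstates.

```python
import numpy as np
from scipy.linalg import eigh

def aa_hamiltonian(L, V, beta=(np.sqrt(5.0)-1.0)/2.0, phi=0.3):
    """Aubry-Andre chain: unit hopping and on-site potential V*cos(2*pi*beta*m + phi)."""
    H = np.diag(V*np.cos(2.0*np.pi*beta*np.arange(L) + phi))
    H += np.diag(-np.ones(L-1), 1) + np.diag(-np.ones(L-1), -1)
    return H

def mean_ipr(H):
    _, vecs = eigh(H)                       # columns are normalized eigenvectors
    return np.mean(np.sum(np.abs(vecs)**4, axis=0))

L = 987
for V in (1.0, 2.0, 3.0):
    print(f"V = {V}: mean IPR = {mean_ipr(aa_hamiltonian(L, V)):.4f}")
# Expected trend: IPR ~ 1/L for V < 2 (extended), O(1) for V > 2 (localized).
```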
For 1D quasiperiodic systems, the localization phase diagram of the Floquet Hamiltonian can show a complex structure at high frequencies, even in the non-interacting limit. Non-ergodic ballistic, localized and multifractal phases and energy-dependent transitions between them can arise in the Floquet Hamiltonian [72; 73; 74; 75; 76; 77; 78], even if they are not present in the undriven model. Interestingly, one of the widely studied non-interacting models was recently realized experimentally in cold atoms [56]. For lower frequencies, transitions into a non-ergodic delocalized phase were reported theoretically even in the absence of interactions [79; 80], where a connection with the frequency-induced ergodic-to-MBL transition observed experimentally in Ref. [55] was made. However, these theoretical studies were mostly carried out for fixed system sizes, possibly motivated by the limited sizes in cold-atom experiments. It is, however, of paramount importance to understand the nature of the thermodynamic-limit state, which requires a detailed finite-size scaling analysis. In this paper we carry out a finite-size scaling analysis at large driving periods for a periodically driven Aubry-Andre model and show that, contrary to previous expectations [79; 80], quenches between ballistic and localized states yield non-ergodic Floquet states in the thermodynamic limit for any finite driving period. We find that quenches between localized states yield localized Floquet states, as expected [81; 82; 83; 79].
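As an illustration of the type of computation behind these results, the Python sketch below constructs a two-step Floquet quench of an Aubry-Andre chain, composes the half-period propagators, diagonalizes the Floquet operator, and evaluates the mean level-spacing ratio \(\langle r\rangle\) of the quasienergies together with the mean inverse participation ratio of the Floquet eigenstates. The protocol assumed here, alternating between potential strengths \(V(1-\epsilon)\) and \(V(1+\epsilon)\) for half a period each with unit hopping, is an assumption made for illustration (it is consistent with the threshold \(\epsilon^{*}=1-2/V\) used below, but need not coincide in every detail with the model's definition), and all parameters are placeholders.

```python
import numpy as np
from scipy.linalg import eigh

def aa_hamiltonian(L, V, beta, phi, kappa=0.0):
    """Aubry-Andre chain with twisted boundary phase kappa and potential V*cos(2*pi*beta*m + phi)."""
    H = np.zeros((L, L), dtype=complex)
    for m in range(L):
        H[m, (m+1) % L] = -np.exp(1j*kappa/L)
        H[(m+1) % L, m] = -np.exp(-1j*kappa/L)
        H[m, m] = V*np.cos(2.0*np.pi*beta*m + phi)
    return H

def floquet_operator(L, V, eps, T, beta, phi, kappa):
    U = np.eye(L, dtype=complex)
    for Vj in (V*(1.0-eps), V*(1.0+eps)):         # assumed two-step quench protocol
        E, W = eigh(aa_hamiltonian(L, Vj, beta, phi, kappa))
        U = (W * np.exp(-1j*E*T/2.0)) @ W.conj().T @ U
    return U

def r_mean_and_ipr(U):
    evals, evecs = np.linalg.eig(U)
    theta = np.sort(np.angle(evals))
    s = np.diff(np.concatenate([theta, [theta[0] + 2.0*np.pi]]))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    ipr = np.sum(np.abs(evecs)**4, axis=0) / np.sum(np.abs(evecs)**2, axis=0)**2
    return r.mean(), ipr.mean()

L, V, eps, T = 233, 3.0, 0.45, 150.0              # placeholder size and parameters
beta = 144/233                                    # rational approximant of the golden mean
rng = np.random.default_rng(0)
r_avg, ipr_avg = np.mean(
    [r_mean_and_ipr(floquet_operator(L, V, eps, T, beta,
                                     rng.uniform(0, 2*np.pi),
                                     rng.uniform(0, 2*np.pi)))
     for _ in range(10)], axis=0)
print(f"<r> = {r_avg:.3f},  mean IPR = {ipr_avg:.4f}")
```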
516; 517; 518; 529; 52; 538; 542; 543; 55; 56; 572; 58; 59; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 80; 81; 84; 89; 82; 85; 86; 87; 89; 88; 87; 88; 89; 91; 89; 92; 80; 82; 89; 80; 83; 88; 89; 92; 810; 84; 85; 89; 86; 88; 87; 89; 93; 94; 88; 88; 89; 95; 89; 96; 97; 98; 99; 100; 99; 111; 112; 133; 14; 150; 152; 154; 156; 157; 168; 179; 180; 183; 184; 185; 186; 187; 188; 189; 190; 187; 189; 191; 192; 193; 194; 195; 196; 197; 198; 199; 200; 210; 222; 23; 24; 25; 26; 27; 28; 297; 298; 299; 301; 31; 32; 33, 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 51; 58; 59; 60; 61; 62; 63; 64; 65; 67; 69; 70; 72; 73; 74; 75; 76; 78; 79; 81; 82; 83; 84; 85; 86; 87; 88; 89; 99; 90; 910; 112; 134; 15; 16; 17; 19; 188; 199; 21; 195; 22; 233; 24; 26; 27; 28; 28; 29; 96; 97; 198; 199; 300; 110; 111; 12; 13; 140; 15; 16; 17; 199; 31; 20; 21; 22; 23; 241; 28; 25; 29; 31; 32; 33; 336; 38; 37; 39; 41; 42; 43; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 54; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 82; 83; 84; 85; 86; 87; 88; 99; 90; 911; 10; 111; 13; 14; 15; 16; 17; 18; 19; 19; 20; 21; 23; 24; 25; 26; 27; 28; 29; 30; 29; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 43; 44; 445; 46; 47; 48; 49; 51; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 71; 88; 89; 92; 93; 94; 95; 96; 97; 98; 99; 100; 11; 12; 1 \(D_{r}=D_{k}=d\). Henceforth, we define the inverse participation ratios averaged (geometrically) over all eigenstates and configurations of \(\phi\) and \(\kappa\) as \(\mathcal{I}_{r}=\langle\text{IPR}^{\alpha}\rangle_{\phi,\kappa,\alpha}\) and \(\mathcal{I}_{k}=\langle\text{IPR}^{\alpha}_{K}\rangle_{\phi,\kappa,\alpha}\). We averaged over a number of configurations in the interval \(N_{c}\in[400,10^{4}]\), choosing the larger numbers of configurations for the smaller system sizes. such that \(D_{p}^{(\infty)}=D_{p}\), \(p=r,k\). When \(\epsilon<\epsilon^{*}=1-2/V(V>2)\), the quench is only between localized states, as illustrated in Fig. 1(a). In this case, we see that up to weak finite-size effects, \(\langle r\rangle=r_{\text{Poisson}}\), \(D_{r}=0\) and \(D_{k}=1\), clearly showing that the Floquet eigenstates are non-ergodic and localized. Once \(\epsilon\geq\epsilon^{*}\), we start quenching between ballistic and localized phases, which is accompanied by a sharp increase in \(\langle r\rangle\), that approaches \(r_{\text{GUE}}\) as \(\epsilon\) is increased; and in \(D_{r}^{(L)}\), that approaches \(1\), while we still have \(D_{k}^{(L)}\approx 1\). Upon initial observation, this behaviour could indicate a transition into a diffusive ergodic phase. However, when \(L\) is increased, there is a clear overall decrease both in \(\langle r\rangle\) and in \(D\), which is already a clear indication of the fragile nature of the ergodic-phase candidate. The instability of the ergodic phase is further corroborated in Fig. 2, where we also set \(V=3\) and choose \(\epsilon=0.45>\epsilon^{*}\), to quench between ballistic and localized states. In Fig. 2(a) we show the distribution of ratios \(P(r)\) for fixed \(T=150\) and for different system sizes. There, we can clearly see that the distribution of ratios transitions from exhibiting level repulsion to closely resembling the Poisson distribution as \(L\) is increased. Concurrently, Fig. 
2(c) demonstrates that the distribution of \(\log\mathcal{I}_{r}\) for the same \(T\) is almost entirely converged for the larger \(L\) used, implying the localization of all (or nearly all) states [note that \(D_{k}\approx 1\), as shown in Fig. 1(c)]. The results so far are in support of an ergodic phase only surviving when \(T\to\infty\) at finite \(L\). With this in mind, we define a correlation length \(\xi(T)\) that diverges when \(T\to\infty\), such that when \(L\ll\xi\), the system is ergodic while when \(L\gg\xi\), the system is non-ergodic. Close to the transition to the diffusive ergodic phase, that is, for large enough \(T\), we assume that \(\xi\) diverges as a power-law in \(T\), \(\xi\sim T^{\beta}\), with an unknown exponent \(\beta\) that may depend on the model parameters. We also assume that in this regime, \(\langle r\rangle\) follows a one-parameter scaling function that satisfies, \[\langle r\rangle=f(L/\xi)=\begin{cases}r_{\text{GUE}}&,L\ll\xi\\ \approx r_{\text{Poisson}}&,L\gg\xi\end{cases}. \tag{6}\] In a similar way, we also assume \(\mathcal{I}_{r}\) follows the one-parameter scaling ansatz \(\mathcal{I}_{r}=L^{\mu}g(L/\xi)\) at large enough \(T\). Using that for \(\xi/L\to\infty\), \(\mathcal{I}_{r}=L^{\mu}g(0)\sim L^{-1}\) (diffusive and ergodic), we get that \(\mu=-1\). In the limit \(L/\xi\to\infty\), the states are localized, \(\mathcal{I}_{r}\sim L^{0}\). We therefore have the following limits for \(g(L/\xi)\) \[L\mathcal{I}_{r}=g(L/\xi)\sim\begin{cases}1&,L\ll\xi\\ L/\xi&,L\gg\xi\end{cases}. \tag{7}\] As a consequence \(\mathcal{I}_{r}\simeq 1/\min(L,\xi)\), meaning that \(\xi(T)\) is, indeed, a \(T\)-dependent localization length of the model. Indeed, as soon as \(L\ll\xi\), the states do not know about \(\xi\) and look like ergodic ones, while in the opposite limit of \(L\gg\xi\), the boundary conditions are not important and all the states are localized at a distance \(\sim\xi\). We note that here we are not considering the scaling function for \(\mathcal{I}_{k}\), since \(\mathcal{I}_{k}\sim L^{-1}\) both in the diffusive and localized phases. In Figs. 2(b,d), we collapse data for different periods in the range \(T\in[125,650]\) and for different \(L\), showing the validity of the scaling ansatzes in Eqs. (6), (7). In Appendix B we provide precise details on how the scaling collapses were computed. From the collapses, we can extract \(\xi(T)\) that we plot in the inset of Fig. 2(b). In this figure, we see that \(\xi(T)\) acquires a power-law behaviour at large \(T\), as expected, giving compatible results when extracted from the scaling collapses of \(\langle r\rangle\) and \(\mathcal{I}_{r}\). By fitting the power-law at large \(T\), we extract \(\beta=2\). We note however, that this exponent is non-universal and depends on the model's parameters as we demonstrate below for other examples. The good scaling collapses in Figs. 2(b,d) confirm our previous affirmations: (i) when the system size increases for fixed \(T\) (that is, fixed \(\xi\)), the Floquet eigenstates flow to a non-ergodic localized phase; (ii) if \(T\) is increased for fixed \(L\), the system flows to a diffusive ergodic phase. This implies that the limits \(T\to\infty\) and \(L\to\infty\) do not commute. 
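The scaling analysis just described can be made concrete with a short numerical sketch. The following Python snippet is an illustration only, not the analysis pipeline used for the data: it assumes simple crossover forms that respect the limits of Eqs. (6)-(7), together with \(\xi(T)=\xi_{0}T^{\beta}\) (taking \(\beta=2\), the exponent extracted above for the \(V=3\) quench, and an arbitrary prefactor \(\xi_{0}\)), and shows how tabulating \(\langle r\rangle\) and \(L\mathcal{I}_{r}\) against \(L/\xi(T)\) places all \((L,T)\) pairs on single curves.

```python
import numpy as np

# Synthetic illustration of the one-parameter scaling ansatzes (6)-(7):
# <r> and L*I_r are assumed to depend on L and T only through L/xi(T),
# with xi(T) ~ T**beta at large T.  The crossover functions below are simple
# interpolations respecting the quoted limits; they are illustrative choices,
# not the fitted forms used for the real data.

r_GUE, r_POISSON = 0.5996, 0.3863      # standard level-spacing-ratio values
beta, xi0 = 2.0, 1.0e-3                # assumed prefactor xi0, beta = 2 as quoted

def xi(T):
    return xi0 * T**beta

def mean_r(L, T):                      # Eq. (6): r_GUE for L << xi, Poisson-like for L >> xi
    return r_POISSON + (r_GUE - r_POISSON) / (1.0 + L / xi(T))

def L_times_Ir(L, T):                  # Eq. (7): ~1 for L << xi, ~L/xi for L >> xi
    return 1.0 + L / xi(T)

sizes = np.array([610, 987, 1597, 2584, 4181, 6765])
periods = np.array([125, 250, 400, 650])

# "collapse": tabulating against L/xi(T) puts every (L, T) pair on one curve
for T in periods:
    for L in sizes:
        print(f"T={T:4d} L={L:5d}  L/xi={L/xi(T):9.3e}  "
              f"<r>={mean_r(L, T):.3f}  L*I_r={L_times_Ir(L, T):8.2f}")
```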
Next, in order to inspect how different parts of the \(\mathcal{I}_{r}\) distribution evolve with \(T\) and \(L\), similarly to [76], we define the fraction of states \(x_{r}\) for which the average IPR is bounded by \(\mathcal{I}_{r}=\mathcal{I}_{r}^{*}\), given by \[x_{r}=\int_{-\infty}^{\log\mathcal{I}_{r}}P(y^{\prime})dy^{\prime} \tag{8}\] where \(y^{\prime}=\log\mathcal{I}_{r}\). In Fig. 2(e) we plot \(\log\mathcal{I}_{r}^{*}(x_{r})\), showing that it is a smooth function of \(x_{r}\). To analyse how \(\mathcal{I}_{r}^{*}\) evolves for different fractions \(x_{r}\), in Fig. 2(f) we perform the \(x_{r}\)-dependent collapse of \(\log L\mathcal{I}_{r}^{*}(x_{r})\). We observe that the corresponding scaling functions have the properties of Eq. (7) for the studied fractions of states \(x_{r}\) and can even be collapsed into a single universal curve given in the inset of Fig. 2(f), as detailed in the Figure's caption. Noteworthy, we checked that the correlation lengths \(\xi(x_{r})\) obtained from the scaling collapses at different \(x_{r}\) are almost independent of \(x_{r}\) at large \(T\), as we show in Appendix B. In Appendix A, we also studied the case when the quench's center of mass lies in the ballistic phase. In this case, we observed that the Floquet eigenstates also become non-ergodic in the thermodynamic limit, but ballistic, instead of localized. _Quench's center of mass at critical point.--_ We now turn to the case where the quench's center of mass is exactly at the critical point, that is, \(V=2\). This is studied in Fig. 3. Similarly to the previous case, Fig. 3(a) reveals the evolution breakdown of level repulsion when \(L\) increases, for fixed \(T\). In Fig. 3(b) we see that a scaling collapse for \(\langle r\rangle\) is still possible. It is worth noticing that in this case, however, \(\langle r\rangle\) can take values significantly below \(r_{\text{Poisson}}\) for large \(L\). This can however be a finite-size effect arising from the formation of energy gaps that can only be resolved for large enough \(L\). In this case, the \(P(r)\) distribution should converge to the Poisson distribution in the thermodynamic limit. The main difference for this quench comparing to the \(V=3\) case is that there is clearly a fraction of states that flow to localized behaviour, while the remaining fraction flows to ballistic behaviour, as \(L\) is increased. This is illustrated in Fig. 3(c). There, we define \(x_{r}\) as in Eq. (8) for \(\mathcal{I}_{\tau}\) and we also define \(x_{k}\) using the analogous definition for \(\mathcal{I}_{k}\): \[x_{k}=\int_{-\infty}^{\log\mathcal{I}_{k}}P(y^{\prime})dy^{\prime} \tag{9}\] where \(y^{\prime}=\log\mathcal{I}_{k}\). For the following discussion, we define the \(x_{p}\)-dependent fractal dimensions as \(D_{p}^{(L)}(x_{p})=\mathcal{D}\big{(}\mathcal{I}_{p}^{*}(x_{p})\big{)}\) (see Eq. (5)), with \(p=r,k\). In Fig. 3(c), we can see that for \(x_{r}>x_{r}^{*}\approx 0.35\) (see \(x_{r}^{*}\) indicated in the figure), \(D_{r}^{(L)}(x_{r})\) decreases with \(L\), seemingly towards \(0\). Concomitantly, \(D_{k}^{(L)}(x_{k})\) increases towards \(1\) for \(x_{k}<x_{k}^{*}=1-x_{r}^{*}\approx 0.65\). This is an indication that approximately \(65\%\) of states are localized in the thermodynamic limit. On the other hand, for the remaining fraction of \(\approx 35\%\) states, the results are concomitant with \(D_{r}^{(L)}(x_{r}<x_{r}^{*})\to 1\) and \(D_{k}^{(L)}(x_{k}>x_{k}^{*})\to 0\), as expected for ballistic states. 
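For concreteness, the diagnostics used above (position- and momentum-space inverse participation ratios, the size-dependent fractal-dimension estimates, and the fraction of states of Eq. (8)) can be evaluated as in the following minimal NumPy sketch. The random orthonormal states, the estimator \(D\simeq-\log\mathrm{IPR}/\log L\) and the normalization conventions are illustrative assumptions and may differ in detail from the precise definitions of Eq. (5).

```python
import numpy as np

# Minimal sketch of the diagnostics: position-space IPR, momentum-space IPR,
# the size-dependent fractal-dimension estimate D ~ -log(IPR)/log(L), and the
# fraction x_r of states whose log-IPR lies below a threshold, cf. Eq. (8).
# The toy states here are random orthonormal vectors, used only to exercise
# the formulas.

rng = np.random.default_rng(0)
L = 610
states = np.linalg.qr(rng.normal(size=(L, L)) + 1j * rng.normal(size=(L, L)))[0]

def ipr(psi):                      # participation ratio: sum_n |psi_n|^4 per column
    p = np.abs(psi)**2
    return np.sum(p**2, axis=0)

ipr_r = ipr(states)                                    # position space
ipr_k = ipr(np.fft.fft(states, axis=0) / np.sqrt(L))   # momentum space (unitary FFT)

D_r = -np.log(ipr_r) / np.log(L)   # ~1 for ergodic/diffusive states, ~0 for localized
D_k = -np.log(ipr_k) / np.log(L)

def fraction_below(log_ipr, threshold):   # x_r of Eq. (8) as an empirical CDF
    return np.mean(log_ipr <= threshold)

print("mean D_r =", D_r.mean(), " mean D_k =", D_k.mean())
print("x_r at the median threshold:", fraction_below(np.log(ipr_r), np.median(np.log(ipr_r))))
```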
We note that it might happen that a finite fraction of multifractal states survives in the thermodynamic limit. However, for the available system sizes, all the states seem to flow to localized and ballistic ones. That being the case, only a fraction of multifractal states of measure zero, arising at mobility edges between ballistic and localized states, should survive the thermodynamic limit. Finally, in Fig. 3(d) we make scaling collapses of \(\log L\mathcal{I}_{k}^{*}\) and \(\log L\mathcal{I}_{r}^{*}\) for different \(x_{r}\) and \(x_{k}\). We can see that for large enough \(x_{r}\), \(D_{r}\) clearly flows from \(D_{r}=1\) (diffusive) to \(D_{r}=0\) (localized) as \(L\to\infty\) for fixed \(T\). In the same way, for large enough \(x_{k}\), \(D_{k}\) flows from \(D_{k}=1\) (diffusive) to \(D_{k}=0\) (ballistic) as \(L\to\infty\) for fixed \(T\). For small \(x_{r}\) (\(x_{k}\)), \(D_{r}=1\) (\(D_{k}=1\)) in both the limits \(L/\xi\to\infty\) and \(\xi/L\to\infty\), as indicated by the constant \(\log L\mathcal{I}_{r}^{*}\) (\(\log L\mathcal{I}_{k}^{*}\)) in both these limits. It is nonetheless interesting to notice that even in this case there is a crossover regime at finite \(L\) and \(T\) indicated in Fig. 3(d).

Figure 3: **Quench's center of mass at critical point:** results for \(V=2,\epsilon=0.2\). (a) \(P(r)\) distribution for \(T=250\), for different system sizes \(L\). (b) Scaling collapses of \(\langle r\rangle\) calculated for driving periods \(T\) belonging to the interval \(T\in[125,1100]\) and for \(L\in[1597-6765]\) (see Appendix for details on the scaling collapse). The inset shows \(\log(\xi/\xi_{0})\) as a function of \(\log T\), obtained from the data collapses of \(\langle r\rangle\), \(L\mathcal{I}_{r}^{*}(x_{r}=0.95)\) and \(L\mathcal{I}_{k}^{*}(x_{k}=0.95)\) shown in (d). The cyan line was obtained by fitting the data points at the \(7\) largest \(T\)-values (combining the data from the three shown collapses), yielding a power-law \(\xi\sim T^{2.6}\). (c) Fractal dimensions \(D_{r}^{(L)}(x_{r})\equiv\mathcal{D}\big{(}\mathcal{I}_{r}^{*}(x_{r})\big{)}\) (top) and \(D_{k}^{(L)}(x_{k})\equiv\mathcal{D}\big{(}\mathcal{I}_{k}^{*}(x_{k})\big{)}\) (bottom) [see Eq. (5)], for \(T=125\). (d) Scaling collapses for \(\log\big{(}L\mathcal{I}_{r}^{*}(x_{r})\big{)}\) (top) and \(\log\big{(}L\mathcal{I}_{k}^{*}(x_{k})\big{)}\) (bottom), for different \(x_{r}\) and the same range of \(T\) and \(L\) as in (a). The magenta dashed lines correspond to fits of the collapsed data for \(x_{r}=0.95\) (top) and \(x_{k}=0.95\) (bottom) to the function \(g(y)=g_{-\infty}+\log(1+e^{y-y_{0}})\), with \(y=\log L-\log\xi/\xi_{0}\). In the inset, we show the \(L\)- and \(T\)-dependent fractal dimensions obtained from these fits through \(D_{r(k)}^{(L,T)}\equiv 1-[\log(L\mathcal{I}_{r(k)})]^{\prime}\).

_Dualities and universality at small \(T\).--_ Up to now, we verified that when \(L\to\infty\), the Floquet eigenstates become non-ergodic, as in the static limit. At large \(T\), however, there is a very complex structure of mobility edges, and a (quasi)energy-resolved analysis becomes very challenging. On the other hand, for small \(T\), such an analysis is still possible and elucidating. In Fig. 4(a), we show an example where it can be clearly seen that for small \(T\), even though the phase diagram can already be quite complex, clear transitions between ballistic (low IPR) and localized (large IPR) phases can still be found. In the static case, hidden dualities with universal behaviour were found to be behind these transitions [93].
Moreover, it was found that ballistic, localized and even critical phases can be understood in terms of renormalization-group flows to simple renormalized effective models [18]. Remarkably, we find that these results can be generalized to the Floquet Hamiltonian. This can be seen by inspecting the dependence of the quasienergies on the potential shift \(\varphi\equiv L\phi\) and on the phase twist \(\kappa\) [93; 18]. We illustrate this for two representative ballistic-to-localized transitions in Figs. 4(b,c), where we see that: (i) in the ballistic (localized) phase, the quasienergy dependence on \(\kappa\) (\(\varphi\)) is dominant and the dependence on \(\varphi\) (\(\kappa\)) becomes irrelevant as \(L\to\infty\) (not shown); (ii) the quasienergies become invariant under switching \(\kappa\) and \(\varphi\) at the critical point. This is exactly the universal behaviour also found for the single-particle energies in the static case [93; 18]. With these results in mind, we conjecture that the hidden dualities and RG universality found at small \(T\) extend to large \(T\), but only for a large enough system size, when the system flows to one of the non-ergodic phases. It is however very challenging to verify this conjecture due to the intricate structure of mobility edges at large \(T\) and the limited available system sizes. ## III Discussion Contrary to prior expectations, we have established that time-periodic quenches between non-ergodic ballistic and localized states in non-interacting 1D quasiperiodic systems lead to the emergence of non-ergodic states at the thermodynamic limit, for any finite driving period. To restore ergodicity, the driving period must be scaled with the system size, according to the corresponding scaling functions, which we also determined. We expect our findings to hold in generic driven non-interacting 1D quasiperiodic systems. Even though delocalized phases were previously reported for small enough driving frequencies, no clear phase with ergodic properties surviving the thermodynamic limit was identified so far. For instance, in Ref. [78], a localization-delocalization transition with decreasing driving frequency was recently reported. However, as we detail in Appendix C, the low-frequency extended phases are non-ergodic, either ballistic or multifractal. Our findings raise interesting further questions, such as the outcome of quenching between distinct phases in higher dimensions, where ergodic states can exist in static, non-interacting situations. These results also suggest that finite interactions may be crucial for the observation of the driving-induced ergodic-to-MBL transitions reported experimentally [55]. Nonetheless, it is likely that the ergodic to non-ergodic crossover, which we predict for the non-interacting limit, is experimentally accessible. If so, this would allow the experimental determination of the scaling function between the period and the system size which effectively characterises the fragility of the non-interacting ergodic states. ###### Acknowledgements. M. G. and P. R. acknowledge partial support from Fundação para a Ciência e Tecnologia (FCT-Portugal) through Grant No. UID/CTM/04540/2019. M. G. acknowledges further support from FCT-Portugal through the Grant SFRH/BD/145152/2019. I. M. K. acknowledges the support by the Russian Science Foundation (Grant No. 
21-12-00409). We finally acknowledge the Tianhe-2JK cluster at the Beijing Computational Science Research Center (CSRC), the Bob\(|\)Macc supercomputer through computational project CPCA/A1/470243/2021 and the OBLIVION supercomputer, through projects HPCUE/A1/468700/2021, 2022.15834.CPCA.A1 and 2022.15910.CPCA.A1 (based at the High Performance Computing Center - University of Évora), funded by the ENGAGE SKA Research Infrastructure (reference POCI-01-0145-FEDER-022217 - COMPETE 2020 and the Foundation for Science and Technology, Portugal) and by the BigData@UE project (reference ALT20-03-0246-FEDER-000033 - FEDER and the Alentejo 2020 Regional Operational Program). Computer assistance was provided by CSRC's, Bob\(|\)Macc's and OBLIVION's support teams.

Figure 4: (a) \(\log\)IPR as a function of quasienergy \(E\) and driving period \(T\), for \(L=987,V=2,\epsilon=0.8\), and for a fixed random choice of \(\phi\) and \(\kappa\). (b,c) Quasienergy contours (lighter colors correspond to larger quasienergy) in the plane of phases \(\varphi\equiv L\phi\) and \(\kappa\), for \(L=34\) and values of \(T\) indicated above each figure, for the \(\lfloor L/2\rfloor\)-th largest quasienergy (ordered in the interval \(]-\pi,\pi]\)), that has \(E\approx 0\). The results were obtained around the ballistic-to-localized transitions indicated by the dashed cyan lines in (a), with the blue and red figures chosen respectively inside the ballistic and localized phases, and the green figure approximately at the critical point.
2304.12200
SplitAMC: Split Learning for Robust Automatic Modulation Classification
Automatic modulation classification (AMC) is a technology that identifies a modulation scheme without prior signal information and plays a vital role in various applications, including cognitive radio and link adaptation. With the development of deep learning (DL), DL-based AMC methods have emerged, while most of them focus on reducing computational complexity in a centralized structure. This centralized learning-based AMC (CentAMC) violates data privacy, since client-side raw data are transmitted directly. Federated learning-based AMC (FedeAMC) can bypass this issue by exchanging model parameters, but causes large resultant latency and client-side computational load. Moreover, both CentAMC and FedeAMC are vulnerable to large-scale noise occurring in the wireless channel between the client and the server. To this end, we develop a novel AMC method based on a split learning (SL) framework, coined SplitAMC, that can achieve high accuracy even in poor channel conditions, while guaranteeing data privacy and low latency. In SplitAMC, each client can avoid data privacy leakage by exchanging smashed data and its gradient instead of raw data, and has robustness to noise with the help of the large scale of the smashed data. Numerical evaluations validate that SplitAMC outperforms CentAMC and FedeAMC in terms of accuracy for all SNRs as well as latency.
Jihoon Park, Seungeun Oh, Seong-Lyun Kim
2023-04-17T12:15:59Z
http://arxiv.org/abs/2304.12200v1
# SplitAMC: Split Learning for Robust Automatic Modulation Classification ###### Abstract Automatic modulation classification (AMC) is a technology that identifies a modulation scheme without prior signal information and plays a vital role in various applications, including cognitive radio and link adaptation. With the development of deep learning (DL), DL-based AMC methods have emerged, while most of them focus on reducing computational complexity in a centralized structure. This centralized learning-based AMC (CentAMC) violates data privacy, since client-side raw data are transmitted directly. Federated learning-based AMC (FedeAMC) can bypass this issue by exchanging model parameters, but causes large resultant latency and client-side computational load. Moreover, both CentAMC and FedeAMC are vulnerable to large-scale noise occurring in the wireless channel between the client and the server. To this end, we develop a novel AMC method based on a split learning (SL) framework, coined _SplitAMC_, that can achieve high accuracy even in poor channel conditions, while guaranteeing data privacy and low latency. In SplitAMC, each client can avoid data privacy leakage by exchanging smashed data and its gradient instead of raw data, and has robustness to noise with the help of the large scale of the smashed data. Numerical evaluations validate that SplitAMC outperforms CentAMC and FedeAMC in terms of accuracy for all SNRs as well as latency. Automatic modulation classification (AMC), split learning, federated learning, noise robustness, latency ## I Introduction With the remarkable development of wireless communication, understanding the radio spectrum plays an essential role in various applications, such as cognitive radio and link adaptation [1, 2]. In this respect, automatic modulation classification (AMC) is emerging as a promising technology in a way that the receiver (Rx) identifies the modulation scheme of the corresponding transmitter (Tx) signal without prior information [3]. As the first of its kind, a traditional AMC method aims to achieve high detection probability through the likelihood function design [4]. However, due to its sensitivity to signal errors, it suffers from performance degradation and computational complexity, especially in the wireless environment where channel state information (CSI) is not given and the channel gain fluctuates. The feature-based AMC method can circumvent these problems by extracting hand-crafted features from the received signal and performing classification tasks. Going further, DL-based AMC can solve the computational complexity problem of likelihood-based AMC under a time-varying channel with a deep learning (DL) framework [5]. The existing DL-based AMC method is often rooted in a centralized architecture that enjoys dispersed data among multiple clients by direct local data aggregation on the server [6]. However, such centralized learning-based AMC (CentAMC) violates data privacy while causing significant communication bottlenecks on the server-side. In order to guarantee data privacy, several AMC works apply a distributed learning framework to the AMC method. As its representative, FedeAMC [7], which combines federated learning (FL) with AMC, performs modulation classification by updating the local model through aggregation and redistribution of the model parameters between the server and clients. However, this model exchange incurs a huge communication overhead, so it is ill-suited for large-sized models. 
In addition, CentAMC and FedeAMC tend to be vulnerable to large-scale noise, leading to an accuracy drop at low signal-to-noise ratio (SNR) and highlighting the need for alternatives (see Table II). Towards a noise-robust, communication-efficient, and data-private AMC, this paper proposes a novel AMC method based on a split learning (SL) framework called _SplitAMC_. SplitAMC divides the entire deep neural network (DNN) into two partitions depending on the cut-layer, an upper model segment and a lower model segment, stored by the server and the clients, respectively. Under this model-split architecture, clients and the server communicate the cut-layer representations, so-called _smashed data_, and their corresponding gradients. In doing so, SplitAMC can retain high accuracy under large-scale noise, thanks to the scale of the smashed data shown in Fig. 3. Also, exchanging smashed data instead of model parameters or raw data results in improved communication efficiency as well as a data privacy guarantee for SplitAMC. The contributions of this paper are summarized below:

* By revisiting the SL framework [8], we propose SplitAMC, an AMC method based on SL, that enables smashed data exchange instead of model parameters or raw data exchange.
* SplitAMC's smashed data exchange prevents the data privacy leakage that occurs in CentAMC, while reducing latency at the same time. Latency analysis for SplitAMC is available in Sec. IV.
* _Thanks to the large scale of the smashed data, SplitAMC has robust accuracy even in a large-noise environment, which is proven by experiments._

The rest of the paper is organized as follows: Sec. II introduces a system model, including a single-carrier system-based signal model and a wireless channel model. Next, Sec. III describes the operation of our proposed SplitAMC in detail. In Sec. IV, the performance of SplitAMC is validated and analyzed through extensive simulations, compared to CentAMC as well as FedeAMC. ## II System Model In this section, we describe the network topology in which the DL-based approaches including the proposed SplitAMC run, and then sequentially describe the signal model and channel model of the communication links included in the network. ### _Network Topology_ As described in Fig. 1, we consider a communication system for AMC consisting of Tx-Rx pairs and a server.

Fig. 1: An overview of the communication system composed of Tx, Rx with AMC, and server.

_1) Tx-Rx Communication Link:_ Each Tx passes the source signal through the sampler and quantizer to perform analog-to-digital conversion (ADC) and encodes it, followed by selecting the modulation type, controlling the transmit power, and transmitting it to the corresponding Rx. Then, each Rx infers the modulation type of the Tx through a modulation classifier and demodulates the signal based on it. Consequently, the goal of this paper is to train a modulation classifier with high accuracy, the key to the aforementioned communication system, via a DL-based approach. _2) Rx-Server Communication Link:_ To achieve this goal, DL-based approaches perform additional communication between Rx and a server to train the model. At this time, an analog-modulated signal is assumed to be used. Although digital communication is widely used, it has notable drawbacks such as a significant performance drop when deviating from target channel conditions (i.e., outage due to frequent channel fluctuations). On the other hand, analog communication can avoid this and even improve performance thanks to the regularizer effect of the noise [9]. 
Furthermore, analog-modulated signals can enjoy enhanced data privacy as well as latency gains by enabling over-the-air computation (AirComp) [10]. ### _Communication Model_ #### Ii-B1 Signal Model For Tx-Rx communication link, we use a regular signal model of the unknown single-input single-output single-carrier systems as in [11, 12]. Let \(r(n)\) represent the unknown modulated signals at the Rx, denoted by: \[\begin{split}& r(n)=A_{n}e^{j(2\pi f_{0}nT+\theta_{n})}s(n)+ \sigma(n),\\ & n\in\{0,1,\cdots,N-1\},\end{split} \tag{1}\] where \(s(n)\) and \(\sigma(n)\) are the transmitted modulation signal and additive white Gaussian noise (AWGN), respectively. In addition, \(A_{n}\) is the signal amplitude of symbol \(n\), following rayleigh channel fading, \(f_{0}\) is the carrier frequency offset, \(\theta_{n}\) is the time-varying carrier phase offset of symbol \(n\), \(T\) is the symbol interval, and \(N\) is the number of symbols of the signals and also the number of samples to be used for training of the DL model. The IQ sample, which is the training and test sample of DL, consists of in-phase (\(I\)) and quadrature (\(Q\)) parts of \(r(t)\). Based on the received signal model, the IQ sample can be expressed as: \[\begin{split}& I=\{Real(r(0)),Real(r(1)),\cdots\,Real(r(N-1))\}, \\ & Q=\{Imag(r(0)),Imag(r(1)),\cdots\,Imag(r(N-1))\},\end{split} \tag{2}\] where \(Real(\cdot)\) and \(Imag(\cdot)\) are functions for extracting values corresponding to the real and imaginary parts of the received signal \(r(t)\). Also, we can define the SNR of the received signal as follows: \[\gamma_{data}=10\cdot\log_{10}(\frac{\sum_{n=0}^{N-1}\left|A_{n}s(n)\right|^{2 }}{\sum_{n=0}^{N-1}\left|\sigma(n)\right|^{2}})[dB]. \tag{3}\] #### Ii-B2 Channel Model For Rx-server communication link, we consider the path-loss attenuation, channel fading, and noise, and its SNR is expressed by the following equation: \[\gamma=10\cdot\log_{10}(\frac{h\cdot P\cdot d^{-\alpha}}{\sigma^{2}})[dB], \tag{4}\] where \(P\) is the transmit power, \(d\) is the distance between Tx and Rx, \(\alpha\) is the path loss attenuation exponent, which usually corresponds to a value greater than or equal to 2, and \(\sigma^{2}\) is the variance value of the AWGN in the wireless channel environment. Moreover, \(h\) indicates channel fluctuations and follows an exponential distribution with a mean of 1 (\(h\sim\exp{(1)}\)). ## III Split learning-based AMC This section introduces the operation of SplitAMC method along with two existing AMC methods, CentAMC and FedeAMC. Let \(m\) be the subscript for Rx. Then, \(m\)-th Rx of set \(\mathbb{M}=\{1,2,\cdots,M\}\) produces the IQ data samples via (2). For all \(m\in\mathbb{M}\), the local dataset \(D_{m}=(x_{m},y_{m})\) consists of IQ samples \(x_{m}\) and its corresponding ground-truth labels \(y_{m}\). As depicted in Fig. 1(c), to enable model-split architecture shown in [8], the entire model weight \(w_{m}\) is divided into the upper model segment \(w_{s}\) and the lower model segment \(w_{c,m}\) based on the cut-layer. ### _Training Phase: The Operation of SplitAMC_ **Client-side Forward Propagation (FP).** The \(m\)-th Rx randomly selects \(B\) data-label tuples from the local dataset \(D_{m}\) to compose a batch, and passes it through the lower model segment \(w_{c,m}\) to generate smashed data \(s_{m}\) as follows: \[s_{m}=f(x_{m};w_{c,m}), \tag{5}\] where \(f(\cdot)\) denotes a function that maps \(x_{m}\) to \(s_{m}\), determined by \(w_{c,m}\). 
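As an illustration of this step, the following minimal PyTorch sketch splits a small CNN at a cut-layer into a client-side segment \(w_{c,m}\) and a server-side segment \(w_{s}\), forwards a batch of constellation images through the client-side segment to obtain the smashed data of Eq. (5), and emulates the noisy analog uplink in the spirit of the channel model (4). The toy architecture, the noise level and the use of three output classes (matching the three modulation schemes considered later) are illustrative assumptions; they do not correspond to the ResNet-18 configuration used in the experiments.

```python
import torch
import torch.nn as nn

# A minimal sketch of the client-side step: a small CNN is split at a cut-layer
# into a lower segment (kept by the Rx) and an upper segment (kept by the
# server); the Rx forwards a batch of constellation images through the lower
# segment to obtain the smashed data of Eq. (5), which is then sent over the
# noisy Rx-server link before the server-side forward pass.

lower = nn.Sequential(                     # w_c : client-side segment
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
upper = nn.Sequential(                     # w_s : server-side segment
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 3))

x = torch.randn(64, 1, 64, 64)             # a batch of 64x64 constellation images
smashed = lower(x)                         # Eq. (5): s_m = f(x_m; w_{c,m})

# emulate the analog uplink: fading gain h ~ Exp(1) and additive Gaussian noise,
# in the spirit of the channel model (4); the noise scale is arbitrary here
h = torch.distributions.Exponential(1.0).sample()
noisy_smashed = h * smashed + 0.05 * torch.randn_like(smashed)

logits = upper(noisy_smashed)              # server-side forward pass
print(smashed.shape, logits.shape)
```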
Then, the \(m\)-th Rx sends \(s_{m}\) to the server through an analog-modulated signal. The received smashed data \(\bar{s}_{m}\), which reflects the fluctuation and noise of the uplink (UL) wireless channel between the Rx and the server, is expressed as follows: \[\bar{s}_{m}=h\cdot s_{m}+\mathbb{N}=h\cdot f(x;w_{c,m})+\mathbb{N}, \tag{6}\] where \(\mathbb{N}\) follows a zero-mean complex Gaussian distribution with variance \(\sigma^{2}\). **Server-side FP & Backpropagation (BP).** To update the server-side model, the smashed data with Gaussian noise \(\bar{s}_{m}\) becomes the input of the upper model segment located on the server, yielding a _softmax output_ \(\hat{y}_{m}\) as follows: \[\hat{y}_{m}=g(\bar{s}_{m};w_{s}), \tag{7}\] where \(g(\cdot)\) is a function that maps \(\bar{s}_{m}\) to \(\hat{y}_{m}\) due to the upper model segment \(w_{s}\). Then, by using the cross-entropy function, the loss \(L_{CE}\) can be calculated by: \[L_{CE}=-\frac{1}{d_{y}}\sum_{i=1}^{d_{y}}y_{m,i}\log(\hat{y}_{m,i}), \tag{8}\] where \(i\) is the subscript for the element, so that \(y_{m,i}\) and \(\hat{y}_{m,i}\) are the \(i\)-th elements of the ground-truth label \(y_{m}\) and the prediction \(\hat{y}_{m}\), respectively, while \(d_{y}\) denotes the dimension of \(y_{m}\). With the aid of downlink (DL) communication at the cut-layer, BP becomes available, allowing the model update of the Rx and the server as follows: \[\begin{bmatrix}w_{c,m}\\ w_{s}\end{bmatrix}\leftarrow\begin{bmatrix}w_{c,m}\\ w_{s}\end{bmatrix}-\eta\begin{bmatrix}\nabla_{w_{c,m}}L_{CE}\\ \nabla_{w_{s}}L_{CE}\end{bmatrix}, \tag{9}\] where \(\eta\) and \(\nabla_{w_{c,m}(w_{s})}\) are the learning rate and the partial derivative with respect to \(w_{c,m}(w_{s})\), respectively. After that, the \(m\)-th Rx transmits its lower model segment to the \((m+1)\)-th Rx as follows: \[w_{c,m+1}\gets w_{c,m}, \tag{10}\] completing the single communication round of SplitAMC. The training operation of SplitAMC is detailed in the pseudocode of **Algorithm 1**. ### _Inference Phase: Performance Metrics_ When the model \(w_{m}=[w_{c,m},w_{s}]\) converges after the \(K\) communication rounds, it enters the inference phase. To this end, two types of inference methods can be considered. In the first case, all clients download the shared upper model segment of the server, then use it as a classifier for the modulated signal transmitted from the paired Tx, followed by demodulation. In the other case, we can consider an inference method in which the client and server communicate the smashed data of the test data and the prediction through UL and DL communication, respectively, while retaining the model. In both inference methods, it is possible to measure the performance of the test model \(w_{m}\) via the following performance metrics: #### III-B1 Classification Performance The correct classification probability \(P_{cc}\) is employed to evaluate how accurately the test model \(w_{m}\) classifies the modulation scheme. When the total number of test samples is \(N_{test}\), \(P_{cc}\) is defined by: \[P_{cc}=\frac{N_{correct}}{N_{test}}\times 100[\%], \tag{11}\] where \(N_{correct}\) is the number of samples for which the modulation scheme is successfully classified among all \(N_{test}\) test samples. #### III-B2 Latency Model Meanwhile, we can model the latency that occurs in the second inference method mentioned above or in the training phase. We divide the overall latency into communication latency \(T_{comm}\), consisting of UL latency \(T_{UL}\) and DL latency \(T_{DL}\), and computation latency \(T_{comp}\). 
Here, we assume a static channel condition, i.e., \(h=1\) in (4), for convenience. In terms of communication latency, it is proportional to the number of transmitted bits and inversely proportional to the channel capacity, under the assumption of a static channel. Let \(L_{a}^{b}\) and \(\beta_{a}^{b}\) (\(a\in\{SL,CL,FL\}\), \(b\in\{UL,DL\}\)) denote the number of parameters exchanged and the number of bits per parameter during \(b\) communication between the Rx and the server in the \(a\)-based AMC method, respectively. Then, for the UL (DL) transmission rate \(R_{UL(DL)}=BW\cdot\log_{2}(1+\gamma)\) with \(\gamma\) in (4) where \(h=1\) and channel bandwidth \(BW\), the latency for UL (DL) communication is \(\frac{L_{a}^{UL(DL)}\cdot\beta_{a}^{UL(DL)}}{R_{UL(DL)}}\) for every method \(a\). Computational latency \(T_{comp}\) is classified into client-side latency \((T_{client})\) and server-side latency \((T_{server})\). Both latencies are expressed as the ratio of \(C\) to \(f\), where \(C\) is the number of CPU cycles for processing FP as well as BP, and \(f\) is the client-side or server-side computational capacity (in CPU cycles/s). Assuming that the server-side computational capacity is infinite, \(T_{client}\) and \(T_{server}\) become the unit computing time \(\tau_{comp}\) and 0, respectively, during 1 communication round. Table I summarizes \(T_{comm}\) and \(T_{comp}\) for 1 communication round of SplitAMC as well as FedeAMC and CentAMC. Note that the proposed SplitAMC can benefit in terms of computational latency, multiplied by the \(\lambda\in[0,1]\), depending on where the cut-layer is located.

Fig. 2: Graphical illustrations of a) centralized learning, b) federated learning (FL), and c) split learning (SL).

### _Other AMC Methods_ #### III-C1 CentAMC [6] As described in Fig. 1(a), in CentAMC, all local datasets \(D_{m}\) are aggregated on the server through a wireless channel to form a single global dataset, becoming the input for global model training on the server. By doing this, CentAMC easily obtains a data diversity gain, but it causes a communication bottleneck when sending raw samples and violates data privacy guarantees. #### III-C2 FedeAMC [7] In FedeAMC, the server distributes the global model to each local client \(m\in\mathbb{M}\). Then, all local clients train the model using the local dataset \(D_{m}\). After that, as shown in Fig. 1(b), local model parameters \(w_{m}\) are transmitted to the server through the wireless channel link. The server takes a weighted average of the aggregated local parameters, yielding a global model \(w\), i.e., \(w=(\sum_{m=1}^{M}|D_{m}|\cdot w_{m})/\sum_{m=1}^{M}|D_{m}|\). This approach can obtain a data diversity gain without direct data exchange, but a large communication overhead occurs for large-sized models. ## IV Experimental Results This section evaluates SplitAMC's performance in terms of accuracy, convergence speed, and latency. For comparison, we use the aforementioned CentAMC and FedeAMC, and also SplitAMC with different cut-layer locations. For the ResNet-18 model [13] with 4 residual blocks, SplitAMC\({}_{(1,3)}\) and SplitAMC\({}_{(2,2)}\) represent the SplitAMC framework when the cut-layers are located after the 1st and 2nd blocks, respectively. For signal generation between Tx and Rx, the Communications Toolbox in MATLAB is adopted. Considering the modulation schemes of QPSK, 16-QAM, and 64-QAM when the SNR \(\gamma_{data}\) is 5, 10, and 15 dB, each modulation scheme includes 5000 data symbols, each composed of 1000 IQ samples. 
Prior to being passed to the CNN model, each IQ sample is transformed into a constellation image and resized to 64\(\times\)64 dimensions as in [13, 14]. For the experiments, we employ an Intel i7-10700 CPU and a GTX 3080Ti GPU, where the software environment is based on PyTorch v1.10.0 and CUDA v11.3 with Python v3.9.4. Other simulation parameters are given as: \(M=2\), batch size \(B=64\), total communication rounds \(K=50000\) steps, learning rate \(\eta=0.004\), transmit power \(P=100\) mW, distance between Rx and server \(d=100\) m, and path-loss exponent \(\alpha=2\).

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \hline \multirow{2}{*}{_Latency_} & \multicolumn{2}{c|}{\(T_{comm}\)} & \(T_{comp}\) \\ \cline{2-4} & \(T_{UL}\) & \(T_{DL}\) & \(T_{client}\) \\ \hline \hline **SplitAMC** & \(\frac{L_{SL}^{UL}\cdot\beta_{SL}^{UL}}{R_{UL}}\) & \(\frac{L_{SL}^{DL}\cdot\beta_{SL}^{DL}}{R_{DL}}\) & \(\lambda\cdot\tau_{comp}\) \\ **FedeAMC** & \(\frac{L_{FL}^{UL}\cdot\beta_{FL}^{UL}}{R_{UL}}\) & \(\frac{L_{FL}^{DL}\cdot\beta_{FL}^{DL}}{R_{DL}}\) & \(\tau_{comp}\) \\ **CentAMC** & \(\frac{L_{CL}^{UL}\cdot\beta_{CL}^{UL}}{R_{UL}}\) & \(\frac{L_{CL}^{DL}\cdot\beta_{CL}^{DL}}{R_{DL}}\) & - \\ \hline \hline \end{tabular} \end{table} TABLE I: A summary of communication and computation latency for a single communication round of AMC methods.

### _Accuracy & Convergence Speed_ Table II shows the classification accuracy of the AMC methods according to different \(\gamma_{data}\) and channel environments. We consider two wireless communication environments depending on whether there is channel fluctuation, where the averaged SNR with channel fluctuation \(\gamma_{avg}\) and the fixed SNR without channel fluctuation \(\gamma_{fix}\) are the same as \(\gamma\) with \(h=1\) in (4). First, in all cases except for a few cases when \(\gamma_{data}\) is 10dB, the classification accuracy is high in the order of SplitAMC, CentAMC, and FedeAMC, showing the **noise-robustness** of SplitAMC. This is rooted in the scale difference of the parameters exchanged through the Rx-server link in SplitAMC and FedeAMC, shown in Fig. 3. Considering also CentAMC, which uploads raw data with mean and variance normalized to 0.5, when noise of the same size is injected, the relatively large parameter scale of SplitAMC leads to small performance degradation. These parameter scale gaps can be compensated by increasing the transmit power (e.g., 200 times the transmit power [mW] for a 200 times scale difference), which is unachievable given the client's power constraints. Fig. 3 also implies a faster convergence speed of SplitAMC compared to FedeAMC through the parameter distribution change according to the number of steps, which is confirmed from the learning curves of Fig. 4. Returning to Table II, the classification performance tends to deteriorate as \(\gamma_{data}\) or \(\gamma_{avg}(\gamma_{fix})\) decreases and when channel fluctuations exist. \(\gamma_{data}\) and \(\gamma_{avg}(\gamma_{fix})\) are respectively related to the noise inherent in the IQ data itself and the noise applied to the resized constellation image, and it is confirmed that the performance reduction for \(\gamma_{data}\) is noticeable among them, indicating the importance of the Tx-Rx communication link state. The existence of channel fluctuations reduces the accuracy since it is difficult to learn a generalized model in the training phase. 
Focusing on the gap in classification accuracy between AMC methods, the worse the communication channel conditions, the greater the difference, proving the effectiveness of SplitAMC. In addition, in Table II as well as in Fig. 4, there is no significant difference in performance in terms of accuracy or convergence speed for cut-layer in SplitAMC. ### _Latency Measurement for Training_ Fig. 5 compares latency between AMC methods according to the ratio of \(\tau_{comp}\) and \(\tau_{comm}\), which is the reciprocal of uplink rate \(R_{UL}\). Here, to reflect the UL-DL asymmetric capacity of the cellular network, it is assumed that \(R_{DL}=10R_{UL}\). Other parameters for calculating latency are as follows: \(\beta_{a}^{b}=32\) bits for all \(a\) & \(b\), \(\gamma=10\) dB for UL communication, \(BW=10\) MHz, and \(50\) total communication rounds. The first thing to note is that overall latency performance improves when the ratio is 10:1, i.e., when \(\tau_{comm}\) is small. In other words, this means that the communication payload size of UL, which determines UL latency, is the main factor influencing latency. Thanks to the low dimension of smashed data, SplitAMC outperforms the baselines, except for CentAMC at 10:1. In the case of CentAMC, \(T_{comp}\) converges to 0 under our premise that the server-side computational capacity is infinite, and this dramatically reduces overall latency when \(\tau_{comp}\) : \(\tau_{comm}\) is 10:1. Regarding the cut-layer, the closer the cut-layer is to the input layer, the better in terms of latency. This is because the client-side computational latency varies according to \(\lambda\) as the cut-layer location changes, although there is no huge gap in the dimension of activation. When considering its accuracy and convergence speed together, it is optimal to place the cut-layer close to the input layer in SplitAMC, even beneficial in terms of client memory. Note that the above method is also applicable for the inference phase, by replacing the payload size. \begin{table} \begin{tabular}{|c|c c|c c|c c|c c|c c|c c|} \hline \hline \multirow{2}{*}{Channel SNR} & \multicolumn{4}{c|}{\(\gamma_{data}=5\)dB} & \multicolumn{4}{c|}{10dB} & \multicolumn{4}{c|}{15dB} \\ \cline{2-13} & \multicolumn{3}{c|}{**Avg. SNR (\(=\gamma_{avg}\) )**} & \multicolumn{3}{c|}{**Fixed SNR (\(=\gamma_{fix}\))**} & \multicolumn{3}{c|}{**Avg. SNR**} & \multicolumn{3}{c|}{**Fixed SNR**} & \multicolumn{3}{c|}{**Avg. SNR**} & \multicolumn{3}{c|}{**Fixed SNR**} \\ \cline{2-13} & -10dB & +10dB & -10dB & +10dB & -10dB & +10dB & -10dB & +10dB & -10dB & +10dB & -10dB & +10dB \\ \hline \hline **SplitAMC\({}_{(1,3)}\)** & 81.2 \% & 82.3 \% & 81.7 \% & 82.0 \% & 97.8 \% & 98.6 \% & 98.6 \% & 99.0 \% & 99.6 \% & 99.9 \% & 99.7 \% & 99.9 \% \\ \hline **SplitAMC\({}_{(2,2)}\)** & 81.1 \% & 81.1 \% & 80.8 \% & 82.5 \% & 97.7 \% & 98.9 \% & 98.6 \% & 99.0 \% & 99.6 \% & 99.9 \% & 99.8 \% & 99.9 \% \\ \hline **CentAMC** & 66.6 \% & 67.5 \% & 68.4 \% & 68.7 \% & 77.3 \% & 78.4 \% & 77.9 \% & 78.7 \% & 97.4 \% & 97.5 \% & 97.7 \% & 97.8 \% \\ \hline **FedeAMC** & 64.7 \% & 64.4 \% & 64.4 \% & 65.1 \% & 79.4 \% & 79.2 \% & 79.1 \% & 78.6 \% & 93.0 \% & 93.5 \% & 92.9 \% & 93.4 \% \\ \hline \hline \end{tabular} \end{table} TABLE II: Classification performance \(P_{cc}\) of SplitAMC and comparison groups on datasets with different SNRs \(\gamma_{data}\). Fig. 3: Cumulative distributions of parameter values in SplitAMC and FedeAMC for different steps. 
## V Conclusion In this paper, we revisited the SL framework and applied it to AMC to design the SplitAMC framework. Unlike the existing AMC methods, in SplitAMC, the client and server each hold a fraction of the entire model and exchange the output and gradient of the cut-layer, thereby resulting in a small memory size, computational cost, and communication payload size. Moreover, with the help of the large scale inherent in the smashed data, SplitAMC guarantees noise-robustness and can achieve high accuracy even when noise with large variance is injected. Numerical evaluations validate the accuracy, convergence speed, and latency of SplitAMC. It is worth noting that this work focused on connecting parametric properties in distributed learning (i.e., scale and dimension) to metrics in wireless communication (i.e., classification accuracy and latency). As future work, we will explore the classification performance as the number of clients increases, as in [15]. Also, investigating the accuracy-privacy tradeoff for the noise variance [16] in SplitAMC could be an interesting topic, deferred as future work. ## Acknowledgement This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00420, Development of Core Technologies enabling 6G End-to-End On-Time Networking & No. 2021-0-00270, Development of 5G MEC framework to improve food factory productivity, automate and optimize flexible packaging).
2310.00943
Semi-Blind Image Deblurring Based on Framelet Prior
The problem of image blurring is one of the most studied topics in the field of image processing. Image blurring is caused by various factors such as hand or camera shake. To restore the blurred image, it is necessary to know information about the point spread function (PSF). Because in most cases it is not possible to accurately calculate the PSF, we are dealing with an approximate kernel. In this paper, the semi-blind image deblurring problem is studied. Because the model of the deblurring problem is ill-conditioned, it is not possible to solve this problem directly. One of the most efficient ways to solve this problem is to use the total variation (TV) method. In the proposed algorithm, by using the framelet transform and fractional calculations, the TV method is improved. The proposed method is used on different types of images and is compared with existing methods using different types of tests.
M. Zarebnia, R. Parvaz
2023-10-02T07:25:05Z
http://arxiv.org/abs/2310.00943v1
# Semi-Blind Image Deblurring Based on Framelet Prior ###### Abstract The problem of image blurring is one of the most studied topics in the field of image processing. Image blurring is caused by various factors such as hand or camera shake. To restore the blurred image, it is necessary to know information about the point spread function (PSF). And because in the most cases it is not possible to accurately calculate the PSF, we are dealing with an approximate kernel. In this paper, the semi-blind image deblurring problem are studied. Due to the fact that the model of the deblurring problems is an ill-conditioned problem, it is not possible to solve this problem directly. One of the most efficient ways to solve this problem is to use the total variation (TV) method. In the proposed algorithm, by using the framelet transform and fractional calculations, the TV method is improved. The proposed method is used on different types of images and is compared with existing methods with different types of tests. Department of Mathematics, University of Mohaghegh Ardabili, 56199-11367 Ardabil, Iran. _Keywords:_ Framelet; Fractional calculations; Semi-blind deblurring; Total variation. ## 1 Introduction Digital image processing is one of the widely used branches of computer science and mathematics. In this branch of science, various topics such as the image deblurring, object detection and image enhancement are studied. In this paper, image deblurring is studied. When taking an image, various factors cause the image to blur, including hand shake or camera shake. In addition, the images taken from the sky by the telescope and also the images taken by the microscope and medical equipment can be studied in this category of issues. Therefore, the study of this topic has many uses in various sciences such as astronomy and medicine. Assuming that \(X\) and \(Y\) in \(\mathbb{R}^{n\times m}\) indicate the clear and blurred images, respectively. Then the model used for the blurred image process is as follows \[Y=k\otimes X+N.\] In this model, \(\otimes\) shows the two dimensional convolution operator, and \(k\in\mathbb{R}^{r\times s}\) represents blur kernel that obtained by point spread function (PSF). This function is formulated based on the physical process that causes the blurred image. For example, the Moffat function is used in astronomical telescope. The last parameter that appears in this model is \(N\). The effect of noise on the blurred image can be linear or non-linear and can also have different sources. Among these noises, we can refer to the poisson and gaussian noises. For a better understanding of this model, the details of the blurring process are given for an example in Figure 1. The method of solving this problem is different depending on the type of noise [12, 15]. Also, The image deblurring model can be rewritten as a system of equations as below \[y=Kx+n,\] where \(y,x\in\mathbb{R}^{nm\times 1}\) represent the pixels of the blurred and clear images in vector form, respectively. \(n\) is used for noise. Also, \(K\in\mathbb{R}^{nm\times nm}\) represents the matrix that is obtained according to the PSF and boundary conditions. For example, if the PSF is nonseparable and the boundary conditions are considered to be zero, then this matrix is the block Toeplitz with Toeplitz blocks (BTTB). The reader can find a full discussion of the structure of this matrix in [6]. 
Figure 1: Process of image blurring.

In addition to the ill-conditioned nature of this problem, the large size of the equations of this system also makes it very difficult to solve this problem directly. One of the most effective ways to reduce the amount of calculations and increase the speed of solving this problem is to use the total variation (TV) method. This method is used in [13], and its effectiveness attracted the attention of researchers. In the following years, TV was used in various papers such as [1, 9]. To improve the efficiency of this method, various penalty terms and norms have been studied in relation to the TV method, for example [10, 11]. In this paper, we consider a special case of this type of problem, which is known as the semi-blind deblurring problem. In this type of problem, there is information about the blur kernel, but it is known only up to an error. The mathematical model of this type of problem is written as follows \[Y=(k_{0}+e)\otimes X+N,\] where \(k_{0}\) and \(e\in\mathbb{R}^{r\times s}\) denote the observed PSF and the error, respectively. In this model, the value of \(y\) and \(k_{0}\) is known and the aim is to approximate \(e\) and \(x\). Also, this model can be rewritten as a system of equations as below \[y=(K_{0}+E)x+n, \tag{1.1}\] where \(K_{0},E\in\mathbb{R}^{nm\times nm}\) are obtained according to the boundary conditions and the structure of \(k_{0}\) and \(e\), respectively. This problem has been studied in various papers such as [3, 16]. With the recent development of mathematical science, new tools have been introduced for image processing. Among these tools, the framelet transform and fractional calculations deserve mention. These tools have been used in various algorithms to solve the problem of image deblurring, among which we can refer to [7, 8]. The framelet transform is known to enhance the restored image due to the sparsity of representations in image domains. Additionally, when the fractional derivative is combined with the total variation model, the restored image exhibits distinct and sharp edges. In the proposed algorithm, these tools are used to improve the total variation method. After presenting the proposed model, the alternating direction method of multipliers has been used to solve the proposed model, and the results of the method show the improvement of the algorithm. The rest of the paper is organized as follows: In Section 2, introductory discussions about the framelet transform and fractional derivatives are presented, and then by using these tools a model is introduced for semi-blind image deblurring. In Section 3, the numerical algorithm based on the alternating direction method of multipliers is introduced for the proposed model. Numerical results and analysis related to the proposed algorithm are given in Section 4. The summary of the paper is presented in Section 5. ## 2 Preliminaries and proposed model In this section, in the first subsection, the preliminary topics used in this paper are presented, and then the proposed model for semi-blind image deblurring is introduced. ### Framelet transform and discrete fractional-order gradient In this subsection, the concepts of the framelet transform are briefly studied. The reader can find more information about this concept in [5]. Let \(\phi=\{\varphi_{\mu}\}_{\mu=1}^{r}\subset L^{2}(\mathbb{R}^{d})\), if
there are two constants \(A\) and \(B\), so that for the affine system \[\psi_{\mu,j,k}:=2^{\frac{jd}{2}}\varphi_{\mu}(2^{j}t-k),\ \ j\in\mathbb{Z},k\in\mathbb{Z}^{d},\mu=1,\cdots,r,\] the following relation is satisfied \[A\|f\|^{2}\leq\sum_{\mu,j,k}\left|\langle f,\psi_{\mu,j,k}\rangle\right|^{2}\leq B\|f\|^{2},\ \ \ \forall f\in L^{2}(\mathbb{R}^{d}),\] where \(\langle\cdot,\cdot\rangle\) and \(\|\cdot\|\) denote the inner product and norm in \(L^{2}(\mathbb{R}^{d})\), respectively, then \(\phi\) is called a framelet. When \(A=B\), \(\phi\) is named a tight framelet, and if \(A=B=1\), \(\phi\) is named a Parseval frame. Also, when \(\|\varphi_{\mu}\|=1\), the Parseval frame is called a wavelet. By using the synthesis and analysis operators [5], the frame operator is considered as \(S_{\phi}:L^{2}(\mathbb{R}^{d})\to L^{2}(\mathbb{R}^{d}),\ S_{\phi}f=\sum_{\mu}\langle f,\varphi_{\mu}\rangle\varphi_{\mu}\), where \(f\in L^{2}(\mathbb{R}^{d})\). By using this operator, we can write \[f=\sum_{\mu}\langle f,S_{\phi}^{-1}\varphi_{\mu}\rangle\varphi_{\mu}=\sum_{\mu}\langle f,\varphi_{\mu}\rangle S_{\phi}^{-1}\varphi_{\mu},\ \ \ \forall f\in L^{2}(\mathbb{R}^{d}).\] Therefore, based on the above relation, an image can be expanded by the frame. In this paper, the Parseval frame constructed from the B-spline with refinement mask \(h_{0}=1/4[1,2,1]\) and two framelet masks \(h_{1}=\sqrt{2}/4[1,0,-1]\) and \(h_{2}=1/4[-1,2,-1]\) is used for the image transform. In the rest of this paper, the matrix of the framelet transform is denoted by \(W\). Also, an example for this type of transform is given in Figure 2.

Figure 2: Example of framelet transform.

Another concept used in the proposed model is the fractional derivative. There are various definitions for fractional derivatives, but in this paper, we focus on the G-L fractional-order derivatives specifically designed for functions with a discrete structure [4]. Considering \(\phi_{l}^{\alpha}:=(-1)^{l}\frac{\Gamma(\alpha+1)}{\Gamma(l+1)\Gamma(\alpha+1-l)}\), the discrete fractional-order gradient of order \(\alpha\in\mathbb{R}^{+}\) is defined as \(\nabla^{\alpha}X=[D_{h}^{\alpha}X,D_{v}^{\alpha}X]^{T}\), where the horizontal and vertical derivatives are obtained as \[D_{h}^{\alpha}X_{i,j}=\sum_{l=0}^{q-1}\phi_{l}^{\alpha}X_{i-l,j},\quad D_{v}^{\alpha}X_{i,j}=\sum_{l=0}^{q-1}\phi_{l}^{\alpha}X_{i,j-l}.\] In the above formulas, \(q\) represents the number of neighboring pixels. In this paper, this value is selected equal to \(15\) for the numerical results. ### Proposed model The proposed model to solve the problem (1.1) is presented as follows \[(x,E)=\arg\min_{x,E}\frac{1}{2}\|(K_{0}+E)x-y\|^{2}+\sum_{i=1}^{2}\lambda_{i}\|\omega_{i}x\|+\frac{\lambda_{3}}{2}\|E\|^{2}+\delta_{[0,1]}(x), \tag{2.1}\] where \(\omega_{1}=W\), \(\omega_{2}=\nabla^{\alpha}\), \(\|\cdot\|\) denotes the \(2\)-norm, and \(\delta\) is a convex projection operator defined as \[\delta_{[0,1]}(x)=\begin{cases}0,&\text{if }x\in[0,1],\\ \infty,&\text{if }x\notin[0,1].\end{cases}\] Also, in this minimization problem, \(\{\lambda_{i}\}_{i=1}^{3}\) are considered as positive parameters. According to the definition of joint convexity, this model is clearly not a jointly convex model in \((x,E)\). However, as will be seen in the rest of the paper, this model can be divided into two sub-models. As can be seen in the proposed model, the framelet transform and the discrete fractional-order gradient are used.
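For concreteness, the discrete G-L gradient \(\nabla^{\alpha}\) entering (2.1) can be implemented directly from the definitions above. The following minimal NumPy sketch builds the coefficients \(\phi_{l}^{\alpha}\) and applies \(D_{h}^{\alpha}\) and \(D_{v}^{\alpha}\) with \(q=15\) neighboring pixels; the order \(\alpha=1.3\) and the circular indexing are illustrative assumptions (the periodic treatment is chosen to match the boundary condition adopted later for the solver).

```python
import numpy as np
from math import gamma

# Minimal NumPy sketch of the discrete G-L fractional-order gradient:
# coefficients phi_l^alpha and the horizontal/vertical differences
# D_h^alpha, D_v^alpha with q neighboring pixels (circular indexing).

def gl_coeffs(alpha, q):
    # phi_l = (-1)^l * Gamma(alpha+1) / (Gamma(l+1) * Gamma(alpha+1-l))
    return np.array([(-1)**l * gamma(alpha + 1) / (gamma(l + 1) * gamma(alpha + 1 - l))
                     for l in range(q)])

def frac_gradient(X, alpha=1.3, q=15):
    phi = gl_coeffs(alpha, q)
    Dh = np.zeros_like(X, dtype=float)
    Dv = np.zeros_like(X, dtype=float)
    for l in range(q):
        Dh += phi[l] * np.roll(X, l, axis=0)   # contribution of X_{i-l, j}
        Dv += phi[l] * np.roll(X, l, axis=1)   # contribution of X_{i, j-l}
    return Dh, Dv

X = np.random.rand(64, 64)       # stand-in for a grayscale image
Dh, Dv = frac_gradient(X)
print(Dh.shape, Dv.shape)
```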
Combining the framelet transform and fractional derivative with the total variation model has been observed to improve the quality of restored images. This is primarily because the framelet transform promotes sparsity in image representations, leading to accurate restoration. Moreover, by using fractional derivative, the restored image obtain sharp edges. In the next section, an algorithm for the numerical solution of this model is presented. ## 3 The solution of the proposed model The problem (2.1) can be divided into the following two sub-problems. To solve the problem (2.1), the iterative method is used. Suppose that at the \(k\)th step the values \(x^{k}\) and \(e^{k}\) have been calculated, then at the \((k+1)\)th step, \(x^{k+1}\) can be calculated by solving the following sub-problem. \[x^{k+1}=\arg\min_{x}\frac{1}{2}\|(K_{0}+E^{k})x-y\|^{2}+\sum_{i= 1}^{2}\lambda_{i}\|\omega_{i}x\|\] \[+\delta_{[0,1]}(x)+\frac{\beta_{1}}{2}\|x-x^{k}\|^{2}. \tag{3.1}\] In this formula, \(\beta_{1}\) is regarded as positive parameter and the last sentence has been added to control and minimize any significant alterations. likewise, by considering \(\beta_{2}\) as positive parameter, after calculating \(x^{k+1}\), \(e^{k+1}\) is obtained as follows. \[E^{k+1}=\arg\min_{E}\frac{1}{2}\|(K_{0}+E)x^{k+1}-y\|^{2}+\frac{ \lambda_{3}}{2}\|E\|^{2}+\frac{\beta_{2}}{2}\|E-E^{k}\|. \tag{3.2}\] ### Solving subproblems In this subsection, the method of solving the subproblems is presented. In the first step, the alternating direction method of multipliers (ADMM) is used for the sub-problem (3.1). By auxiliary variables \(\{\eta_{i}\}_{i=1}^{3}\), the subprobelem (3.1) can be written as \[x^{k+1}=\arg\min_{x}\frac{1}{2}\|(K_{0}+E^{k})x-y\|^{2}+\sum_{i= 1}^{2}\lambda_{i}\|\eta_{i}\|+\delta_{[0,1]}(\eta_{3})+\frac{\beta_{1}}{2}\|x -x^{k}\|^{2},\] \[s.t:\eta_{1}=Wx,\ \ \eta_{2}=\nabla^{\alpha}x,\ \ \eta_{3}=x.\] Then the augmented Lagrangian for this problem is obtained as \[L(x,\eta_{1},\eta_{2},\eta_{3}) =\frac{1}{2}\|(k_{0}+E^{k})x-y\|^{2}+\sum_{i=1}^{2}\lambda_{i}\| \eta_{i}\|+\delta_{[0,1]}(\eta_{3})\] \[+\frac{\beta_{1}}{2}\|x-x^{k}\|^{2}+\sum_{i=1}^{3}\big{(}\langle \partial_{i},\varpi_{i}x-\eta_{i}\rangle+\frac{\beta_{3}}{2}\|\varpi_{i}x- \eta_{i}\|^{2}\big{)},\] where \(\beta_{3}>0\) and \(\varpi_{3}=I\) (identity matrix). Then the extended iterative algorithm for ADMM is obtained as \[x^{k+1} =\arg\min_{x}\frac{1}{2}\|(K_{0}+E^{k})x-y\|^{2}+\sum_{i=1}^{3} \big{(}\langle\theta_{i},\varpi_{i}x-\eta_{i}\rangle+\frac{\beta_{3}}{2}\| \varpi_{i}x-\eta_{i}\|^{2}\big{)}\] \[\quad+\frac{\beta_{1}}{2}\|x-x^{k}\|^{2}, \tag{3.3}\] \[\eta_{i}^{k+1} =\arg\min_{\eta_{i}}\lambda_{i}\|\eta_{i}\|+\langle\theta_{i}^{k },\varpi_{i}x^{k+1}-\eta_{i}\rangle+\frac{\beta_{3}}{2}\|\varpi_{i}x^{k+1}- \eta_{i}\|^{2},\ \ i=1,2,\] (3.4) \[\eta_{3}^{k+1} =\arg\min_{\eta_{3}}\delta_{[0,1]}(\eta_{3})+\langle\theta_{3}^{ k},x^{k+1}-\eta_{3}\rangle+\frac{\beta_{3}}{2}\|x^{k+1}-\eta_{3}\|^{2},\] (3.5) \[\theta_{i}^{k+1} =\theta_{i}^{k}+\beta_{3}(\varpi_{i}x^{k+1}-\eta_{i}^{k+1}),\ \ i=1,2,3. \tag{3.6}\] To solve problem (3.3), the periodic boundary condition is considered. Since by using this condition, the blur matrix can be calculated by fast Fourier transform [6]. Then the solution of (3.3), by considering the periodic condition and using the optimal condition is obtained as follows. 
\[x^{k+1}=\mathcal{F}^{-1}\Big{[}\frac{\mathcal{F}\big{[}(K_{0}+E^{k})^{*}\,y+ \beta_{1}x^{k}+\sum_{i=1}^{3}\varpi_{i}^{*}(\beta_{3}\eta_{i}-\theta_{i}) \big{]}}{\mathcal{F}\big{[}(K_{0}+E^{k})^{*}(K_{0}+E^{k})+(\beta_{1}+2\beta_{3 })I+\beta_{3}(\nabla^{\alpha})^{*}\nabla^{\alpha}\big{]}}\Big{]}, \tag{3.7}\] where \(*\), \(\mathcal{F}\) and \(\mathcal{F}^{-1}\) represent the complex conjugacy, fast Fourier transform and inverse fast Fourier transform, respectively. The closed form solution of (3.4), by using the proximal mapping [2], can be written as \[\eta_{i}^{k+1}=\max\Big{\{}\|\omega_{i}x^{k}+\frac{\theta_{i}^{k}}{\beta_{3}}\|- \frac{\lambda_{i}}{\beta_{3}},0\Big{\}}\frac{\omega_{i}x^{k}+\frac{\theta_{i}^ {k}}{\beta_{3}}}{\|\omega_{i}x^{k}+\frac{\theta_{i}^{k}}{\beta_{3}}\|},\;\;i=1,2. \tag{3.8}\] Also, the solution of (3.5) can be obtained as \[\eta_{3}=\max\Big{\{}0,\min\big{\{}1,x^{k}+\frac{\theta_{3}^{k}}{\beta_{3}} \big{\}}\Big{\}}. \tag{3.9}\] In summary, based on the relations mentioned in the previous part, the algorithm for calculating the value of \(x^{k+1}\) can be expressed as Algorithm 1. ``` Input:\(y\), \(x^{k}\), \(K_{0}\), \(E^{k}\), \(\beta_{1}\), \(\beta_{3}\), \(\lambda_{1}\), \(\lambda_{2}\), \(\{\theta_{i}^{0}\}_{i=1}^{3}\), \(j=0\); repeat obtain \(x^{j}\) by using (3.7); obtain \(\eta_{i}^{j}\), \(i=1,2\) by using (3.8); obtain \(\eta_{3}^{j}\) by using (3.9); update \(\theta_{i}^{j}\) by using (3.6); \(j=j+1\); untilConverged; Output: Deblurred image \(x^{k+1}\gets x^{j}\). ``` **Algorithm 1**. As the last step in solving the sub-problems, using fast Fourier transform, the solution of (3.2) is calculated as follows. \[E^{k+1}=\mathcal{F}^{-1}\Big{[}\frac{\mathcal{F}\big{[}(y-K_{0}x^{k+1})(x^{k+1 })^{*}+\beta_{2}E^{k}\big{]}}{\mathcal{F}\big{[}x^{k+1}(x^{k+1})^{*}+(\lambda_ {3}+\beta_{2})I\big{]}}\Big{]}. \tag{3.10}\] Therefore, based on the above discussion, the final algorithm for calculating the clear image is expressed as Algorithm 2. The proposed method is a convergent method and the convergence of the proposed method can be proven with the exact same method as in articles [3] and [16]. In the simulation results section, the convergence of the method is studied using the obtained results. ``` Input: The maximum number of iterations \((\text{MaxIt}),k=0\), the tolerance (tol), \(\{\lambda_{i}\}_{i=1}^{3}\), \(\beta_{1}\), \(u^{0}\) and \(E^{0}\) as start values; repeat obtain \(x^{k}\) by using Algorithm 1; ``` **Algorithm 2**. obtain \(E^{k}\) by solving (3.10); \(k=k+1\); **until** Error= \(\frac{\|u^{k+1}-u^{k}\|}{\|u^{k+1}\|}\leq\text{tol}\) or \(k\leq\text{Maxlt}\); **Output:** Deblurred image \(x\gets x^{k}\). ## 4 Simulation results In this section, we study the results of the algorithm described in the previous section and demonstrate the efficiency of the algorithm through various tests. ### Implementation platform and dataset details The system used for simulation includes Windows 10-64bit and Intel(R) Core(TM)i3-5005U [email protected]. Also, MATLAB 2014b and its internal functions are used in all simulations. Internal functions fspecial and wgn are used to generate the PSF and white Gaussian noise, respectively. To calculate the n-by-m noise matrix, the structure of the function wgn is considered as N=wgn(n,m,p,\({}^{\prime}\)dBm\({}^{\prime}\)), and it should be noted that the dBm unit is different from the dB unit. Additionally, other units such as dBW and ohm can be used for calculations. 
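To complement the MATLAB setup described here and in the next paragraph, a NumPy sketch of the same degradation pipeline might look as follows. All names are illustrative; the noise term is a simple Gaussian stand-in rather than an exact reproduction of wgn's dBm power specification, and the FFT-based circular convolution corresponds to the periodic boundary condition used in the algorithm.

```python
import numpy as np

def gaussian_psf(size=15, sigma=1.5):
    # Normalized Gaussian PSF, analogous to fspecial('gaussian', [size size], sigma).
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def circular_blur(X, k):
    # Blur under the periodic boundary condition (cf. imfilter(.,.,'circular')):
    # the blur operator is diagonalized by the 2-D FFT.
    n, m = X.shape
    r, s = k.shape
    k_pad = np.zeros((n, m))
    k_pad[:r, :s] = k
    k_pad = np.roll(k_pad, (-(r // 2), -(s // 2)), axis=(0, 1))  # center the kernel
    return np.real(np.fft.ifft2(np.fft.fft2(X) * np.fft.fft2(k_pad)))

rng = np.random.default_rng(0)
X = rng.random((256, 256))                  # stand-in for a test image scaled to [0, 1]
k = gaussian_psf(15, 1.5)                   # true PSF
e = 0.001 * rng.standard_normal(k.shape)    # PSF error, e = std * randn
k0 = k - e                                  # observed (inexact) PSF
N = 1e-2 * rng.standard_normal(X.shape)     # additive noise (stand-in for wgn)
Y = circular_blur(X, k) + N                 # degraded observation Y = (k0 + e) (*) X + N
```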
Also, the function imfilter(-,-, 'circular\({}^{\prime}\)') is used to produce the blurred image. After calculating the value of \(k\) by fspecial, the PSF error matrix \(e\) is calculated as e=std*randn (-), and then the matrix \(k_{0}\) is considered as \(k_{0}=k-e\). The images used in this section are from the USC-SIPI images database 1. In this section, the command rgb2gray is used to convert a color image to a gray image. Also, in cases where the size of the image is different from the size of the source, the command imresize is used to reduce the size of the image. In the calculations related to the proposed algorithm, the value of tol is considered as \(10^{-3}\). Also, in the numerical experiments, the following ranges are selected for input parameters: \(\lambda_{1},\lambda_{2},\lambda_{3}\in\{10^{-6},10^{-5},10^{-4},10^{-3},10^{ 3},10^{5}\}\), \(\beta_{1},\beta_{2}\in\{0.1,1,10\}\), \(\beta_{3}\in\{10^{l};i=-6,\cdots,0\}\) and \(\alpha\in\{0.25,0.5,0.75,1,1.5,1.75\}\). After restoring the image, three quantities are used to evaluate the numerical results and compare them with other methods: peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and feature similarity (FSIM). More information about these values can be found in [14]. Footnote 1: [http://sipi.usc.edu/database/](http://sipi.usc.edu/database/) ### Numerical experiments As first example, 5.2.10 (\(256\times 256\)), \(k_{1}\)=fspecial(\({}^{\prime}\)gaussian\({}^{\prime}\), [15 15],1.5), \(k_{2}\)=fspecial(\({}^{\prime}\)motion\({}^{\prime}\),10,45) and std=0.001 are considered. Also, in this section noise matrix is regarded as \(N\)=wgn(-,-,4,\({}^{\prime}\)dBm\({}^{\prime}\)). In Table 1, the results of the proposed method are compared with the results of other methods for two PSFs. For another experiment, the results of restored image and its enlargement part for PSF \(k_{1}\) are given in Figure 3. The results show the proper restore of the clear image. Also, in Figure 4, the results of the proposed algorithm are compared with the methods presented in [3] and [16]. In this figure, a portion of the image is enlarged to demonstrate the effect of the method on the image. By analyzing these results, it can be seen that the proposed algorithm improves the quality of the restored image. To check the convergence of the proposed algorithm, the figures of error, PSNR, SSIM and FSIM are given in Figure 5. The results of this figure shows the convergence of the method. Figure 4: (a) Ground-truth image, (b) degraded image with \(k_{2}\), restored image by: (c) method in [16], (d) method in [3], (e) proposed algorithm. In the next example, 4.1.05 (\(256\times 256\)) is chosen. For this example, \(k=\text{fspecial}\) ('motion',10,45) is considered as PSF. In Table 2, the results of various algorithms and the proposed algorithm are compared for different values of the std. Visual comparison with different values of std for proposed method and methods in [3, 16] is given in Figure 6. By comparing the results presented for this example, it is evident that the proposed method outperforms the other methods being compared. Also, in order to check the convergence of the proposed algorithm, the Error, PSNR, FSIM and SSIM curves are drawn in Figure 7. Figure 6: Visual comparison for 4.1.05 with: (a-d) std=0.0005, (e-h) std=0.005, (i-l) std=0.05. As the last example, boat (\(512\times 512\)) with \(k_{3}=\texttt{fspecial}(\text{'gaussian'},\)[19 19],2) and \(k_{4}=\texttt{fspecial}(\text{'motion'},\)20,135) is studied. 
In Table 3, the PSNR, FSIM and SSIM values are reported for different values of std and PSF, and these quantities are compared with the methods in [3, 16]. It can be seen that the results of the proposed method are better than those of the other methods. The restored images are shown in Figure 8; to facilitate a more accurate comparison, a specific part of each image has been enlarged. The comparison again shows that the proposed method is more effective. ## 5 Conclusion In this paper, semi-blind image deblurring is studied. Solving this type of problem is not easy because, unlike non-blind image deblurring, there is no complete information about the Point Spread Function (PSF). First, a model for this problem based on the framelet transform and the discrete fractional-order gradient is introduced. Then, a method based on ADMM is used to solve the proposed model. The proposed method improves the restored image by incorporating the framelet transform and fractional calculus. The results of the proposed algorithm are compared with other methods, and these results confirm its effectiveness.
2305.04993
Nilpotent Residual of a Finite Group
Let $F$ be a nilpotent group acted on by a group $H$ via automorphisms and let the group $G$ admit the semidirect product $FH$ as a group of automorphisms so that $C_G(F) = 1$. We prove that the order of $\gamma_\infty(G)$, the rank of $\gamma_\infty(G)$ are bounded in terms of the orders of $\gamma_{\infty}(C_G(H))$ and $H$, the rank of $\gamma_{\infty}(C_G(H))$ and the order of $H$, respectively in cases where either $FH$ is a Frobenius group; $FH$ is a Frobenius-like group satisfying some certain conditions; or $FH=\langle \alpha,\beta\rangle$ is a dihedral group generated by the involutions $\alpha$ and $\beta$ with $F =\langle \alpha\beta\rangle$ and $H =\langle\alpha \rangle$.
Eliana Rodrigues, Emerson de Melo, Gülin Ercan
2023-05-08T18:56:53Z
http://arxiv.org/abs/2305.04993v1
# Nilpotent residual ###### Abstract. Let \(F\) be a nilpotent group acted on by a group \(H\) via automorphisms and let the group \(G\) admit the semidirect product \(FH\) as a group of automorphisms so that \(C_{G}(F)=1\). We prove that the order of \(\gamma_{\infty}(G)\), the rank of \(\gamma_{\infty}(G)\) are bounded in terms of the orders of \(\gamma_{\infty}(C_{G}(H))\) and \(H\), the rank of \(\gamma_{\infty}(C_{G}(H))\) and the order of \(H\), respectively in cases where either \(FH\) is a Frobenius group; \(FH\) is a Frobenius-like group satisfying some certain conditions; or \(FH=\langle\alpha,\beta\rangle\) is a dihedral group generated by the involutions \(\alpha\) and \(\beta\) with \(F=\langle\alpha\beta\rangle\) and \(H=\langle\alpha\rangle\). Key words and phrases:Frobenius groups, Frobenius-like groups, Dihedral groups, Automorphisms, Nilpotent residual 2020 Mathematics Subject Classification: 20D45 ## 1. Introduction Throughout all groups are finite. Let a group \(A\) act by automorphisms on a group \(G\). For any \(a\in A\), we denote by \(C_{G}(a)\) the set \(\{x\in G:x^{a}=x\}\), and write \(C_{G}(A)=\bigcap_{a\in A}C_{G}(a).\) In this paper we focus on a certain question related to the strong influence of the structure of such fixed point subgroups on the structure of \(G\), and present some new results when the group \(A\) is a Frobenius group or a Frobenius-like group or a dihedral group of automorphisms. In what follows we denote by \(A^{\#}\) the set of all nontrivial elements of \(A\), and we say that \(A\) acts coprimely on \(G\) if \((|A|,|G|)=1\). Recall that a Frobenius group \(A=FH\) with kernel \(F\) and complement \(H\) can be characterized as a semidirect product of a normal subgroup \(F\) by \(H\) such that \(C_{F}(h)=1\) for every \(h\in H^{\#}\). Prompted by Mazurov's problem 17.72 in the Kourokva Notebook [26], some attention was given to the situation where a Frobenius group \(A=FH\) acts by automorphisms on the group \(G\). In the case where the kernel \(F\) acts fixed-point-freely on \(G\), some results on the structure of \(G\) were obtained by Khukhro, Makarenko and Shumyatsky in a series of papers [8], [9], [10], [11], [12], [13], [14]. They observed that various properties of \(G\) are in a certain sense close to the corresponding properties of the fixed-point subgroup \(C_{G}(H)\), possibly also depending on \(H\). In particular, when \(FH\) is metacyclic they proved that if \(C_{G}(H)\) is nilpotent of class \(c\), then the nilpotency class of \(G\) is bounded in terms of \(c\) and \(|H|\). In addition, they constructed examples showing that the result on the nilpotency class of \(G\) is no longer true in the case of non-metacyclic Frobenius groups. However, recently in [6] it was proved that if \(FH\) is supersolvable and \(C_{G}(H)\) is nilpotent of class \(c\), then the nilpotency class of \(G\) is bounded in terms of \(c\) and \(|FH|\). Later on, as a generalization of Frobenius group the concept of a Frobenius-like group was introduced by Ercan and Guloglu in [16], and their action studied in a series of papers [18], [19],[20],[23],[24],[21]. A finite group \(FH\) is said to be Frobenius-like if it has a nontrivial nilpotent normal subgroup \(F\) with a nontrivial complement \(H\) such that \(FH/F^{\prime}\) is a Frobenius group with Frobenius kernel \(F/F^{\prime}\) and complement \(H\) where \(F^{\prime}=[F,F]\). 
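Two standard examples may help to fix these notions (they are included here purely as an illustration and are not part of the original discussion). The symmetric group \(S_{3}\cong C_{3}\rtimes C_{2}\) is a Frobenius group with kernel \(F=C_{3}\) and complement \(H=C_{2}\), since the involution inverts every nontrivial element of \(C_{3}\) and hence fixes none of them. On the other hand, \(SL(2,3)\cong Q_{8}\rtimes C_{3}\) is Frobenius-like but not Frobenius: taking \(F=Q_{8}\) and \(H=C_{3}\), one has \(F^{\prime}=Z(Q_{8})\) of order \(2\) and \(C_{F}(H)=Z(Q_{8})\neq 1\), while \(FH/F^{\prime}\cong A_{4}\) is a Frobenius group with kernel \(F/F^{\prime}\cong C_{2}\times C_{2}\) and complement \(H\). Since \(|F^{\prime}|\) and \(|H|\) are primes and \(F^{\prime}\leq C_{F}(H)\), this example satisfies the defining conditions of both of the special types introduced below.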
Several results about the properties of a finite group \(G\) admitting a Frobenius-like group of automorphisms \(FH\) aiming at restrictions on \(G\) in terms of \(C_{G}(H)\) and focusing mainly on bounds for the Fitting height and related parameters as a generalization of earlier results obtained for Frobenius groups of automorphisms; and also new theorems for Frobenius-like groups based on new representation-theoretic results. In these papers two special types of Frobenius-like groups have been handled. Namely, Frobenius-like groups \(FH\) for which \(F^{\prime}\) is of prime order and is contained in \(C_{F}(H)\); and the Frobenius-like groups \(FH\) for which \(C_{F}(H)\) and \(H\) are of prime orders, which we call Type I and Type II, respectively throughout the remainder of this paper. In [25] Shumyatsky showed that the techniques developed in [14] can be used in the study of actions by groups that are not necessarily Frobenius. He considered a dihedral group \(D=\langle\alpha,\beta\rangle\) generated by two involutions \(\alpha\) and \(\beta\) acting on a finite group \(G\) in such a manner that \(C_{G}(\alpha\beta)=1\). In particular, he proved that if \(C_{G}(\alpha)\) and \(C_{G}(\beta)\) are both nilpotent of class \(c\), then \(G\) is nilpotent and the nilpotency class of \(G\) is bounded solely in terms of \(c\). In [5], a similar result was obtained for other groups. It should also be noted that in [24] an extension of [25] about the nilpotent length obtained by proving that the nilpotent length of a group \(G\) admitting a dihedral group of automorphisms in the same manner is equal to the maximum of the nilpotent lengths of the subgroups \(C_{G}(\alpha)\) and \(C_{G}(\beta)\). Throughout we shall use the expression "\((a,b,\dots)\)-bounded" to abbreviate "bounded from above in terms of \(a,b,\dots\) only". Recall that the rank \(\mathbf{r}(G)\) of a finite group \(G\) is the minimal number \(r\) such that every subgroup of \(G\) can be generated by at most \(r\) elements. Let \(\gamma_{\infty}(G)\) denote the _nilpotent residual_ of the group \(G\), that is the intersection of all normal subgroups of \(G\) whose quotients are nilpotent. Recently, in [4], de Melo, Lima and Shumyatsky considered the case where \(A\) is a finite group of prime exponent \(q\) and of order at least \(q^{3}\) acting on a finite \(q^{\prime}\)-group \(G\). Assuming that \(|\gamma_{\infty}(C_{G}(a))|\leq m\) for any \(a\in A^{\#}\), they showed that \(\gamma_{\infty}(G)\) has \((m,q)\)-bounded order. In addition, assuming that the rank of \(\gamma_{\infty}(C_{G}(a))\) is at most \(r\) for any \(a\in A^{\#}\), they proved that the rank of \(\gamma_{\infty}(G)\) is \((m,q)\)-bounded. Later, in [3], it was proved that the order of \(\gamma_{\infty}(G)\) can be bounded by a number independent of the order of \(A\). The purpose of the present article is to study the residual nilpotent of finite groups admitting a Frobenius group, or a Frobenius-like group of Type I and Type II, or a dihedral group as a group of automorphisms. Namely we obtain the following results. **Theorem A** Let \(FH\) be a Frobenius, or a Frobenius-like group of Type I or Type II, with kernel \(F\) and complement \(H\). Suppose that \(FH\) acts on a finite group \(G\) in such a way that \(C_{G}(F)=1\). Then * \(|\gamma_{\infty}(G)|\) is bounded solely in terms of \(|H|\) and \(|\gamma_{\infty}(C_{G}(H))|\); * the rank of \(\gamma_{\infty}(G)\) is bounded in terms of \(|H|\) and the rank of \(\gamma_{\infty}(C_{G}(H))\). 
**Theorem B** Let \(D=\langle\alpha,\beta\rangle\) be a dihedral group generated by two involutions \(\alpha\) and \(\beta\). Suppose that \(D\) acts on a finite group \(G\) in such a manner that \(C_{G}(\alpha\beta)=1\). Then * \(|\gamma_{\infty}(G)|\) is bounded solely in terms of \(|\gamma_{\infty}(C_{G}(\alpha))|\) and \(|\gamma_{\infty}(C_{G}(\beta))|\); * the rank of \(\gamma_{\infty}(G)\) is bounded in terms of the rank of \(\gamma_{\infty}(C_{G}(\alpha))\) and \(\gamma_{\infty}(C_{G}(\beta))\). The paper is organized as follows. In Section 2 we list some results to which we appeal frequently. Section 3 is devoted to the proofs of two key propositions which play crucial role in proving Theorem A and Theorem B whose proofs are given in Section 4. ## 2. Preliminaries If \(A\) is a group of automorphisms of \(G\), we use \([G,A]\) to denote the subgroup generated by elements of the form \(g^{-1}g^{a}\), with \(g\in G\) and \(a\in A\). Firstly, we recall some well-known facts about coprime action, see for example [7], which will be used without any further references. **Lemma 2.1**.: _Let \(Q\) be a group of automorphisms of a finite group \(G\) such that \((|G|,|Q|)=1\). Then_ 1. \(G=C_{G}(Q)[G,Q]\)_._ 2. \(Q\) _leaves some Sylow_ \(p\)_-subgroup of_ \(G\) _invariant for each prime_ \(p\in\pi(G)\)_._ 3. \(C_{G/N}(Q)=C_{G}(Q)N/N\) _for any_ \(Q\)_-invariant normal subgroup_ \(N\) _of_ \(G\)_._ We list below some facts about the action of Frobenius and Frobenius-like groups. Throughout, a non-Frobenius Frobenius-like group is always considered under the hypothesis below. **Hypothesis*** Let \(FH\) be a non-Frobenius Frobenius-like group with kernel \(F\) and complement \(H\). Assume that a Sylow \(2\)-subgroup of \(H\) is cyclic and normal, and \(F\) has no extraspecial sections of order \(p^{2m+1}\) such that \(p^{m}+1=|H_{1}|\) for some subgroup \(H_{1}\leq H\). It should be noted that Hypothesis* is automatically satisfied if either \(|FH|\) is odd or \(|H|=2\). **Theorem 2.2**.: _Suppose that a finite group \(G\) admits a Frobenius group or a Frobenius-like group of automorphisms \(FH\) with kernel \(F\) and complement H such that \(C_{G}(F)=1\). Then \(C_{G}(H)\neq 1\) and \(\mathbf{r}(G)\) is bounded in terms of \(\mathbf{r}(C_{G}(H))\) and \(|H|\)._ **Proposition 2.3**.: _Let \(FH\) be a Frobenius, or a Frobenius-like group of Type I or Type II. Suppose that \(FH\) acts on a \(q\)-group \(Q\) for some prime \(q\) coprime to the order of \(H\) in case \(FH\) is not Frobenius. Let \(V\) be a \(kQFH\)-module where \(k\) is a field with characteristic not dividing \(|QH|.\) Suppose further that \(F\) acts fixed-point freely on the semidirect product \(VQ\). Then we have \(C_{V}(H)\neq 0\) and_ \[Ker(C_{Q}(H)\text{ on }C_{V}(H))=Ker(C_{Q}(H)\text{ on }V).\] Proof.: See [17] Proposition 2.2 when \(FH\) is Frobenius; [18] Proposition C when \(FH\) is Frobenius-like of Type I; and [22] Proposition 2.1 when \(FH\) is Frobenius-like of Type II. It can be easily checked that [17] Proposition 2.2 is valid when \(C_{Q}(F)=1\) without the coprimeness condition \((|Q|,|F|)=1\). The proof of the following theorem can be found in [25] and in [2]. **Theorem 2.4**.: _Let \(D=\langle\alpha,\beta\rangle\) be a dihedral group generated by two involutions \(\alpha\) and \(\beta\). Suppose that \(D\) acts on a finite group \(G\) in such a manner that \(C_{G}(\alpha\beta)=1\). Then_ 1. \(G=C_{G}(\alpha)C_{G}(\beta)\)_;_ 2. 
_the rank of_ \(G\) _is bounded in terms of the rank of_ \(C_{G}(\alpha)\) _and_ \(C_{G}(\beta)\) **Proposition 2.5**.: _Let \(D=\langle\alpha,\beta\rangle\) be a dihedral group generated by the involutions \(\alpha\) and \(\beta.\) Suppose that \(D\) acts on a \(q\)-group \(Q\) for some prime \(q\) and let V be a \(kQD\)-module for a field \(k\) of characteristic different from \(q\) such that the group \(F=\langle\alpha\beta\rangle\) acts fixed point freely on the semidirect product \(VQ\). If \(C_{Q}(\alpha)\) acts nontrivially on \(V\) then we have \(C_{V}(\alpha)\neq 0\) and \(Ker(C_{Q}(\alpha)\) on \(C_{V}(\alpha))=Ker(C_{Q}(\alpha)\) on \(V)\)._ Proof.: This is Proposition C in [24]. The next two results were established in [15, Lemma 1.6]. **Lemma 2.6**.: _Suppose that a group \(Q\) acts by automorphisms on a group \(G\). If \(Q=\langle q_{1},\ldots,q_{n}\rangle\), then \([G,Q]=[G,q_{1}]\cdots[G,q_{n}].\)_ **Lemma 2.7**.: _Let \(p\) be a prime, \(P\) a finite \(p\)-group and \(Q\) a \(p^{\prime}\)-group of automorphisms of \(P\)._ * _If_ \(|[P,q]|\leq m\) _for every_ \(q\in Q\)_, then_ \(|Q|\) _and_ \(|[P,Q]|\) _are_ \(m\)_-bounded._ * _If_ \(r([P,q])\leq m\) _for every_ \(q\in Q\)_, then_ \(r(Q)\) _and_ \(r([P,Q])\) _are_ \(m\)_-bounded._ We also need the following fact whose proof can be found in [1]. **Lemma 2.8**.: _Let \(G\) be a finite group such that \(\gamma_{\infty}(G)\leq F(G)\). Let \(P\) be a Sylow \(p\)-subgroup of \(\gamma_{\infty}(G)\) and \(H\) be a Hall \(p^{\prime}\)-subgroup of \(G\). Then \(P=[P,H]\)._ ## 3. Key Propositions We prove below a new proposition which studies the actions of Frobenius and Frobenius-like groups and forms the basis in proving Theorem A. **Proposition 3.1**.: _Assume that \(FH\) be a Frobenius group, or a Frobenius-like group of Type I or Type II with kernel \(F\) and complement \(H\). Suppose that \(FH\) acts on a \(q\)-group \(Q\) for some prime \(q\). Let \(V\) be an irreducible \(\mathbb{F}_{p}QFH\)-module where \(\mathbb{F}_{p}\) is a field with characteristic \(p\) not dividing \(|Q|\) such that \(F\) acts fixed-point-freely on the semidirect product \(VQ\). Additionaly, we assume that \(q\) is coprime to \(|H|\) in case where \(FH\) is not Frobenius. Then \(\mathbf{r}([V,Q])\) is bounded in terms of \(\mathbf{r}([C_{V}(H),C_{Q}(H)])\) and \(|H|\)._ Proof.: Let \(\mathbf{r}([C_{V}(H),C_{Q}(H)])=s.\) We may assume that \(V=[V,Q]\) and hence \(C_{V}(Q)=0\). By Clifford's Theorem, \(V=V_{1}\oplus\cdots\oplus V_{t}\), direct sum of of \(Q\)-homogeneous components \(V_{i}\), which are transitively permuted by \(FH\). Set \(\Omega=\{V_{1},\ldots,V_{t}\}\) and fix an \(F\)-orbit \(\Omega_{1}\) in \(\Omega\). Throughout, \(W=\Sigma_{U\in\Omega_{1}}U.\) Now, we split the proof into a sequence of steps. _(1) We may assume that \(Q\) acts faithfully on \(V\). Furthermore \(Ker(C_{Q}(H)\) on \(C_{V}(H))=Ker(C_{Q}(H)\) on \(V)=1\)._ Proof. Suppose that \(Ker(Q\) on V\()\neq 1\) and set \(\overline{Q}=Q/Ker(Q\) on V). Note that since \(C_{Q}(F)=1\), \(F\) is a Carter subgroup of \(QF\) and hence also a Carter subgroup of \(\overline{Q}F\) which implies that \(C_{\overline{Q}}(F)=1\). Notice that the equality \(\overline{C_{Q}(H)}=C_{\overline{Q}}(H)\) holds in case \(FH\) is Frobenius (see [14] Theorem 2.3). The same equality holds in case where \(FH\) is non-Frobenius due to the coprimeness condition \((q,|H|)=1.\) Then \([C_{V}(H),C_{Q}(H)]=[C_{V}(H),C_{\overline{Q}}(H)]\) and so we may assume that \(Q\) acts faithfully on \(V\). 
Notice that by Proposition 2.3 we have \[Ker(C_{Q}(H)\text{ on }C_{V}(H))=Ker(C_{Q}(H)\text{ on }V)=1\] establishing the claim. _(2) We may assume that \(Q=\langle c^{F}\rangle\) for any nonidentity element \(c\in C_{Z(Q)}(H)\) of order \(q\). In particular \(Q\) is abelian._ Proof. We obtain that \(C_{Z(Q)}(H)\neq 1\) as \(C_{Q}(F)=1\) by Proposition 2.3. Let now \(1\neq c\in C_{Z(Q)}(H)\) of order \(q\) and consider \(\langle c^{FH}\rangle=\langle c^{F}\rangle\), the minimal \(FH\)-invariant subgroup containing \(c\). Since \(V\) is an irreducible \(QFH\)-module on which \(Q\) acts faithfully we have that \(V=[V,\langle c^{F}\rangle]\). Thus we may assume that \(Q=\langle c^{F}\rangle\) as claimed. _(3) \(V=[V,c]\cdot[V,c^{f_{1}}]\cdots[V,c^{f_{n}}]\) where \(n\) is a \((s,|H|)\)-bounded number. Hence it suffices to bound \(\mathbf{r}([W,c])\)._ Proof. Notice that the group \(C_{Q}(H)\) embeds in the automorphism group of \([C_{V}(H),C_{Q}(H)]\) by step (1). Then \(C_{Q}(H)\) has \(s\)-bounded rank by Lemma 2.7. This yields by Theorem 2.2 that \(Q\) has \((s,|H|)\)-bounded rank. Thus, there exist \(f_{1}=1,\ldots,f_{n}\) in \(F\) for an \((s,|H|)\)-bounded number \(n\) such that \(Q=\langle c^{f_{1}},\ldots,c^{f_{n}}\rangle\). Now \(V=[V,c]\cdot[V,c^{f_{2}}]\cdots[V,c^{f_{n}}]=\prod_{i=1}^{n}[V,c]^{f_{i}}\) by Lemma 2.6. This shows that we need only to bound \(\mathbf{r}([V,c])\) suitably. In fact it suffices to show that \(\mathbf{r}([W,c])\) is suitably bounded as \(V=\Sigma_{h\in H}W^{h}\). _(4) \(H_{1}=Stab_{H}(\Omega_{1})\neq 1\). Furthermore the rank of the sum of members of \(\Omega_{1}\) which are not centralized by \(c\) and contained in a regular \(H_{1}\)-orbit, is suitably bounded._ Proof.: Fix \(U\in\Omega_{1}\) and set \(Stab_{F}(U)=F_{1}\). Choose a transversal \(T\) for \(F_{1}\) in \(F.\) Let \(W=\sum_{t\in T}U^{t}\) where \(T\) is a transversal for \(F_{1}\) in \(F\) with \(1\in T.\) Then we have \(V=\sum_{h\in H}W^{h}\). Notice that \([V,c]\neq 0\) by (_1_ ) which implies that \([W,c]\neq 0\) and hence \([U^{t},c]=U^{t}\) for some \(t\in T\). Without loss of generality we may assume that \([U,c]=U.\) Suppose that \(Stab_{H}(\Omega_{1})=1\). Then we also have \(Stab_{H}(U^{t})=1\) for all \(t\in T\) and hence the sum \(X_{t}=\sum_{h\in H}U^{th}\) is direct for all \(t\in T.\) Now, \(U\leq X_{1}\). It holds that \[C_{X_{t}}(H)=\{\sum_{h\in H}v^{h}\ :\ v\in U^{t}\}.\] Then \(|U|=|C_{X_{1}}(H)|=|[C_{X_{1}}(H),c]|\leq|[C_{V}(H),C_{Q}(H)]|\) implies \(\mathbf{r}(U)\leq s.\) On the other hand \(V=\bigoplus_{t\in T}X_{t}\) and \[[C_{V}(H),c]=\bigoplus\{[C_{X_{t}}(H),c]:t\in T\ \text{with}\ \ [U^{t},c]\neq 0\} \leq[C_{V}(H),C_{Q}(H)].\] In particular, \(\{t\in T:[U^{t},c]\neq 0\}\) is suitably bounded whence \(\mathbf{r}([W,c])\) is \((s,|H|)\)-bounded. Hence we may assume that \(Stab_{H}(\Omega_{1})\neq 1.\) Notice that every element of a regular \(H_{1}\)-orbit in \(\Omega_{1}\) lies in a regular \(H\)-orbit in \(\Omega\). Let \(U\in\Omega_{1}\) be contained in a regular \(H_{1}\)-orbit of \(\Omega_{1}\). Let \(X\) denote the sum of the members of the \(H\)-orbit of \(U\) in \(\Omega\), that is \(X=\bigoplus_{h\in H}U^{h}\). Then \(C_{X}(H)=\{\sum_{h\in H}v^{h}\ :\ v\in U\}\). If \([U,c]\neq 0\) then by repeating the same argument in the above paragraph we show that \(\mathbf{r}(U)\leq s\) is suitably bounded. 
On the other hand the number, say \(m\), of all \(H\)-orbits in \(\Omega\) containing a member \(U\) such that \([U,c]\neq 0\) is suitably bounded because \(m\leq\mathbf{r}([C_{V}(H),c])\leq s.\) It follows then that the rank of the sum of members of \(\Omega_{1}\) which are not centralized by \(c\) and contained in a regular \(H_{1}\)-orbit, is suitably bounded. _(5) We may assume that \(FH\) is not Frobenius._ Proof.: Assume the contrary that \(FH\) is Frobenius. Let \(H_{1}=Stab_{H}(\Omega_{1})\) and pick \(U\in\Omega_{1}\). Set \(S=Stab_{FH_{1}}(U)\) and \(F_{1}=F\cap S\). Then \(|F:F_{1}|=|\Omega_{1}|=|FH_{1}:S|\) and so \(|S:F_{1}|=|H_{1}|\). Since \((|F_{1}|,|H_{1}|)=1\), by the Schur-Zassenhaus theorem there exists a complement, say \(S_{1}\) of \(F_{1}\) in \(S\) with \(|H_{1}|=|S_{1}|\). Therefore there exists a conjugate of \(U\) which is \(H_{1}\)-invariant. There is no loss in assuming that \(U\) is \(H_{1}\)-invariant. On the other hand if \(1\neq h\in H_{1}\) and \(x\in F\) such that \(U^{xh}=U^{x}\), then \([h,x]\in Stab_{F}(U)=F_{1}\) and so \(F_{1}x=F_{1}x^{h}=(F_{1}x)^{h}\). This implies that \(F_{1}x\cap C_{F}(h)\) is nonempty. Now the Frobenius action of \(H\) on \(F\) forces that \(x\in F_{1}\). This means that for each \(x\in F\setminus F_{1}\) we have \(Stab_{H_{1}}(U^{x})=1\). Therefore \(U\) is the unique member of \(\Omega_{1}\) which is \(H_{1}\)-invariant and all the \(H_{1}\)-orbits other than \(\{U\}\) are regular. By (_4_ ), the rank of the sum of all members of \(\Omega_{1}\) other than \(U\) is is suitably bounded. In particular \(\mathbf{r}(U)\) and hence \(\mathbf{r}([W,c])\) is suitably bounded in case where \([U^{x},c]\neq 0\) for some \(x\in F\setminus F_{1}\). Thus we may assume that \(c\) is trivial on \(U^{x}\) for all \(x\in F\setminus F_{1}\). Now we have \([W,c]=[U,c]=U.\) Due to the action by scalars of the abelian group \(Q\) on \(U\), it holds that \([Q,F_{1}]\leq C_{Q}(U)\). We also know that \(c^{x}\) is trivial on \(U\) for each \(x\in F\setminus F_{1}\). Since \(C_{Q}(F)=1\), there are prime divisors of \(|F|\) different from \(q.\) Let \(F_{q^{\prime}}\) denote the \(q^{\prime}\)-Hall subgroup of \(F.\) Clearly we have \(C_{Q}(F_{q^{\prime}})=1\). Let now \(y=\prod_{f\in F_{q^{\prime}}}c^{f}\). Then we have \[1=y=(\prod_{f\in F_{1}\cap F_{q^{\prime}}}c^{f})(\prod_{f\in F_{q^{\prime}} \setminus F_{1}}c^{f})\in c^{|F_{1}\cap F_{q^{\prime}}|}C_{Q}(U).\] As a consequence \(c\in C_{Q}(U)\), because \(q\) is coprime to \(|F_{q^{\prime}}|\). This contradiction establishes the claim. _(6) We may assume that the group \(FH\) is Frobenius-like of Type II._ Proof.: On the contrary we assume that \(FH\) is Frobenius-like of Type I. By (4), we have \(H_{1}=Stab_{H}(\Omega_{1})\neq 1\). Choose a transversal \(T_{1}\) for \(H_{1}\) in \(H.\) Now \(V=\bigoplus_{h\in T_{1}}W^{h}.\) Also we can guarantee the existence of a conjugate of \(U\) which is \(H_{1}\)-invariant by means of the Schur-Zassenhaus Theorem as in (5). There is no loss in assuming that \(U\) is \(H_{1}\)-invariant. Set now \(Y=\Sigma_{x\in F^{\prime}}U^{x}\) and \(F_{2}=Stab_{F}(Y)\) and \(F_{1}=Stab_{F}(U)\). Clearly, \(F_{2}=F^{\prime}F_{1}\) and \(Y\) is \(H_{1}\)-invariant. Notice that for all nonidentity \(h\in H\), we have \(C_{F}(h)\leq F^{\prime}\leq F_{2}\). Assume first that \(F=F_{2}\). This forces that we have \(V=Y\). Clearly, \(Y\neq U\), that is \(F^{\prime}\not\leq F_{1}\), because otherwise \(Q=[Q,F]=1\) due to the scalar action of the abelian group \(Q\) on \(U\). 
So \(F^{\prime}\cap F_{1}=1\) which implies that \(|F:F_{1}|\) is a prime. Then \(F_{1}\unlhd F\) and \(F^{\prime}\leq F_{1}\) which is impossible. Therefore \(F\neq F_{2}\). If \(1\neq h\in H\) and \(t\in F\) such that \(Y^{th}=Y^{t}\) then \([h,t]\in F_{2}\). Now, \(F_{2}t=F_{2}t^{h}=(F_{2}t)^{h}\) and this implies the existence of an element in \(F_{2}t\cap C_{F}(h)\). Since \(C_{F}(h)\leq F^{\prime}\leq F_{2}\) we get \(t\in F_{2}\). In particular, for each \(t\in F\setminus F_{2}\) we have \(Stab_{H}(Y^{t})=1\). Let \(S\) be a transversal for \(F_{2}\) in \(F\). For any \(t\in S\setminus F_{2}\) set \(Y_{t}=Y^{t}\) and consider \(Z_{t}=\Sigma_{h\in H}{Y_{t}}^{h}\). Notice that \(V=Y\oplus\bigoplus_{t\in S\setminus F_{2}}Z_{t}\). As the sum \(Z_{t}\) is direct we have \[C_{Z_{t}}(H)=\{\sum_{h\in H}v^{h}\ :\ v\in Y_{t}\}\] with \(|C_{Z_{t}}(H)|=|Y_{t}|.\) Then \(\mathbf{r}([Y_{t},c])=\mathbf{r}([C_{Z_{t}}(H),c])\leq s\) for each \(t\in S\setminus F_{2}\) with \([Y_{t},c]\neq 0\). On the other hand, \[\Sigma\{\mathbf{r}([C_{Z_{t}}(H),c]):t\in S\ \text{with}\ [Y_{t},c]\neq 0\} \leq\mathbf{r}([C_{V}(H),c])\leq s\] whence \(|\{t\in S\setminus F_{2}:[Y_{t},c]\neq 0\}|\) is suitably bounded. So the claim is established if there exists \(t\in S\setminus F_{2}\) such that \([Y_{t},c]\neq 0,\) since we have \(V=Y\oplus\bigoplus_{t\in S\setminus F_{2}}Z_{t}\). Thus we may assume that \(c\) is trivial on \(\bigoplus_{t\in S\setminus F_{2}}Z_{t}\) and hence \([V,c]=[Y,c].\) There are two cases now: We have either \(F^{\prime}\cap F_{1}=1\) or \(F^{\prime}\leq F_{1}.\) First assume that \(F^{\prime}\leq F_{1}.\) Then we get \(F_{1}=F_{2}\) because \(F_{2}=F^{\prime}F_{1}.\) Now \(U=Y.\) Due to the action by scalars of the abelian group \(Q\) on \(U\), it holds that \([Q,F_{1}]\leq C_{Q}(U)\). From this point on we can proceed as in the proof of step \((\tilde{5})\) and observe that \(C_{Q}(F_{q^{\prime}})=1\). Letting now \(y=\prod_{f\in F_{q^{\prime}}}c^{f}\), we have \[1=y=(\prod_{f\in F_{1}\cap F_{q^{\prime}}}c^{f})(\prod_{f\in F_{q^{\prime}} \setminus F_{1}}c^{f})\in c^{|F_{1}\cap F_{q^{\prime}}|}C_{Q}(U).\] implying that \(c\in C_{Q}(U)\), because \(q\) is coprime to \(|F_{q^{\prime}}|\). Thus we have \(F_{1}\cap F^{\prime}=1\). First assume that \(H_{1}=H\). Then \(Y\) is \(H\)-invariant and \(F_{1}H\) is a Frobenius group. Note that \(C_{U}(F_{1})=1\) as \(C_{V}(F)=1\), and hence \(C_{Y}(F_{1})=1\) since \(F^{\prime}\leq Z(F).\) We consider now the action of \(QF_{1}H\) on \(Y\) and the fact that \(\mathbf{r}([C_{Y}(H),C_{Q}(H)])\leq s.\) Then step \((\tilde{5})\), we obtain that \(\mathbf{r}(Y)=\mathbf{r}([Y,Q])\) is \((s,|H|)\)-bounded. Next assume that \(H_{1}\neq H.\) Choose a transversal for \(H_{1}\) in \(H\) and set \(Y_{1}=\Sigma_{h\in T_{1}}Y^{h}\). Clearly this sum is direct and hence \[C_{Y_{1}}(H)=\{\sum_{h\in T_{1}}v^{h}\ :\ v\in Y\}\] with \(|[C_{Y_{1}}(H),c]|=|[Y,c]|.\) Then \(\mathbf{r}([Y,c])=\mathbf{r}([C_{Y_{1}}(H),c])\leq s\) establishing claim \((\tilde{6})\). _(7) The proposition follows._ Proof. From now on \(FH\) is a Frobenius-like group of Type II, that is, \(H\) and \(C_{F}(H)\) are of prime orders. By step \((\ref{eq:1})\) we have \(H=H_{1}=Stab_{H}(\Omega_{1})\) since \(|H|\) is a prime. Now \(V=W\). We may also assume by the Schur-Zassenhaus theorem as in the previous steps that there is an \(H\)-invariant element, say \(U\) in \(\Omega\). Let \(T\) be a transversal for \(F_{1}=Stab_{F}(U)\) in \(F\). 
Then \(F=\bigcup_{t\in T}F_{1}t\) implies \(V=\bigoplus_{t\in T}U^{t}.\) It should also be noted that we have \(|\{t\in T:[U^{t},c]\neq 0\}|\) is suitably bounded as \[[C_{V}(H),c]=\bigoplus\{[C_{X_{t}}(H),c]:t\in T\ \text{with}\ \ [U^{t},c]\neq 0\} \leq[C_{V}(H),C_{Q}(H)]\] where \(X_{t}=\bigoplus_{h\in H}U^{th}.\) Let \(X\) be the sum of the components of all regular \(H\)-orbits on \(\Omega\), and let \(Y\) denote the sum of all \(H\)-invariant elements of \(\Omega\). Then \(V=X\oplus Y.\) Suppose that \(U^{th}=U^{t}\) for \(t\in T\) and \(1\neq h\in H\). Now \([t,h]\in F_{1}\) and so the coset \(F_{1}t\) is fixed by \(H\). Since the orders of \(F\) and \(H\) are relatively prime we may assume that \(t\in C_{F}(H)\). Conversely for each \(t\in C_{F}(H)\), \(U^{t}\) is \(H\)-invariant. Hence the number of components in \(Y\) is \(|T\cap C_{F}(H)|=|C_{F}(H):C_{F_{1}}(H)|\) and so we have either \(C_{F}(H)\leq F_{1}\) or not. If \(C_{F}(H)\not\leq F_{1}\) then \(C_{F_{1}}(H)=1\) whence \(F_{1}H\) is Frobenius group acting on \(U\) in such a way that \(C_{U}(F_{1})=1\). Then \(\mathbf{r}(U)\) is \((s,|H|)\)-bounded by step (5) since \(\mathbf{r}([C_{U}(H),C_{Q}(H)])\leq s\) holds. This forces that \(\mathbf{r}([V,c])\) is bounded suitably and hence the claim is established. Thus we may assume that \(C_{F}(H)\leq F_{1}.\) Then \(Y=U\) is the unique \(H\)-invariant \(Q\)-homogeneous component. If \([U^{t},c]\neq 0\) for some \(t\in F\setminus F_{1}\) we can bound \(\mathbf{r}(U)\) and hence \(\mathbf{r}([V,c])\) suitably. Thus we may assume that \(c\) is trivial on \(U^{t}\) for each \(t\in F\setminus F_{1}.\) Due to the action of the abelian group \(Q\) on \(U\), it holds that \([Q,F_{1}]\leq C_{Q}(U)\). From this point on we can proceed as in the proof of step (5) and observe that \(C_{Q}(F_{q^{\prime}})=1\). Letting now \(y=\prod_{f\in F_{q^{\prime}}}c^{f}\), we have \[1=y=(\prod_{f\in F_{1}\cap F_{q^{\prime}}}c^{f})(\prod_{f\in F_{q^{\prime}} \setminus F_{1}}c^{f})\in c^{|F_{1}\cap F_{q^{\prime}}|}C_{Q}(U).\] implying that \(c\in C_{Q}(U)\), because \(q\) is coprime to \(|F_{q^{\prime}}|\). This final contradiction completes the proof of Proposition 3.1. The next proposition studies the action of a dihedral group of automorphisms and is essential in proving Theorem B. Proposition 3.2.: _Let \(D=\langle\alpha,\beta\rangle\) be a dihedral group generated by two involutions \(\alpha\) and \(\beta\). Suppose that \(D\) acts on a \(q\)-group \(Q\) for some prime \(q\). Let \(V\) be an irreducible \(\mathbb{F}_{p}QD\)-module where \(\mathbb{F}_{p}\) is a field with characteristic \(p\) not dividing \(|Q|\). Suppose that \(C_{VQ}(F)=1\) where \(F=\langle\alpha\beta\rangle\). If \(max\{\mathbf{r}([C_{V}(\alpha),C_{Q}(\alpha)]),\mathbf{r}([C_{V}(\beta),C_{Q} (\beta)])\}\leq s\), then \(\mathbf{r}([V,Q])\) is \(s\)-bounded._ Proof.: We set \(H=\langle\alpha\rangle\). So \(D=FH\). By Lemma 2.6 and Theorem 2.4, we have \([V,Q]=[V,C_{Q}(\alpha)][V,C_{Q}(\beta)]\). Then it is sufficient to bound the rank of \([V,C_{Q}(H)]\). Following the same steps as in the proof of Proposition 3.1 by replacing Proposition 2.3 by Proposition 2.4, we observe that \(Q\) acts faithfully on \(V\) and \(Q=\langle c^{F}\rangle\) is abelian with \(c\in C_{Z(Q)}(H)\) of order \(q\). Furthermore \(Ker(C_{Q}(H)\) on \(C_{V}(H))=Ker(C_{Q}(H)\) on \(V)=1\). Note that it suffices to bound \(\mathbf{r}([V,c])\) suitably. 
Let \(\Omega\) denote the set of \(Q\)-homogeneous components of the irreducible \(QD\)-module \(V.\) Let \(\Omega_{1}\) be an \(F\)-orbit of \(\Omega\) and set \(W=\Sigma_{U\in\Omega_{1}}U\) Then we have \(V=W+W^{\alpha}\). Suppose that \(W^{\alpha}\neq W\). Then for any \(U\in\Omega_{1}\) we have \(Stab_{H}(U)=1\). Let \(T\) be a tranversal for \(Stab_{F}(U)=F_{1}\) in \(F\). It holds that \(V=\Sigma_{t\in T}X_{t}\) where \(X_{t}=U^{t}+U^{t\alpha}.\) Now \([V,c]=\Sigma_{t\in T}[X_{t},c]\) and \(C_{V}(H)=\Sigma_{t\in T}C_{X_{t}}(H)\) where \(C_{X_{t}}(H)=\{w+w^{\alpha}:w\in U^{t}\}\). Since \([V,c]\neq 0\) there exists \(t\in T\) such that \([U^{t},c]\neq 0\), that is \([U^{t},c]=U^{t}.\) Then \([C_{X_{t}}(H),c]=C_{X_{t}}(H).\) Since \(\mathbf{r}([C_{V}(H),C_{Q}(H)])\leq s\) we get \(\mathbf{r}(U)=\mathbf{r}(C_{X_{t}}(H))\leq s\). Furthermore it follows that \(|\{t\in T:[U^{t},c]\neq 0\}|\) is \(s\)-bounded and as a consequence \(\mathbf{r}([V,c])\) is suitably bounded. Thus we may assume that \(W^{\alpha}=W\) which implies that \(\Omega_{1}=\Omega\) and \(H\) fixes an element, say \(U\), of \(\Omega\) as desired. Let \(U^{t}\in\Omega\) be \(H\)-invariant. Then \([t,\alpha]\in F_{1}.\) On the other hand \(t^{-1}t^{\alpha}=t^{-2}\) since \(\alpha\) inverts \(F\). So \(F_{1}t\) is an element of \(F/F_{1}\) of order at most \(2\) which implies that the number of \(H\)-invariant elements of \(\Omega\) is at most \(2\). Let now \(Y\) be the sum of all \(H\)-invariant elements of \(\Omega\). Then \(V=Y\oplus\bigoplus_{i=1}^{m}X_{i}\) where \(X_{1},\ldots X_{m}\) are the sums of elements in \(H\)-orbits of length \(2.\) Let \(X_{i}=U_{i}\oplus U_{i}^{\alpha}\). Notice that if \([U_{i},c]\neq 0\) for some \(i\), then we obtain \(\mathbf{r}(U)=\mathbf{r}(U_{i})\leq s\) by a similar argument as above. On the other hand we observe that the number of \(i\) for which \([U_{i},c]\neq 0\) is \(s\)-bounded by the the hypothesis that \(\mathbf{r}([C_{V}(H),c])\leq s\). It follows now that \(\mathbf{r}([V,c])\) is suitably bounded in case where \([U_{i},c]\neq 0\) for some \(i\). Thus we may assume that \(c\) centralizes \(\bigoplus_{i=1}^{m}X_{i}\) and that \([U,c]=U\). Due to the scalar action by scalars of the abelian group \(Q\) on \(U\), it holds that \([Q,F_{1}]\leq C_{Q}(U)\). As \(F_{1}\unlhd FH\), we have \([Q,F_{1}]\leq C_{Q}(V)=1.\) Clearly we have \(C_{Q}(F_{q^{\prime}})=1\) where \(F_{q^{\prime}}\) denotes the Hall \(q^{\prime}\)-part of \(F\) whose existence is guaranteed by the fact that \(C_{Q}(F)=1.\) Let now \(y=\prod_{f\in F_{q^{\prime}}}c^{f}\). Then we have \[1=y=(\prod_{f\in F_{1}\cap F_{q^{\prime}}}c^{f})(\prod_{f\in F_{q^{\prime}} \setminus F_{1}}c^{f})\in c^{|F_{1}\cap F_{q^{\prime}}|}C_{Q}(U).\] As a consequence \(c\in C_{Q}(U)\), because \(q\) is coprime to \(|F_{q^{\prime}}|\). This contradiction completes the proof of Proposition 3.2. ## 4. Proofs of theorems Firstly, we shall give a detailed proof for Theorem A part (b). The proof of Theorem A (a) can be easily obtained by just obvious modifications of the proof of part (b). First, we assume that \(G=PQ\) where \(P\) and \(Q\) are \(FH\)-invariant subgroups such that \(P\) is a normal \(p\)-subgroup for a prime \(p\) and \(Q\) is a nilpotent \(p^{\prime}\)-group with \(|[C_{P}(H),C_{Q}(H)]|=p^{s}\). We shall prove that \(\mathbf{r}(\gamma_{\infty}(G))\) is \(((s,|H|)\)-bounded. Clearly \(\gamma_{\infty}(G)=[P,Q]\). 
Consider an unrefinable \(FH\)-invariant normal series \[P=P_{1}>P_{2}>\cdots>P_{k}>P_{k+1}=1.\] Note that its factors \(P_{i}/P_{i+1}\) are elementary abelian. Let \(V=P_{k}\). Since \(C_{V}(Q)=1\), we have that \(V=[V,Q]\). We can also assume that \(Q\) acts faithfully on \(V\). Proposition 3.1 yields that \(\mathbf{r}(V)\) is \((s,|H|)\)-bounded. Set \(S_{i}=P_{i}/P_{i+1}\). If \([C_{S_{i}}(H),C_{Q}(H)]=1\), then \([S_{i},Q]=1\) by Proposition 2.3. Since \(C_{P}(Q)=1\) we conclude that each factor \(S_{i}\) contains a nontrivial image of an element of \([C_{P}(H),C_{Q}(H)]\). This forces that \(k\leq s\). Then we proceed by induction on \(k\) to obtain that \(\mathbf{r}([P,Q])\) is an \((s,|H|)\)-bounded number, as desired. Let \(F(G)\) denote the Fitting subgroup of a group \(G\). Write \(F_{0}(G)=1\) and let \(F_{i+1}(G)\) be the inverse image of \(F(G/F_{i}(G))\). As is well known, when \(G\) is soluble, the least number \(h\) such that \(F_{h}(G)=G\) is called the Fitting height \(h(G)\) of \(G\). Let now \(r\) be the rank of \(\gamma_{\infty}(C_{G}(H))\). Then \(C_{G}(H)\) has \(r\)-bounded Fitting height (see for example Lemma 1.4 of [15]) and hence \(G\) has \((r,|H|)\)-bounded Fitting height. We proceed by induction on \(h(G)\). First, we consider the case where \(h(G)=2\). Let \(P\) be a Sylow \(p\)-subgroup of \(\gamma_{\infty}(G)\) and \(Q\) an \(FH\)-invariant Hall \(p^{\prime}\)-subgroup of \(G\). Then, by the preceding paragraphs and Lemma 2.8, the rank of \(P=[P,Q]\) is \((r,|H|)\)-bounded and so the rank of \(\gamma_{\infty}(G)\) is \((r,|H|)\)-bounded. Assume next that \(h(G)>2\) and let \(N=F_{2}(G)\) be the second term of the Fitting series of \(G\). It is clear that the Fitting height of \(G/\gamma_{\infty}(N)\) is \(h-1\) and \(\gamma_{\infty}(N)\leq\gamma_{\infty}(G)\). Hence, by induction, \(\gamma_{\infty}(G)/\gamma_{\infty}(N)\) has \((r,|H|)\)-bounded rank. As a consequence, it holds that \[\mathbf{r}(\gamma_{\infty}(G))\leq\mathbf{r}(\gamma_{\infty}(G)/\gamma_{ \infty}(N))+\mathbf{r}(\gamma_{\infty}(N))\] completing the proof of Theorem A(b). The proof of Theorem B can be obtained directly as in the above argument by replacing Proposition 3.1 by Proposition 3.2 and Proposition 2.3 by Proposition 2.5.
2302.11487
Improving Model Choice in Classification: An Approach Based on Clustering of Covariance Matrices
This work introduces a refinement of the Parsimonious Model for fitting a Gaussian Mixture. The improvement is based on the consideration of clusters of the involved covariance matrices according to a criterion, such as sharing Principal Directions. This and other similarity criteria that arise from the spectral decomposition of a matrix are the bases of the Parsimonious Model. We show that such groupings of covariance matrices can be achieved through simple modifications of the CEM (Classification Expectation Maximization) algorithm. Our approach leads to propose Gaussian Mixture Models for model-based clustering and discriminant analysis, in which covariance matrices are clustered according to a parsimonious criterion, creating intermediate steps between the fourteen widely known parsimonious models. The added versatility not only allows us to obtain models with fewer parameters for fitting the data, but also provides greater interpretability. We show its usefulness for model-based clustering and discriminant analysis, providing algorithms to find approximate solutions verifying suitable size, shape and orientation constraints, and applying them to both simulation and real data examples.
David Rodríguez-Vítores, Carlos Matrán
2023-02-22T16:43:30Z
http://arxiv.org/abs/2302.11487v2
# Improving Model Choice in Classification: An Approach Based on Clustering of Covariance Matrices. ###### Abstract This work introduces a refinement of the Parsimonious Model for fitting a Gaussian Mixture. The improvement is based on the consideration of groupings of the covariance matrices according to a criterion, such as sharing Principal Directions. This and other similarity criteria that arise from the spectral decomposition of a matrix are the bases of the Parsimonious Model. The classification can be achieved with simple modifications of the CEM (Classification Expectation Maximization) algorithm, using in the M step suitable estimation methods known for parsimonious models. This approach leads to propose Gaussian Mixture Models for model-based clustering and discriminant analysis, in which covariance matrices are clustered according to a parsimonious criterion, creating intermediate steps between the fourteen widely known parsimonious models. The added versatility not only allows us to obtain models with fewer parameters for fitting the data, but also provides greater interpretability. We show its usefulness for model-based clustering and discriminant analysis, providing algorithms to find approximate solutions verifying suitable size, shape and orientation constraints, and applying them to both simulation and real data examples. ## 1 Introduction In this paper, we introduce methodological applications arising of cluster analysis of covariance matrices. Throughout, we will show that appropriate clustering criteria on these objects provide useful tools in the analysis of classic problems in Multivariate Analysis. The chosen framework is that of multivariate classification under a Gaussian Mixture Model, a setting where a suitable reduction of the involved parameters is a fundamental goal leading to the Parsimonious Model. We focus on this hierarchized model, designed to explain data with a minimum number of parameters, by introducing intermediate categories associated with clusters of covariance matrices. Gaussian Mixture Models approaches to discriminant and cluster analysis are well-established and powerful tools in multivariate statistics. For a fixed number \(k\), both methods aim to fit k multivariate Gaussian distributed components to a data set in \(\mathbb{R}^{d}\), with the key difference that labels providing the source group of the data are known (supervised classification) or unknown (unsupervised classification). In the supervised problem, we handle a data set with \(N\) observations \(y_{1},\ldots,y_{N}\) on \(\mathbb{R}^{d}\) and associated la bels \(z_{i,j},i=1,\ldots,N\), \(j=1,\ldots,k\), where \(z_{i,j}=1\) if the observation \(y_{j}\) belongs to the group \(i\) and \(0\) otherwise. Denoting by \(\phi(\cdot|\mu,\Sigma)\) the density of a multivariate Gaussian distribution on \(\mathbb{R}^{d}\) with mean \(\mu\) and covariance matrix \(\Sigma\), we seek to maximize the complete log-likelihood function \[CL\Big{(}\boldsymbol{\pi},\boldsymbol{\mu},\boldsymbol{\Sigma}\Big{)}=\sum_{j =1}^{N}\sum_{i=1}^{k}z_{i,j}\log\Biggl{(}\pi_{i}\phi(y_{j}|\mu_{i},\Sigma_{i}) \Biggr{)}, \tag{1}\] with respect to the weights \(\boldsymbol{\pi}=(\pi_{1},\ldots,\pi_{k})\) with \(0\leq\pi_{i}\leq 1,\ \sum_{i=1}^{k}\pi_{i}=1\), the means \(\boldsymbol{\mu}=(\mu_{1},\ldots,\mu_{k})\) and the covariance matrices \(\boldsymbol{\Sigma}=(\Sigma_{1},\ldots,\Sigma_{k})\). 
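As a small illustration of the supervised criterion (1), the following Python sketch evaluates the complete log-likelihood for given labels; the function name and data layout are illustrative assumptions, not part of the original text.

```python
import numpy as np
from scipy.stats import multivariate_normal

def complete_log_likelihood(y, z, pi, mu, Sigma):
    # Complete log-likelihood (1): sum_j sum_i z[i, j] * log( pi[i] * phi(y[j] | mu[i], Sigma[i]) ).
    # y: (N, d) data, z: (k, N) indicator labels, pi: (k,), mu: (k, d), Sigma: (k, d, d).
    k = z.shape[0]
    total = 0.0
    for i in range(k):
        log_dens = multivariate_normal(mean=mu[i], cov=Sigma[i]).logpdf(y)  # length-N vector
        total += np.sum(z[i] * (np.log(pi[i]) + log_dens))
    return total
```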
In the unsupervised problem the labels \(z_{i,j}\) are unknown, and fitting the model involves the maximization of the log-likelihood function \[L\Big{(}\boldsymbol{\pi},\boldsymbol{\mu},\boldsymbol{\Sigma}\Big{)}=\sum_{j =1}^{N}\log\Biggl{(}\sum_{i=1}^{k}\pi_{i}\phi(y_{j}|\mu_{i},\Sigma_{i})\Biggr{)}\, \tag{2}\] with respect to the same parameters. This maximization is more complex, and it is usually performed via the EM algorithm (Dempster et al., 1977), where we repeat iteratively the following two steps. The E step, which consists in computing the expected values of the unobserved variables \(z_{i,j}\) given the current parameters, and the M step, in which we are looking for the parameters maximizing the complete log-likelihood (1) for the values \(z_{i,j}\) computed in the E step. Therefore, both model-based techniques require the maximization of (1), for which optimal values of the weights and the mean are easily computed: \[n_{i}=\sum_{j=1}^{N}z_{i,j}\,\ \pi_{i}=\frac{n_{i}}{N}\,\ \mu_{i}=\frac{\sum_{j =1}^{N}z_{i,j}\,y_{j}}{n_{i}}. \tag{3}\] With these optimal values, if we denote \(S_{i}=(1/n_{i})\sum_{j=1}^{N}z_{i,j}(y_{j}-\mu_{i})(y_{j}-\mu_{i})^{T}\), the problem of maximizing (1) with respect to \(\Sigma_{1},\ldots,\Sigma_{k}\) is equivalent to the problem of maximizing \[(\Sigma_{1},\ldots,\Sigma_{k})\mapsto\sum_{i=1}^{k}\ \ \log\Bigl{(}W_{d}\bigl{(}n_{i}S_{i}|n_{i},\Sigma_{i}\bigr{)} \Bigr{)}\, \tag{4}\] where \(W_{d}(\,\cdot\,|n_{i},\Sigma_{i})\) is the d-dimensional Wishart distribution with parameters \(n_{i},\Sigma_{i}\). For even moderate dimension \(d\), the large number of involved parameters in relation with the size of the data set could result in a poor behavior of standard unrestricted methods. In order to improve the solutions, regularization techniques are often invoked. In particular, many authors have proposed estimating the maximum likelihood parameters under some additional constraints on the covariance matrices \(\Sigma_{1},\ldots,\Sigma_{k}\), which lead us to solve the maximization of (4) under these constraints. Between these proposals, a prominent place is occupied by the so called **Parsimonious Model**, a broad set of hierarchized constraints capable of adapting to the several conceptual situations that may occur in practice. A common practice in multivariate statistics consists in assuming that covariance matrices share a common part of their structure. For example, if \(\Sigma_{1}=\ldots=\Sigma_{k}=I_{d}\), the clustering method described above gives just the k-means. If we assume common covariance matrices \(\Sigma_{1}=\ldots=\Sigma_{k}=\Sigma\), the procedure described above coincides with linear discriminant analysis (LDA) in the supervised case, and with the method proposed in Friedman and Rubin (1967) in the unsupervised case. General theory to organize these relationships between covariance matrices is based on the spectral decomposition, beginning with the analysis of Common Principal Components (Flury, 1984, 1988). In the discriminant analysis setting, the use of the spectral decomposition was first proposed in Flury et al. (1994), and in the clustering setting in Banfield and Raftery (1993). The term "Parsimonious model" and the fourteen levels given in Table 1 were introduced in Celeux and Govaert (1995) for the clustering setting and later, in Bensmail and Celeux (1996), for the discriminant setup. 
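Returning to the maximization of (1), the closed-form quantities of (3) and the scatter matrices \(S_{i}\) entering (4) can be sketched as follows. This is a minimal NumPy illustration (with illustrative names) of the unconstrained building blocks; fitting a parsimonious level then replaces the unconstrained covariance update by the constrained maximization in (5).

```python
import numpy as np

def m_step_moments(y, z):
    # Closed-form quantities of (3) and the scatter matrices entering (4):
    # n_i, pi_i, mu_i and S_i = (1/n_i) sum_j z[i, j] (y_j - mu_i)(y_j - mu_i)^T.
    # z may contain 0/1 labels (supervised case) or E-step posterior weights.
    k, N = z.shape
    d = y.shape[1]
    n = z.sum(axis=1)
    pi = n / N
    mu = (z @ y) / n[:, None]
    S = np.empty((k, d, d))
    for i in range(k):
        yc = y - mu[i]
        S[i] = (z[i][:, None] * yc).T @ yc / n[i]
    return n, pi, mu, S
```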
Given a positive definite covariance matrix \(\Sigma_{i}\), the spectral decomposition of reference is \[\Sigma_{i}=\gamma_{i}\beta_{i}\Lambda_{i}\beta_{i}^{T}\,\] where \(\gamma_{i}=\det(\Sigma_{i})^{1/d}>0\) governs the size of the groups, \(\Lambda_{i}\) is a diagonal matrix with positive entries and determinant equal to \(1\) that controls the shape, and \(\beta_{i}\) is an orthogonal matrix that controls the orientation. Given \(k\) covariance matrices \(\Sigma_{1},\ldots,\Sigma_{k}\), the spectral decomposition enables to establish the fourteen different parsimonious levels in Table 1, allowing differences or not in the parameters associated to size, shape and orientation. To fit a Gaussian mixture model under a parsimonious level \(\mathscr{M}\) in the Table 1, we must face the maximization of (4) under the parsimonious restriction. That is, we should find \[(\hat{\Sigma}_{1},\ldots,\hat{\Sigma}_{k})=\\ \operatorname*{argmax}_{(\Sigma_{1},\ldots,\Sigma_{k})\in\mathscr{M} }\;\sum_{i=1}^{k}\;\log\Bigl{(}W_{d}\bigl{(}n_{i}S_{i}|n_{i},\Sigma_{i}\bigr{)} \Bigr{)}, \tag{5}\] where we say that \((\Sigma_{1},\ldots,\Sigma_{k})\in\mathscr{M}\) if the k covariance matrices verify the level. We should remark that the Common Principal Components model (Flury, 1984, 1988) plays a key role in this hierarchy, which in any case is based on simple geometric interpretations. Restrictions are also often used to solve a well-known problem that appears in model-based clustering, the unboundedness of the log-likelihood function (2). With no additional constraints, the problem of maximizing (2) is not even well defined, a fact that could lead to uninteresting spurious solutions, where some groups would be associated to a few, almost collinear, observations. Although we will also use these restrictions, we will not discuss on this line in this work. A recent review of approaches for dealing with this problem can be found in Garcia-Escudero et al. (2017). The aim of this paper is to introduce a generalization of equation (5), that allows us to give a likelihood-based classification of the involved covariance matrices and to create intermediate parsimonious levels. Given a parsimonious level \(\mathscr{M}\) and \(G\in\{1,\ldots,k\}\), a classification of the k sample covariance matrices \(S_{1},\ldots,S_{k}\) is given by a vector of indexes \(\mathbf{\hat{P}}\in\{1,\ldots,G\}^{k}\), and the covariance matrices \(\mathbf{\hat{\Sigma}}=(\hat{\Sigma}_{1},\ldots,\hat{\Sigma}_{k})\) verifying \[\bigl{(}\mathbf{\hat{P}},\mathbf{\hat{\Sigma}}\bigr{)}=\; \operatorname*{argmax}_{\boldsymbol{P},\mathbf{\Sigma}}\Biggl{(}\sum_{g=1}^{G} \max_{\{\Sigma_{i}:P_{i}=g\}\in\mathscr{M}}\\ \sum_{i:P_{i}=g}\log\Bigl{(}W_{d}(n_{i}S_{i}|n_{i},\Sigma_{i}) \Bigr{)}\Biggr{)}. \tag{6}\] Solving this equation will allow us to fit Gaussian Mixture Models with intermediate parsimonious levels, in which the common parameters of a parsimonious level will be shared within each of the \(G\) classes given by the vector of indexes \(\mathbf{\hat{P}}\), but varying between the different classes. For example, we could obtain, say, two classes of covariance matrices that share their principal directions within each class, resulting in a better interpretation of the final classification and allowing a considerable reduction of the number of parameters to be estimated. 
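The size, shape and orientation decomposition underlying Table 1 is easy to compute from an eigendecomposition. The following NumPy sketch (with illustrative names) returns \(\gamma=\det(\Sigma)^{1/d}\), the normalized diagonal matrix \(\Lambda\) with unit determinant, and the orthogonal matrix \(\beta\).

```python
import numpy as np

def size_shape_orientation(Sigma):
    # Decompose Sigma = gamma * beta @ Lambda @ beta.T with gamma = det(Sigma)^(1/d),
    # Lambda diagonal with unit determinant (shape) and beta orthogonal (orientation).
    d = Sigma.shape[0]
    eigval, beta = np.linalg.eigh(Sigma)
    order = np.argsort(eigval)[::-1]            # decreasing eigenvalues
    eigval, beta = eigval[order], beta[:, order]
    gamma = np.prod(eigval) ** (1.0 / d)
    Lam = np.diag(eigval / gamma)               # det(Lam) = 1
    return gamma, Lam, beta

Sigma = np.array([[4.0, 1.0], [1.0, 2.0]])
gamma, Lam, beta = size_shape_orientation(Sigma)
assert np.allclose(gamma * beta @ Lam @ beta.T, Sigma)
assert np.isclose(np.linalg.det(Lam), 1.0)
```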
We will use these ideas for fitting Gaussian Mixture Models in discriminant analysis and cluster analysis, also imposing the determinant and shape constraints of Garcia-Escudero et al. (2020) to avoid the unboundedness of the objective function. We will analyze some examples where the proposed models result in fewer parameters and more interpretability when fitting the data, being better suited than the 14 parsimonious models.

\begin{table} \begin{tabular}{c c c c c c} \hline Name & \(\Sigma_{i}\) & Size & Shape & Orientation & Parameters \\ \hline EII & \(\lambda I\) & Equal & Spherical & - & 1 \\ VII & \(\lambda_{i}I\) & Variable & Spherical & - & k \\ EEI & \(\lambda\Lambda\) & Equal & Equal & Canonical & 1 + (d-1) \\ EVI & \(\lambda\Lambda_{i}\) & Equal & Variable & Canonical & 1+k(d-1) \\ VEI & \(\lambda_{i}\Lambda\) & Variable & Equal & Canonical & k + (d-1) \\ VVI & \(\lambda_{i}\Lambda_{i}\) & Variable & Variable & Canonical & k + k(d-1) \\ EEE & \(\lambda\beta\Lambda\beta^{T}\) & Equal & Equal & Equal & 1+(d-1)+d(d-1)/2 \\ EEV & \(\lambda\beta_{i}\Lambda\beta^{T}_{i}\) & Equal & Equal & Variable & 1+(d-1)+kd(d-1)/2 \\ EVE & \(\lambda\beta\Lambda_{i}\beta^{T}\) & Equal & Variable & Equal & 1+k(d-1)+d(d-1)/2 \\ VEE & \(\lambda_{i}\beta\Lambda\beta^{T}\) & Variable & Equal & Equal & k+(d-1)+d(d-1)/2 \\ VVE & \(\lambda_{i}\beta\Lambda_{i}\beta^{T}\) & Variable & Variable & Equal & k+k(d-1)+d(d-1)/2 \\ EVV & \(\lambda\beta_{i}\Lambda_{i}\beta^{T}_{i}\) & Equal & Variable & Variable & 1+k(d-1)+kd(d-1)/2 \\ VEV & \(\lambda_{i}\beta_{i}\Lambda\beta^{T}_{i}\) & Variable & Equal & Variable & k+(d-1)+kd(d-1)/2 \\ VVV & \(\lambda_{i}\beta_{i}\Lambda_{i}\beta^{T}_{i}\) & Variable & Variable & Variable & k(1+(d-1)+d(d-1)/2) \\ \hline \end{tabular} \end{table} Table 1: Parsimonious levels based on the spectral decomposition of \(\Sigma_{1},\ldots,\Sigma_{k}\).

We point out that, as is becoming usual in the literature, to carry out the comparisons we will use the Bayesian Information Criterion (BIC). It has been noticed by many authors that BIC selection works properly in model-based clustering, as well as in discriminant analysis. Fraley and Raftery (2002) includes a detailed justification for the use of BIC, based on previous references. A summary of the comparison of BIC with other techniques for model selection can also be found in Biernacki and Govaert (1999). The paper is organized as follows. Section 2 approaches the problem of the parsimonious classification of covariance matrices given by equation (6), focusing on its computation for the most interesting restrictions in terms of dimensionality reduction and interpretability. Throughout, we will only work with models based on the parsimonious levels of proportionality (VEE) and common principal components (VVE), although the extension to other levels is straightforward. Section 3 applies the previous theory to the estimation of Gaussian Mixture Models in cluster analysis and discriminant analysis, including some simulation examples for their illustration. Section 4 includes real data examples, where we will see the gain in interpretability that can arise from these solutions. Some conclusions are outlined in Section 5. Finally, in Appendices A and B we include theoretical results and technical details about the algorithms. Additional graphical material for the real data examples is provided in Appendix C.
## 2 Parsimonious Classification of Covariance Matrices

Given \(n_{1},\ldots,n_{k}\) independent observations from \(k\) groups with different distributions, and \(S_{1},\ldots,S_{k}\) the sample covariance matrices, a group classification may be provided according to different similarity criteria. In the general case, given a similarity criterion \(f\) depending on the sample covariance matrices and the sample lengths, the problem of classifying \(k\) covariance matrices in \(G\) classes, \(1\leq G\leq k\), typically consists of solving the equation \[\boldsymbol{P^{\bullet}}=\operatorname*{argmax}_{\boldsymbol{P}\in\mathscr{H}}\quad\sum_{g=1}^{G}f\Big{(}\left\{(S_{i},n_{i}):P_{i}=g\right\}\Big{)},\] where \(\mathscr{H}=\{\boldsymbol{P}\in\{1,\ldots,G\}^{k}:\forall\ g=1,\ldots,G\ \exists\ i\text{ such that }P_{i}=g\}\). In this work, we focus on the Gaussian case, proposing different similarity criteria based on the parsimonious levels that arise from the spectral decomposition of a covariance matrix. Multivariate procedures based on parsimonious decompositions assume that the theoretical covariance matrices \(\Sigma_{1},\ldots,\Sigma_{k}\) jointly satisfy one level \(\mathscr{M}\) out of the fourteen in Table 1. To elaborate on this idea, we now introduce some useful notation. In a parsimonious model \(\mathscr{M}\), we write \((\Sigma_{1},\ldots,\Sigma_{k})\in\mathscr{M}\) if these matrices share some common parameters \(C\) and have variable parameters \(\mathbf{V}=(V_{1},\ldots,V_{k})\) (specified in the model \(\mathscr{M}\)). We will denote by \(\Sigma(V_{i},C)\) the covariance matrix with the size, shape and orientation parameters associated with \((V_{i},C)\). Therefore, under the parsimonious level \(\mathscr{M}\), we are assuming that \[\Sigma_{i}=\Sigma(V_{i},C)\qquad i=1,\ldots,k.\] If the \(n_{i}\) observations of group \(i\) are independent and arise from a distribution \(N(\mu_{i},\Sigma_{i})\), then \(n_{i}S_{i}\) follows a d-dimensional Wishart distribution with parameters \(n_{i},\Sigma_{i}\). Therefore, given the level of parsimony \(\mathscr{M}\), it is natural to consider the maximized log-likelihood under the level \(\mathscr{M}\) as a similarity criterion for the covariance matrices. This allows us to measure their resemblance in the features associated with the common part of the decomposition in the theoretical model. Thus, the similarity criterion for the parsimonious level \(\mathscr{M}\) is \[f_{\mathscr{M}}\Big{(}\big{\{}(S_{i},n_{i}),i=1,\ldots,r\big{\}}\Big{)}=\max_{V_{1},\ldots,V_{r},C}\sum_{i=1}^{r}\ \log\Bigl{(}W_{d}\bigl{(}n_{i}S_{i}|n_{i},\Sigma(V_{i},C)\bigr{)}\Bigr{)}.\] Consequently, given a level of parsimony \(\mathscr{M}\), the covariance matrix classification problem in \(G\) classes consists of solving the equation \[\boldsymbol{P^{\bullet}}=\operatorname*{argmax}_{\boldsymbol{P}\in\mathscr{H}}\ \sum_{g=1}^{G}f_{\mathscr{M}}\Big{(}\big{\{}(S_{i},n_{i}):P_{i}=g\big{\}}\Big{)}=\operatorname*{argmax}_{\boldsymbol{P}\in\mathscr{H}}\Biggl{(}\max_{V_{1},\ldots,V_{k},C_{1},\ldots,C_{G}}\ \sum_{g=1}^{G}\sum_{i:P_{i}=g}\log\Bigl{(}W_{d}\bigl{(}n_{i}S_{i}|n_{i},\Sigma(V_{i},C_{g})\bigr{)}\Bigr{)}\Biggr{)}.
\tag{7}\]

In order to avoid the combinatorial problem of maximizing within \(\mathscr{H}\), denoting the variable parameters by \(\mathbf{V}=(V_{1},\ldots,V_{k})\) and the common parameters by \(\mathbf{C}=(C_{1},\ldots,C_{G})\), we focus on the problem of maximizing \[W(\mathbf{P},\mathbf{V},\mathbf{C})=\sum_{g=1}^{G}\sum_{i:P_{i}=g}\ \log\!\left(W_{d}\!\left(n_{i}S_{i}|n_{i},\Sigma(V_{i},C_{g})\right)\right)\,\] since the value \(\mathbf{P}^{*}\) maximizing this function agrees with the optimal \(\mathbf{P}^{*}\) in (7). This problem will be referred to as **Classification \(\mathbf{G}\)-\(\mathbf{\mathscr{M}}\)**. From the expression of the d-dimensional Wishart density, we can see that maximizing \(W\) is equivalent to minimizing, with respect to the same parameters, the function \[\sum_{g=1}^{G}\sum_{i:P_{i}=g}n_{i}\left(\log\left|\Sigma(V_{i},C_{g})\right|+\mathrm{tr}\!\left(\Sigma(V_{i},C_{g})^{-1}S_{i}\right)\right),\] which can be achieved through a simple modification of the CEM algorithm (Classification Expectation Maximization, introduced in Celeux and Govaert (1992)), for any of the fourteen parsimonious levels. A sketch of the algorithm is presented here:

**Classification \(\mathbf{G}\)-\(\mathbf{\mathscr{M}}\):** Starting from an initial estimation \(\mathbf{C^{0}}=(C_{1}^{0},\ldots,C_{G}^{0})\) of the common parameters, which may be taken as the parameters of \(G\) different matrices \(S_{i}\) randomly chosen among \(S_{1},\ldots,S_{k}\), the \(m^{th}\) iteration consists of the following steps:

* **P-V step**: Given the common parameters \(\mathbf{C^{m}}=(C_{1}^{m},\ldots,C_{G}^{m})\), we maximize with respect to the partition \(\mathbf{P}\) and the variable parameters \(\mathbf{V}\). For each \(i=1,\ldots,k\), we compute \[\tilde{V}_{i,g}=\underset{V}{\mathrm{argmax}}\ W_{d}\!\left(n_{i}S_{i}|n_{i},\Sigma(V,C_{g})\right)\] for \(1\leq g\leq G\), and we define \[P_{i}^{m+1}=\underset{g\in\{1,\ldots,G\}}{\mathrm{argmax}}\ W_{d}\!\left(n_{i}S_{i}|n_{i},\Sigma(\tilde{V}_{i,g},C_{g})\right)\!.\]
* **V-C step:** Given the partition \(\mathbf{P^{m+1}}\), we compute the values \((\mathbf{V^{m+1}},\mathbf{C^{m+1}})\) maximizing \(W(\mathbf{P^{m+1}},\mathbf{V},\mathbf{C})\). The maximization can be done individually for each of the groups created, by maximizing for each \(g=1,\ldots,G\) the function \[(V_{i_{g,1}},\ldots,V_{i_{g,l}},C_{g})\longmapsto\sum_{s=1}^{l}\ \log\!\left(W_{d}\!\left(n_{i_{g,s}}S_{i_{g,s}}|n_{i_{g,s}},\Sigma(V_{i_{g,s}},C_{g})\right)\right)\,\] where \(\{i_{g,1},\ldots,i_{g,l}\}=\{i:P_{i}^{m+1}=g\}\).

The maximization for each of the 14 parsimonious levels can be done, for instance, with the techniques in Celeux and Govaert (1995). The methodology proposed therein for common orientation models uses modifications of the Flury algorithm (Flury and Gautschi, 1986). However, for these models we will use the algorithms subsequently developed by Browne and McNicholas (2014a,b), often implemented in the software available for parsimonious model fitting, which allow more efficient estimation of the common orientation parameters. For each of the fourteen parsimonious models, the variable parameters in the solution \(\mathbf{\hat{V}}\) may be computed as a function of the parameters \((\mathbf{\hat{P}},\mathbf{\hat{C}})\), the sample covariance matrices \(S_{1},\ldots,S_{k}\) and the sample lengths \(n_{1},\ldots,n_{k}\).
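As an illustration of the V-C step, the following R sketch (a simplification written for this note, not the algorithms of Appendix B) fits the proportionality level within a single class: given the matrices \(S_{i}\) assigned to the class, it alternates between the optimal sizes \(\gamma_{i}\) and the common determinant-one matrix \(C_{g}\).

```r
# Sketch: within-class V-C step under the proportionality level,
# Sigma_i = gamma_i * C_g with det(C_g) = 1 for the matrices assigned to class g.
fit_prop_class <- function(S_list, n, iter = 50) {
  d <- nrow(S_list[[1]])
  C <- diag(d)                        # initial common matrix
  gamma <- rep(1, length(S_list))
  for (it in seq_len(iter)) {
    Cinv  <- solve(C)
    gamma <- sapply(S_list, function(S) sum(diag(Cinv %*% S)) / d)   # optimal sizes
    A <- Reduce(`+`, Map(function(S, ni, gi) (ni / gi) * S, S_list, n, gamma))
    C <- A / det(A)^(1 / d)           # optimal common matrix with unit determinant
  }
  list(gamma = gamma, C = C)          # Sigma_i is estimated by gamma_i * C
}
```

In each case, once the common part \(C_{g}\) is fixed, the remaining variable parameters are recovered in closed form, in line with the observation above.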
Therefore, the function \(W\) could be written as \(W(\mathbf{P},\mathbf{C})\), and the maximization could be seen as a particular case of the coordinate descent algorithm explained in Bezdek et al. (1987). As it was already noted, we focus on the development of the algorithm only for two particular (the most interesting) parsimonious levels. First of all, we are going to keep models flexible enough to enable the solution of (6), when taking \(G=k\) (no grouping is assumed), to coincide with the unrestricted solution, \(\hat{\Sigma}_{i}=S_{i}\). The first six models do not verify this condition. For the last eight models, the numbers of parameters are \[\delta_{\mathrm{VOL}}\cdot 1+\delta_{\mathrm{SHAPE}}\cdot(d-1)+\delta_{\mathrm{ ORIENT}}\cdot\frac{d(d-1)}{2}\,\] where \(\delta_{\mathrm{VOL}},\delta_{\mathrm{SHAPE}}\) and \(\delta_{\mathrm{ORIENT}}\) take the value 1 if the given parameter is assumed to be common, and \(k\) if it is assumed to be variable between groups. When \(d\) and \(k\) are large, the main source of variation in the number of parameters is related to considering common or variable orientation, followed by considering common or variable shape. For example, if \(d=9,k=6\), the number of parameters related to each constraint are detailed in Table 2. Our primary motivation is exemplified through Table 2: to raise alternatives for the models with variable orientation. For that, we look for models with orientation varying in \(G\) classes, with \(1\leq G\leq k\). We consider the case where size and shape are variable across all groups (\(G\) different Common Principal Components, G-CPC) and also the case where shape parameters are additionally common within each of the \(G\) classes (proportionality to \(G\) different matrices, G-PROP). Apart from the parameter reduction, these models can provide an easier interpretation of the variables involved in the problem, which is often a hard task in multidimensional problems with several groups. We keep the size variable, since it does not cause a major increase in the number of parameters, and it is easy to interpret. Therefore, the models we are considering are: * **Classification G-CPC**: We are looking for \(G\) orthogonal matrices \(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{G})\) and a vector of indexes \(\boldsymbol{P}=(P_{1},\ldots,P_{k})\in\mathscr{H}\) such that \[\Sigma_{i}=\gamma_{i}\beta_{P_{i}}\Lambda_{i}\beta_{P_{i}}^{T}\quad i=1, \ldots,k\,\] where \(\boldsymbol{\gamma}=(\gamma_{1},\ldots,\gamma_{k})\) and \(\boldsymbol{\Lambda}=(\Lambda_{1},\ldots,\Lambda_{k})\) are the variable size and shape parameters. The number of parameters is \(k+k(d-1)+Gd(d-1)/2\). In the situation of Table 2, taking \(G=2\) the number of parameters is \(126\), while allowing for variable orientation it is \(276\). 
To solve (7), we have to find a vector of indexes \(\mathbf{\hat{P}}\), \(G\) orthogonal matrices \(\mathbf{\hat{\beta}}\) and variable parameters \(\mathbf{\hat{\gamma}}\) and \(\mathbf{\hat{\Lambda}}\) minimizing \[(\mathbf{P},\boldsymbol{\Lambda},\boldsymbol{\gamma},\boldsymbol{\beta})\longmapsto\sum_{g=1}^{G}\sum_{i:P_{i}=g}n_{i}\left(d\log\!\left(\gamma_{i}\right)+\frac{1}{\gamma_{i}}\operatorname{tr}\left(\Lambda_{i}^{-1}\beta_{g}^{T}S_{i}\beta_{g}\right)\right). \tag{8}\]
* **Classification G-PROP**: We are looking for \(G\) orthogonal matrices \(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{G})\), \(G\) shape matrices \(\mathbf{\Lambda}=(\Lambda_{1},\ldots,\Lambda_{G})\) and \(\boldsymbol{P}=(P_{1},\ldots,P_{k})\in\mathscr{H}\) such that \[\Sigma_{i}=\gamma_{i}\beta_{P_{i}}\Lambda_{P_{i}}\beta_{P_{i}}^{T}\quad i=1,\ldots,k\,\] where \(\boldsymbol{\gamma}=(\gamma_{1},\ldots,\gamma_{k})\) are the variable size parameters. The number of parameters is \(k+G(d-1)+Gd(d-1)/2\). In the situation of Table 2, the number of parameters if we take \(G=2\) is \(94\). To solve (7), we have to find a vector of indexes \(\mathbf{\hat{P}}\), \(G\) orthogonal matrices \(\boldsymbol{\hat{\beta}}\), \(G\) shape matrices \(\mathbf{\hat{\Lambda}}\) and the variable size parameters \(\mathbf{\hat{\gamma}}\) minimizing \[(\mathbf{P},\boldsymbol{\Lambda},\boldsymbol{\gamma},\boldsymbol{\beta})\longmapsto\sum_{g=1}^{G}\sum_{i:P_{i}=g}n_{i}\left(d\log\!\left(\gamma_{i}\right)+\frac{1}{\gamma_{i}}\operatorname{tr}\left(\Lambda_{g}^{-1}\beta_{g}^{T}S_{i}\beta_{g}\right)\right). \tag{9}\]

Explicit algorithms for finding the minimum of (8) and (9) are given in Section B.2 in the Appendix. The results given by both algorithms are illustrated in the following example, where we have randomly created \(100\) covariance matrices \(\Sigma_{1},\ldots,\Sigma_{100}\) according to \[\Sigma_{i}=X\!\left(\operatorname{U}(\alpha)\operatorname{Diag}(1,Y)\operatorname{U}(\alpha)^{T}\right)\quad i=1,\ldots,100\,\] where \(\operatorname{U}(\alpha)\) represents the rotation of angle \(\alpha\), \(\operatorname{Diag}(1,Y)\) is the diagonal matrix with entries \(1,Y\), and \(X,Y,\alpha\) are uniformly distributed random variables with distributions \[X\sim U\!\left(0.5,2\right)\,\ Y\sim U\!\left(0,0.5\right)\,\ \alpha\sim U\!\left(0,\pi\right)\,.\] For each \(i=1,\ldots,100\), we have taken \(S_{i}\) as the sample covariance matrix computed from \(200\) independent observations from a distribution \(N(0,\Sigma_{i})\), and we have applied \(4\)-CPC and \(4\)-PROP to obtain different classifications of \(S_{1},\ldots,S_{100}\). The partitions obtained by both methods allow us to classify the covariance matrices according to both criteria. Figure 1 shows the \(95\%\) confidence ellipses representing the sample covariance matrices associated with each class (coloured lines), together with the estimations of the common axes or the common proportional matrix within each class (black lines).

\begin{table} \begin{tabular}{l c c c} \hline & Size & Shape & Orientation \\ \hline Common & 1 & 8 & 36 \\ Variable & 6 & 48 & 216 \\ \hline \end{tabular} \end{table} Table 2: Number of parameters associated with each feature when \(k=6,d=9\).

## 3 Gaussian Mixture Models

In a Gaussian Mixture Model (GMM), data are assumed to be generated by a random vector with probability density function \[f(y)=\sum_{i=1}^{k}\pi_{i}\phi(y|\mu_{i},\Sigma_{i})\,\] where \(0\leq\pi_{i}\leq 1,\ \sum_{i=1}^{k}\pi_{i}=1\).
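A minimal R sketch of how this mixture density can be evaluated and sampled is given below; it relies on `dmvnorm()` and `rmvnorm()` from the **mvtnorm** package, an external dependency assumed here, and the function names are illustrative.

```r
# Sketch: evaluating and sampling the Gaussian mixture density f above.
library(mvtnorm)   # assumed dependency providing dmvnorm() and rmvnorm()

dgmm <- function(y, prop, mu, Sigma) {          # mu, Sigma: lists of length k
  sum(sapply(seq_along(prop), function(i)
    prop[i] * dmvnorm(y, mean = mu[[i]], sigma = Sigma[[i]])))
}

rgmm <- function(n, prop, mu, Sigma) {          # n independent draws
  comp <- sample(seq_along(prop), n, replace = TRUE, prob = prop)
  t(sapply(comp, function(i) rmvnorm(1, mean = mu[[i]], sigma = Sigma[[i]])))
}
```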
The idea of introducing covariance matrix restrictions given by the parsimonious decomposition in the estimation of GMMs has become a common tool for statisticians, and methods are implemented in many \(R\) packages. In this paper we use for the comparison the results given by the package _mclust_ (Fraley and Raftery, 2002; Scrucca et al., 2016), although there exist many other widely known ones (\(Rmixmod\): Lebret et al. (2015); \(mixtools\): Benaglia et al. (2009)). The aim of this section is to explore how we can fit GMMs in different contexts with the intermediate parsimonious models explained in Section 2, allowing the common part of the covariance matrices in the decomposition to vary between \(G\) classes. That is, with the same notation as in Section 2, we want to study GMMs with density function \[f(y)=\sum_{g=1}^{G}\sum_{i:P_{i}=g}\pi_{i}\phi\Big{(}y|\mu_{i},\Sigma(V_{i},C_{g})\Big{)}\, \tag{10}\] where \(\mathbf{P}=(P_{1},\ldots,P_{k})\in\mathscr{H}\) is a fixed vector of indexes, \(\mathbf{V}=(V_{1},\ldots,V_{k})\) are the variable parameters, \(\mathbf{C}=(C_{1},\ldots,C_{G})\) are the common parameters among classes and \(\Sigma(V_{i},C_{g})\) is the covariance matrix with the parameters given by \((V_{i},C_{g})\). The following subsections exploit the potential of these particular GMMs for cluster analysis and discriminant analysis. A more general situation where only part of the labels are known could also be considered, following the same line as in Dean et al. (2006), but it will not be discussed in this work. As already noted in the Introduction, the criterion we are going to use for model selection among all the estimated models is BIC (Bayesian Information Criterion), choosing the model with the highest value of the BIC approximation given by \[\text{BIC}=2\cdot\text{loglikelihood}-\log(N)\cdot p\,\] where \(N\) is the number of observations and \(p\) is the number of independent parameters to be estimated in the model. This criterion is used for the comparison of the intermediate models G-CPC and G-PROP with the fourteen parsimonious models estimated in the software \(R\) with the functions in the _mclust_ package. In addition, within the framework of discriminant analysis, the quality of the classification given by the best models, in terms of BIC, is also compared using cross validation techniques.

Figure 1: Classification of \(S_{1},\ldots,S_{100}\), represented by their 95% confidence ellipses. The first row shows the classes and axes estimations given by the 4-CPC model, and the second row shows the classes and proportional matrix estimations given by the 4-PROP model.

### Model-Based Clustering

Given \(y_{1},\ldots,y_{N}\) independent observations of a d-dimensional random vector, clustering methods based on fitting a GMM with k groups seek to maximize the log-likelihood function (2). From the fourteen possible restrictions considered in Celeux and Govaert (1995), we can compute fourteen different maximum likelihood solutions in which size, shape and orientation are common or not between the \(k\) covariance matrices.
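The BIC convention above is easy to apply by hand; as a small check, the following sketch reproduces the 2-PROP value that the paper reports later for the Iris data (Table 7, with \(N=150\) observations).

```r
# BIC as used throughout the paper: higher values are better.
bic_value <- function(loglik, p, N) 2 * loglik - log(N) * p
bic_value(-192.177, 35, 150)   # 2-PROP row of Table 7: approximately -559.727
```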
For a particular level \(\mathscr{M}\) in Table 1, the fitting requires the maximization of the the log-likelihood \[L\Big{(}\boldsymbol{\pi},\boldsymbol{\mu},\boldsymbol{V},C \Big{|}y_{1},\ldots,y_{N}\Big{)}=\] \[\sum_{j=1}^{N}\log\Biggl{(}\sum_{i=1}^{k}\pi_{i}\phi\Big{(}y_{j} \big{|}\mu_{i},\Sigma(V_{i},C)\Big{)}\Biggr{)}\,\] where \(\boldsymbol{\pi}=(\pi_{1},\ldots,\pi_{k})\) are the weights, with \(0\leq\pi_{i}\leq 1\), \(\sum_{i=1}^{k}\pi_{i}=1\), \(\boldsymbol{\mu}=(\mu_{1},\ldots,\mu_{k})\) the means, \(\boldsymbol{V}=(V_{1},\ldots,V_{k})\) the variable parameters and \(C\) the common parameters. Estimation under the parsimonious restriction is performed via the EM algorithm. In the GMM context, we can see the complete data as pairs \((y_{j},z_{j})\), where \(z_{j}\) is an unobserved random vector such that \(z_{i,j}=1\) if the observation \(y_{j}\) comes from distribution \(i\), and \(z_{i,j}=0\) otherwise. With the ideas of Section 2, we are going to fit Gaussian Mixture Models with parsimonious restrictions, but allowing the common parameters to vary between different classes. Assuming a parsimonious level of decomposition \(\mathscr{M}\) and a number \(G\in\{1,\ldots,k\}\) of classes, we are supposing that our data are independent observations from a distribution with density function (10). The log-likelihood function given a fixed vector of indexes \(\boldsymbol{P}\) is \[L_{\boldsymbol{P}}\Big{(}\boldsymbol{\pi},\boldsymbol{\mu}, \boldsymbol{V},\boldsymbol{C}\Big{|}y_{1},\ldots,y_{N}\Big{)}=\] \[\sum_{j=1}^{N}\log\Biggl{(}\sum_{g=1}^{G}\sum_{i:P_{i}=g}\pi_{i} \phi\Big{(}y_{j}\big{|}\mu_{i},\Sigma(V_{i},C_{g})\Big{)}\Biggr{)}.\] For each \(\boldsymbol{P}\in\mathscr{H}\), we can fit a model. In order to choose the best value for the vector of indexes \(\boldsymbol{P}\), we should compare the BIC values given by the different models estimated. As the number of parameters is the same, the best value for \(\boldsymbol{P}\) can be obtained by taking \[\boldsymbol{P}^{\star}=\operatorname*{argmax}_{P\in\mathscr{H}}\Bigl{[}\max_{ \boldsymbol{\pi},\boldsymbol{\mu},\boldsymbol{V},\boldsymbol{C}}\quad L_{P} \Big{(}\boldsymbol{\pi},\boldsymbol{\mu},\boldsymbol{V},\boldsymbol{C}\Big{|} y_{1},\ldots,y_{N}\Big{)}\Bigr{]}\.\] In order to avoid the combinatorial problem of maximizing within \(\mathscr{H}\), we can take \(\boldsymbol{P}\) as if it were a parameter, and we are going to focus on the problem of maximizing \[L\Big{(}\boldsymbol{\pi},\boldsymbol{\mu},\boldsymbol{P}, \boldsymbol{V},\boldsymbol{C}\Big{|}y_{1},\ldots,y_{N}\Big{)}=\] \[\sum_{j=1}^{N}\log\Biggl{(}\sum_{g=1}^{G}\sum_{i:P_{i}=g}\pi_{i} \phi\Big{(}y_{j}\big{|}\mu_{i},\Sigma(V_{i},C_{g})\Big{)}\Biggr{)}, \tag{11}\] that will be referred to as **Clustering G-\(\mathscr{M}\)**. Therefore, given the unobserved variables \(z_{i,j}\), for \(i=1,\ldots,k\) and \(j=1,\ldots,N\), the complete log-likelihood is \[CL\Big{(}\boldsymbol{\pi},\boldsymbol{\mu},\boldsymbol{P}, \boldsymbol{V},\boldsymbol{C}\Big{|}y_{1},\ldots,y_{N},z_{1,1},\ldots,z_{k,N} \Big{)}=\] \[\sum_{j=1}^{N}\left[\sum_{g=1}^{G}\sum_{i:P_{i}=g}z_{i,j}\log \Biggl{(}\pi_{i}\phi\Big{(}y_{j}\big{|}\mu_{i},\Sigma(V_{i},C_{g})\Big{)} \Biggr{)}\right]. \tag{12}\] The proposal of this section is to fit this model given a parsimonious level \(\mathscr{M}\) and fixed values of \(k\) and \(G\in\{1,\ldots,k\}\), introducing also constraints to avoid the unboundedness of the log-likelihood function (11). 
For this purpose, we introduce the determinant and shape constraints studied in Garcia-Escudero et al. (2020). For \(i=1,\ldots,k\), denote by \((\lambda_{i,1},\ldots,\lambda_{i,d})\) the diagonal elements of the shape matrix \(\Lambda_{i}\) (which may be the same within classes). We impose \(k\) constraints controlling the shape of each group, in order to avoid solutions that are almost contained in a subspace of lower dimension, and a size constraint in order to avoid the presence of very small clusters. Given \(c_{sh},c_{vol}\geq 1\), we impose \[\frac{\max_{l=1,\ldots,d}\lambda_{i,l}}{\min_{l=1,\ldots,d}\lambda_{i,l}}\leq c_{sh},\ i=1,\ldots,k,\qquad\frac{\max_{i=1,\ldots,k}\gamma_{i}}{\min_{i=1,\ldots,k}\gamma_{i}}\leq c_{vol}. \tag{13}\]

**Remark 1**.: _With these restrictions, the theoretical problem of maximizing (11) is well defined. If \(Y\) is a random vector following a distribution \(\mathbb{P}\), the problem consists in maximizing_ \[\mathrm{E}\Bigg{[}\log\Bigg{(}\sum_{g=1}^{G}\sum_{i:P_{i}=g}\pi_{i}\phi\Big{(}Y|\mu_{i},\Sigma\big{(}V_{i},C_{g}\big{)}\Big{)}\Bigg{)}\Bigg{]}=\int\log\Bigg{(}\sum_{g=1}^{G}\sum_{i:P_{i}=g}\pi_{i}\phi\Big{(}y|\mu_{i},\Sigma\big{(}V_{i},C_{g}\big{)}\Big{)}\Bigg{)}\,\mathrm{d}\mathbb{P}(y) \tag{14}\] _with respect to \(\boldsymbol{\pi},\boldsymbol{\mu},\boldsymbol{P},\boldsymbol{V},\boldsymbol{C}\), defined as above, and verifying (13). If \(\mathbb{P}_{N}\) stands for the empirical measure \(\mathbb{P}_{N}=(1/N)\sum_{i=1}^{N}\delta_{\{y_{i}\}}\), by replacing \(\mathbb{P}\) by \(\mathbb{P}_{N}\) we recover the original sample problem of maximizing (11) under the determinant and shape constraints (13). This approach guarantees that the objective function is bounded, allowing results to be stated in terms of existence and consistency of the solutions (see Section A in the Appendix)._

Now, we are going to give a sketch of the EM algorithm used for the estimation of these intermediate parsimonious clustering models, for each of the fourteen levels.

**Clustering G-\(\mathscr{M}\):** Starting from an initial solution of the parameters \(\boldsymbol{\pi^{0}},\boldsymbol{\mu^{0}},\boldsymbol{P^{0}},\boldsymbol{V^{0}},\boldsymbol{C^{0}}\), we have to repeat the following steps until convergence:

* **E step**: Given the current values of the parameters \(\boldsymbol{\pi^{m}},\boldsymbol{\mu^{m}},\boldsymbol{P^{m}},\boldsymbol{V^{m}},\boldsymbol{C^{m}}\), we compute the posterior probabilities \[z_{i,j}=\frac{\pi_{i}^{m}\phi\Big{(}y_{j}|\mu_{i}^{m},\Sigma\big{(}V_{i}^{m},C_{P_{i}}^{m}\big{)}\Big{)}}{\sum_{l=1}^{k}\pi_{l}^{m}\phi\Big{(}y_{j}|\mu_{l}^{m},\Sigma\big{(}V_{l}^{m},C_{P_{l}}^{m}\big{)}\Big{)}}\quad\text{for }i=1,\ldots,k,\ j=1,\ldots,N. \tag{15}\]
* **M step**: In this step, we have to maximize (12) given the expected values \(\{z_{i,j}\}_{i,j}\). The optimal values for \(\boldsymbol{\pi^{m+1}},\boldsymbol{\mu^{m+1}}\) are given by (3).
With these optimal values, if we denote \(S_{i}=(1/n_{i})\sum_{j=1}^{N}z_{i,j}(y_{j}-\mu_{i}^{m+1})(y_{j}-\mu_{i}^{m+1})^ {T}\), then we have to find the values \(\boldsymbol{P^{m+1}},\boldsymbol{V^{m+1}},\boldsymbol{C^{m+1}}\) verifying the determinant and shape constraints (13) maximizing (P,V,C) \(\longmapsto\) \[CL\Big{(}\boldsymbol{\pi},\boldsymbol{\mu},\boldsymbol{P},\boldsymbol{V}, \boldsymbol{C}\Big{|}y_{1},\ldots,y_{N},z_{1,1},\ldots,z_{k,N}\Big{)}\.\] If we remove the determinant and shape constraints, the solution of this maximization coincides with the classification problem presented in Section 2 for the computed values of \(n_{1},\ldots,n_{k}\) and \(S_{1},\ldots,S_{k}\). A simple modification of that algorithm, computing on each step the optimal size and shape constrained parameters (instead of the unconstrained version) with the _optimal truncation_ algorithm presented in Garcia-Escudero et al. (2020) allows the maximization to be completed. Determinant and shape constraints can be incorporated in the algorithms together with the parsimonious constraints following the lines developed in Garcia-Escudero et al. (2022). As already noted in Section 2, we keep only the clustering models G-CPC and G-PROP, the most interesting in terms of parameter reduction and interpretability. For these models, explicit algorithms are explained in Section B.3 in the Appendix. Now, we are going to illustrate the results of the algorithms in two simulation experiments: * **Clustering G-CPC**: In this example, we simulate \(n=100\) observations from each of 6 Gaussian distributions, with means \(\mu_{1},\ldots,\mu_{6}\) and covariance matrices verifying \[\begin{split}\Sigma_{i}=&\lambda_{i}\beta_{1} \Lambda_{i}\beta_{1}^{T},\quad i=1,2,3\\ \Sigma_{i}=&\lambda_{i}\beta_{2}\Lambda_{i}\beta_{ 2}^{T},\quad i=4,5,6\.\end{split}\] In Figure 2, we can see in the first plot the 95 % confidence ellipses of the six theoretical Gaussian distributions together with the 100 independent observations simulated from these distributions. The second plot represents the clusters created by the maximum likelihood solution for the 2-CPC model, taking \(c_{sh}=c_{vol}=100\). The numbers labeling the ellipses represent the class of covariance matrices sharing the orientation. Finally, the third plot represents the best solution estimated by _mclust_ for \(k=6\), corresponding to the parsimonious model VEV, with equal shape and variable size and orientation. The BIC value in the 2-CPC model (31 d.f.) is -3937.08, whereas the best model VEV (30 d.f.) estimated with _mclust_ has BIC value -3960.07. Therefore, the GMM estimated with the 2-CPC restriction has higher BIC than all the parsimonious models. Finally, the number of observations assigned to different clusters from the original ones is 82 for the 2-CPC model and 91 for the VEV model. * **Clustering G-PROP**: In this example, we simulate \(n=100\) observations from each of 6 Gaussian distributions, with means \(\mu_{1},\ldots,\mu_{6}\) and covariance matrices verifying: \[\Sigma_{i}= \lambda_{i}A_{1},\quad i=1,2,3\] \[\Sigma_{i}= \lambda_{i}A_{2},\quad i=4,5,6\.\] Figure 3 is analogous to Figure 2, but in the proportionality case. The BIC value for the 2-PROP model (27 d.f.) with \(c_{sh}=c_{vol}=100\) is -3873.127, whereas the BIC value for the best model fitted by _mclust_ is -3919.796, which corresponds to the unrestricted model VVV (35 d.f.). 
Now, the number of observations wrongly assigned to the source groups is 64 for the 2-PROP model, while it is 71 for the VVV model. To evaluate the sensitivity of our proposal for the detection of the true underlying model, we have used the models described in the two previous examples. Once a model and a particular size \(n\) (=50, 100, 200) have been chosen, the simulation planning produces a sample containing \(n\) random elements generated from each \(N(\mu_{i},\Sigma_{i}),\ i=1,\ldots,6\). We repeated every simulation plan 1000 times, comparing for every sample the BIC obtained for the underlying clustering model vs the best parsimonious model estimated by _mclust_. Table 3 includes the proportions of times in which the 2-CPC or 2-PROP model improves the best _mclust_ model for each value of \(n\). Of course, the accuracy of the approach should depend on the dimension, the number of groups, the overlapping... However, even in the case of a large overlapping, as in the present examples, the proportions reported in Table 3 show that moderate values of \(n\) suffice to get very high proportions of success.

\begin{table} \begin{tabular}{c c c c} \hline Example & n=50 & n=100 & n=200 \\ \hline 2-CPC & 0.570 & 0.927 & 1.000 \\ 2-PROP & 0.933 & 0.999 & 1.000 \\ \hline \end{tabular} \end{table} Table 3: Proportions of times in which clustering 2-CPC or 2-PROP model improves the best _mclust_ model in terms of BIC, for each simulation size \(n\).

### Discriminant Analysis

The parsimonious model introduced in Bensmail and Celeux (1996) for discriminant analysis has been developed in conjunction with model-based clustering. The \(R\) package _mclust_ (Fraley and Raftery, 2002; Scrucca et al., 2016) also includes functions for fitting these models, denoted by EDDA (Eigenvalue Decomposition Discriminant Analysis). In this context, given a parsimonious level \(\mathscr{M}\) and a number \(G\) of classes, we can also consider fitting an intermediate model for each fixed \(\textbf{{P}}\in\mathscr{H}\), by maximizing the complete log-likelihood \[CL_{\textbf{{P}}}\Big{(}\boldsymbol{\pi},\boldsymbol{\mu},\textbf{{V}},\textbf{{C}}\Big{|}y_{1},\ldots,y_{N},z_{1,1},\ldots,z_{k,N}\Big{)}=\sum_{j=1}^{N}\left[\sum_{g=1}^{G}\sum_{i:P_{i}=g}z_{i,j}\log\left(\pi_{i}\phi\Big{(}y_{j}|\mu_{i},\Sigma(V_{i},C_{g})\Big{)}\right)\right]. \tag{16}\] Model comparison is done through BIC, and consequently we could try to choose \(P\) maximizing the log-likelihood (11). However, given that in the model fitting we are maximizing the complete log-likelihood (16), it is not unreasonable to try to find the value of \(P\) maximizing (16). Proceeding in this manner, we can think of \(P\) as a parameter, and the problem consists in maximizing (12). Model estimation is simple from the model-based clustering algorithms: with a single iteration of the M step, we can compute the values of the parameters. A new set of observations can be classified by computing the posterior probabilities with formula (15) of the E step and assigning each new observation to the group with the highest posterior probability. Since the groups are known, the complete log-likelihood (12) is bounded under mild conditions, and it is not required to impose eigenvalue constraints, although it may be interesting in some examples with almost degenerate variables.
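A minimal sketch of this classification rule in R is given below (again relying on `dmvnorm()` from the assumed **mvtnorm** dependency, with illustrative helper names): given fitted weights, means and covariance matrices, a new observation is assigned to the group with the highest posterior probability, in the spirit of (15) and of the Bayes rule discussed in Remark 2 below.

```r
# Sketch: assign y to the group maximizing pi_i * phi(y | mu_i, Sigma_i).
library(mvtnorm)
classify_bayes <- function(y, prop, mu, Sigma) {
  post <- sapply(seq_along(prop), function(i)
    prop[i] * dmvnorm(y, mean = mu[[i]], sigma = Sigma[[i]]))
  which.max(post)   # dividing post by sum(post) gives the posterior probabilities
}
```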
To summarize the quality of the classification given by the best models (selected through BIC) in the different examples, other indicators based directly on classification errors are provided:

* **MM**: Model Misclassification, or training error. Proportion of observations misclassified by the model fitted with all observations.
* **LOO**: Leave One Out error.
* **CV(K,p):** Cross Validation error. Considering each observation as labeled or unlabeled with probability \(p\) and \(1-p\), we compute the proportion of unlabeled observations misclassified by the model fitted with the labeled observations. The indicator CV(K,p) represents the mean of the proportions obtained in K repetitions of the process. When several classification methods are compared, the same K random partitions are used to compute the values of this indicator.

Figure 2: From left to right: 1. Theoretical Gaussian distributions and observations simulated from each distribution. 2. Solution estimated by clustering through 2-CPC model. 3. Best clustering solution estimated by _mclust_ in terms of BIC.

Figure 3: From left to right: 1. Theoretical Gaussian distributions and observations simulated from each distribution. 2. Solution estimated by clustering through 2-PROP model. 3. Best clustering solution estimated by _mclust_ in terms of BIC.

In line with the previous section, only the discriminant analysis models G-CPC and G-PROP are considered. Tables 4 and 5 show the results of applying these models to the simulation examples of Figures 2 and 3. In both situations, the classification obtained with our model slightly improves that given by _mclust_. As we did in the clustering setting, in order to evaluate the sensitivity of our proposal for the detection of the true underlying model, simulations have been repeated 1000 times, for each size \(n\) (=30, 50, 100, 200). Table 6 shows the proportions of times in which the 2-CPC or 2-PROP model improves the best _mclust_ model for each value of \(n\).

**Remark 2**.: _In discriminant analysis, the weights \(\boldsymbol{\pi}=(\pi_{1},\ldots,\pi_{k})\) might not be considered as parameters. Model-based methods assume that observations from the \(i^{th}\) group follow a distribution with density function \(f(\cdot,\theta_{i})\). If \(\pi_{i}\) is the proportion of observations of group \(i\), the classifier minimizing the expected misclassification rate is known as the Bayes classifier, and it assigns an observation \(y\) to the group with the highest posterior probability_ \[P\big{(}y\in\mathrm{Group\ i}\big{)}=\frac{\pi_{i}f(y,\theta_{i})}{\sum_{l=1}^{k}\pi_{l}f(y,\theta_{l})}. \tag{17}\] _The values of \(\boldsymbol{\pi},\theta_{1},\ldots,\theta_{k}\) are usually unknown, and the classification is performed with estimations \(\boldsymbol{\hat{\pi}},\hat{\theta}_{1},\ldots,\hat{\theta}_{k}\). Whereas \(\hat{\theta}_{1},\ldots,\hat{\theta}_{k}\) are always parameters estimated from the sample, the values of \(\boldsymbol{\hat{\pi}}\) may be seen as part of the classification rule, if we think that they represent a characteristic of the particular sample we are classifying, or as real parameters, if we assume that the observations \((z_{j},y_{j})\) arise from a GMM such that_ \[z_{j}\sim\mathrm{mult}\Big{(}1,\{1,\ldots,k\},\{\pi_{1},\ldots,\pi_{k}\}\Big{)}\,\qquad y_{j}\big{|}z_{j}\sim f\big{(}\cdot,\theta_{z_{j}}\big{)}\,\] _where \(\mathrm{mult}()\) denotes the multinomial distribution, and the weights verify \(0\leq\pi_{i}\leq 1\), \(\sum_{i=1}^{k}\pi_{i}=1\)._
_In accordance with mclust, for model comparison we are not considering \(\boldsymbol{\pi}\) as parameters, although their consideration would only mean adding a constant to all the BIC values computed. However, in order to define the theoretical problem, the situation where we consider \(\boldsymbol{\pi}\) as a parameter is more interesting. If \((Z,Y)\) is a random vector following a distribution \(\mathbb{P}\) in \(\{1,\ldots,k\}\times\mathbb{R}^{d}\), the theoretical problem consists in maximizing_ \[\mathrm{E}\bigg{[}\sum_{g=1}^{G}\sum_{i:P_{i}=g}\mathrm{I}(Z=i)\log\!\left(\pi_{i}\phi\Big{(}Y|\mu_{i},\Sigma(V_{i},C_{g})\Big{)}\right)\bigg{]}=\int\sum_{g=1}^{G}\sum_{i:P_{i}=g}\mathrm{I}(z=i)\log\!\left(\pi_{i}\phi\Big{(}y|\mu_{i},\Sigma(V_{i},C_{g})\Big{)}\right)\mathrm{d}\mathbb{P}(z,y) \tag{18}\] _with respect to the parameters \(\boldsymbol{\pi},\boldsymbol{\mu},\boldsymbol{P},\boldsymbol{V},\boldsymbol{C}\). Given \(N\) observations \((z_{j},y_{j}),\ j=1,\ldots,N\) of \(\mathbb{P}\), the problem of maximizing (18) agrees with the sample problem presented before the remark when taking the empirical measure \(\mathbb{P}_{N}\), with the obvious relation \(z_{i,j}=\mathrm{I}(z_{j}=i)\). Arguments like those presented in Section A in the Appendix for the cluster analysis problem would give existence and consistency of solutions also in this setting._

\begin{table} \begin{tabular}{c c c c c} \hline \hline Example & n=30 & n=50 & n=100 & n=200 \\ \hline 2-CPC & 0.443 & 0.782 & 0.975 & 1.000 \\ 2-PROP & 0.971 & 1.000 & 1.000 & 1.000 \\ \hline \hline \end{tabular} \end{table} Table 6: Proportions of times in which discriminant analysis 2-CPC or 2-PROP model improves the best _mclust_ model in terms of BIC, for each simulation size \(n\).

\begin{table} \begin{tabular}{c|c c c c c c} \hline \hline model & loglik & df & BIC & MM & LOO & CV(300,0.9) \\ \hline _mclust_: VVV & -1874.865 & 30 & -3941.637 & 66/600 & 71/600 & 0.1187 \\ 2-CPC & -1874.74 & 26 & **-3915.801** & 65/600 & 69/600 & 0.1161 \\ \hline \hline \end{tabular} \end{table} Table 4: Classification results for data in Figure 2 for the best _mclust_ model and 2-CPC.

\begin{table} \begin{tabular}{c|c c c c c c} \hline \hline model & loglik & df & BIC & MM & LOO & CV(300,0.9) \\ \hline _mclust_: VVV & -1852.765 & 30 & -3897.439 & 62/600 & 69/600 & 0.1102 \\ 2-PROP & -1853.056 & 22 & **-3846.845** & 64/600 & 68/600 & 0.1083 \\ \hline \hline \end{tabular} \end{table} Table 5: Classification results for data in Figure 3 for the best _mclust_ model and 2-PROP.

## 4 Real Data Examples

To illustrate the usefulness of the G-CPC and G-PROP models in both settings, we show four real data examples in which our models outperform the best parsimonious models fitted by _mclust_, in terms of BIC. The first two examples are intended to illustrate the methods in simple and well-known data sets, while the latter two involve greater complexity.

### Cluster Analysis: IRIS

Here we revisit the famous _Iris data set_, which consists of observations of four features (length and width of sepals and petals) of 50 samples of each of three species of Iris (setosa, versicolor and virginica), and is available in the base distribution of \(R\). We apply the functions of the package _mclust_ for model-based clustering, setting the number of clusters to search for equal to 3, to obtain the best parsimonious model in terms of BIC value. Table 7 compares this model with the models 2-CPC and 2-PROP, fitted with \(c_{sh}=c_{vol}=100\).
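For reference, the _mclust_ side of this comparison can be reproduced with a call along the following lines (a sketch only; the 2-CPC and 2-PROP fits come from the scripts in the supplementary repository).

```r
# Sketch: best parsimonious model for the Iris measurements with 3 clusters.
library(mclust)
fit <- Mclust(iris[, 1:4], G = 3)
fit$modelName                        # selected parsimonious level (VEV in Table 7)
fit$bic                              # its BIC value
table(fit$classification, iris$Species)
```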
\begin{table} \begin{tabular}{c|c c c c} \hline model & loglik & df & BIC & MM \\ \hline _mclust_: VEV & -186.074 & 38 & -562.550 & 5/150 \\ 2-CPC & -185.538 & 38 & -561.480 & 5/150 \\ 2-PROP & -192.177 & 35 & **-559.727** & 4/150 \\ \hline \end{tabular} \end{table} Table 7: Iris data solutions for clustering with _mclust_, 2-CPC and 2-PROP.

With some abuse of notation, we include in the table the Model Misclassification (MM), representing here the number of observations assigned to clusters different from the original groups, after identifying the clusters created with the original groups in a logical manner. From Table 7 we can appreciate that the best clustering model in terms of BIC is the 2-PROP model. In Figure 4 we can see the clusters created by this model. These clusters coincide with the real groups, except for four observations. From this example, we can also see the advantage of the intermediate models G-CPC and G-PROP in terms of interpretability. In the solution found with G-PROP, the covariance matrices associated with two of the three clusters are proportional. Each cluster represents a group of individuals with similar features, which, in the absence of labels, we could see as a subclassification within the Iris species. In this subclassification associated with the groups with proportional covariance matrices, both groups share not only the principal directions, but also the same proportion of variability between the directions. In many biological studies, principal components are of great importance. When working with phenotypic variables, principal components may be interpreted as "growing directions" (see e.g. Thorpe (1983)). From the estimated model, we can conclude that in the Iris data it is reasonable to think that there are three groups, two of them with a similar "growing pattern", since not only the principal components are the same, but also the shape is common. However, this biological interpretation will become even more evident in the following example.

### Discriminant Analysis: CRABS

The data set consists of measures of 5 features over a set of 200 crabs from two species, orange and blue, and from both sexes, and it is available in the \(R\) package _MASS_ (Venables and Ripley, 2002). For each species and sex (labeled OF, OM, BF, BM) there are 50 observations. The variables are measures in mm of the following features: frontal lobe (FL), rear width (RW), carapace length (CL), carapace width (CW) and body depth (BD). Applying the classification function of the _mclust_ library, the best parsimonious model in terms of BIC is EEV. Table 8 shows the result for the EEV model, together with the discriminant analysis models 2-CPC and 2-PROP, with \(c_{sh}=c_{vol}=100000\) (with these values, the solutions agree with the unrestricted solutions). The results show that the comparison given by BIC can differ from those obtained by cross validation techniques, partially because BIC mainly measures the fit of the data to the model. However, in the parsimonious context, model selection is usually performed via BIC, in order to avoid the very time-consuming process of evaluating every possible model with cross validation techniques. Figure A1 represents the solution estimated by the 2-PROP model. The solution given by this model allows for a better biological interpretation than the one given by the parsimonious model EEV, where orientation varies across the 4 groups, making the comparison quite complex.
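A sketch of the corresponding EDDA fit with _mclust_ for the crabs data is shown below (species-by-sex as the class variable); the 2-CPC and 2-PROP fits again come from the authors' scripts, and the object names are illustrative.

```r
# Sketch: EDDA (parsimonious discriminant analysis) on the crabs measurements.
library(MASS)     # crabs data
library(mclust)   # MclustDA() with modelType = "EDDA"
X  <- crabs[, c("FL", "RW", "CL", "CW", "BD")]
cl <- paste(crabs$sp, crabs$sex)     # the four groups labeled OF, OM, BF, BM in the text
fit <- MclustDA(X, class = cl, modelType = "EDDA")
summary(fit)                         # reports the selected model (EEV in Table 8)
```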
\begin{table} \begin{tabular}{c|c c c c c c c} \hline model & loglik & df & BIC & MM & LOO & CV(300,0.8) & CV(300,0.95) \\ \hline _mclust_: EEV & -1247.693 & 65 & -2839.776 & 8/200 & 9/200 & 0.0513 & 0.0521 \\ 2-CPC & -1271.470 & 60 & -2860.839 & 7/200 & 9/200 & 0.0536 & 0.0514 \\ 2-PROP & -1278.906 & 52 & **-2833.324** & 8/200 & 11/200 & 0.0546 & 0.0613 \\ \hline \end{tabular} \end{table} Table 8: Crabs data solutions for discriminant analysis with _mclust_, 2-CPC and 2-PROP.

Figure 4: Clustering obtained from 2-PROP model in the Iris data set. Color represents the clusters created. The ellipses are the contours of the estimated mixture densities, grouped into the classes given by indexes in black. Point shapes represent the original groups. Observations lying on different clusters from the originals are marked with red circles.

In the 2-PROP model, the groups of males of both species share proportional matrices, and the same is true for the females. Returning to the biological interpretation of the previous example, under the 2-PROP model we can state that crabs of the same sex have the same "growing pattern", despite being from different species.

### Cluster Analysis: GENE EXPRESSION CANCER

In this example, we work with the _Gene expression cancer RNA-Seq Data Set_, which can be downloaded from the UCI Machine Learning Repository. This data set is part of the data collected by "The Cancer Genome Atlas Pan-Cancer analysis project" (Weinstein et al., 2013). The data set we are working with consists of a random extraction of gene expressions of patients having different types of tumor: BRCA (breast carcinoma), KIRC (kidney renal clear-cell carcinoma), COAD (colon adenocarcinoma), LUAD (lung adenocarcinoma) and PRAD (prostate adenocarcinoma). In total, the data set contains the information of 801 patients, and for each patient we have information on 20531 variables, which are the RNA sequencing values of 20531 genes. To reduce the dimensionality and to apply model-based clustering algorithms, we have removed the genes with almost zero sum of squares (\(<10^{-5}\)) and applied PCA to the remaining genes. We have taken the first 14 principal components, the minimum number of components retaining more than 50 % of the total variance. Applying model-based clustering methods, looking for 5 groups, to this reduced data set, we have found that 3-CPC, fitted with \(c_{sh}=c_{vol}=1000\), improves the BIC value obtained by the best parsimonious model estimated by _mclust_. The results obtained from 3-CPC, presented in Table 9, significantly improve the assignment error made by _mclust_. Figure A2 shows the projection of the solution obtained by 3-CPC onto the first six principal components computed in the preprocessing steps.

### Discriminant Analysis: ITALIAN OLIVE OIL

The data set contains information about the composition in percentage of eight fatty acids (palmitic, palmitoleic, stearic, oleic, linoleic, linolenic, arachidonic and eicosenoic) found in the lipid fraction of 572 Italian olive oils, and it is available in the \(R\) package _pdfCluster_ [15]. The olive oils are labeled according to a two-level classification: 9 different areas that are grouped at the same time into three different regions.

* SOUTH: Apulia North, Calabria, Apulia South, Sicily.
* SARDINIA: Sardinia inland, Sardinia coast.
* CENTRE-NORTH: Umbria, Liguria east, Liguria west.
In this example, we have evaluated the performance of different discriminant analysis models, for the problem of classifying the olive oils between areas. The best parsimonious model fitted with _mclust_ is the VVE model, with variable size and shape and equal orientation. Note that in this example, due to the dimension \(d=8\), there is a significant difference in the number of parameters between models with common or variable orientation. Therefore, BIC selection will tend to choose models with common orientation, despite the fact that this hypothesis might not be very precise. This reason suggests that intermediate models could be of great interest also in this example. Given that the last variable _eicosenoic_ is almost degenerated in some areas, we fit the models with \(c_{sh}=c_{vol}=10000\), and the shape constraints are effective in some groups. We have found 3 different intermediate models improving the BIC value obtained with _mclust_. Results are displayed in Table 10. The best solution found in terms of BIC is given by the 3-CPC model, which is also the solution with the best values for the other indicators. The classification of the areas in classes given in this solution is: * CLASS 1: Umbria. * CLASS 2: Apulia North, Calabria, Apulia South, Sicily. * CLASS 3: Sardinia inland, Sardinia coast, Liguria east, Liguria west. Note that areas in class 2 exactly agree with areas from the South Region. This classification coincides with the separation in classes given by 3-PROP, whereas 2-PROP model grouped together class 1 and class 3. These facts support that our intermediate models have been able to take advantage of the apparent difference in the structure of the covariance matrices from the South region and the others. When we are looking for a three-class separation, instead of splitting the areas from the Centre-North and Sardinia into these two regions, all Centre-North and Sardinia areas are grouped together, except Umbria, which forms a group alone. Figure A3 represents the solution in the principal components of the group Umbria, and we can appreciate the characteristics of this area. The plot corresponding to the second and third variables allows us to see clear differences in some of its principal components. Additionally, we can see that it is also the area with less variability in many directions. In conclusion, a different behavior of the variability in the olive oils from this area seems to be clear. This could be related to the geographical situation of Umbria (the only non-insular and non-coastal area under consideration). \begin{table} \begin{tabular}{c|c c c c} \hline model & loglik & df & BIC & MM \\ \hline _mclust_: VVV & -44121.24 & 599 & -92247.32 & 64/801 \\ 3-CPC & -44561.12 & 417 & **-91910.25** & 6/801 \\ \hline \end{tabular} \end{table} Table 9: Cancer data solutions for clustering with _mclust_ and 3-CPC. ## 5 Conclusions and further directions Cluster analysis of structured data opens up interesting research prospects. This fact is widely known and used in applications where the data themselves share some common structure, and thus clustering techniques are a key tool in functional data analysis. More recently, the underlying structures of the data have increased in complexity, leading, for example, to consider probability distributions as data, and to use innovative metrics, such as earth-mover or Wasserstein distances. This configuration has been used in cluster analysis, for example, in del Barrio et al. 
(2019), from a classical perspective, but also including new perspectives: meta-analysis of procedures, aggregation facilities... Nevertheless, to the best of our knowledge, this is the first occasion in which a clustering procedure is used as a selection step (of an intermediate model) in an estimation problem. Our proposal allows improvements in the estimation process and, arguably, often a gain in the interpretability of the estimation thanks to the chosen framework: classification through the Gaussian Mixture Model. The presented methodology enhances the so-called parsimonious model, leading to the inclusion of intermediate models. They are linked to geometrical considerations on the ellipsoids associated with the covariance matrices of the underlying populations that compose the mixture. These considerations are precisely the essence of the parsimonious model. The intermediate models arise from clustering covariance matrices, considered as structured data, using a similarity measure based on the likelihood. Clustering these objects through other similarity measures could also be appropriate when looking for tools with different goals. In particular, we emphasize the possibility of clustering based on metrics like the Bures-Wasserstein distance. The role played here by BIC would have to be tested in the corresponding configurations or, alternatively, replaced by appropriate penalties for choosing between other hierarchical models. Feasibility of the proposal is an essential requirement for a serious assessment of a statistical tool. The algorithms considered in the paper are simple adaptations of the Classification Expectation Maximization algorithm, but we think that they could still be improved. We will pursue this challenge, looking also for feasible computations for similarities associated with new pre-established objectives. In summary, throughout the paper we have used clustering to explore similarities between groups according to predetermined patterns. In this wider setup, clustering is not a goal in itself; rather, it can be an important tool for specialized analyses.

### Supplementary material

GitHub repository containing the R scripts with the algorithms and the workflow necessary to reproduce the results of this work. Simulation data of the examples are also included. ([https://github.com/rvitores/ImprovingModelChoice](https://github.com/rvitores/ImprovingModelChoice))

### Competing interest

The authors have no relevant financial or non-financial interests to disclose.

\begin{table} \begin{tabular}{c|c c c c c c c} \hline \hline model & loglik & df & BIC & MM & LOO & CV(300,0.8) & CV(300,0.95) \\ \hline _mclust_: VVE & -20595.49 & 172 & -42283.03 & 12/572 & 20/572 & 0.0375 & 0.0363 \\ 2-CPC & -20452.64 & 200 & -42175.11 & 10/572 & 18/572 & 0.0369 & 0.0281 \\ 3-CPC & -20332.93 & 228 & **-42113.47** & 9/572 & 16/572 & 0.0365 & 0.0278 \\ 3-PROP & -20521.33 & 186 & -42223.60 & 16/572 & 27/572 & 0.0464 & 0.0463 \\ \hline \hline \end{tabular} \end{table} Table 10: Olive oil discriminant analysis with _mclust_, 2-CPC, 3-CPC and 3-PROP.
2301.08486
Superpolynomial Lower Bounds for Learning Monotone Classes
Koch, Strassle, and Tan [SODA 2023], show that, under the randomized exponential time hypothesis, there is no distribution-free PAC-learning algorithm that runs in time $n^{\tilde O(\log\log s)}$ for the classes of $n$-variable size-$s$ DNF, size-$s$ Decision Tree, and $\log s$-Junta by DNF (that returns a DNF hypothesis). Assuming a natural conjecture on the hardness of set cover, they give the lower bound $n^{\Omega(\log s)}$. This matches the best known upper bound for $n$-variable size-$s$ Decision Tree, and $\log s$-Junta. In this paper, we give the same lower bounds for PAC-learning of $n$-variable size-$s$ Monotone DNF, size-$s$ Monotone Decision Tree, and Monotone $\log s$-Junta by~DNF. This solves the open problem proposed by Koch, Strassle, and Tan and subsumes the above results. The lower bound holds, even if the learner knows the distribution, can draw a sample according to the distribution in polynomial time, and can compute the target function on all the points of the support of the distribution in polynomial time.
Nader H. Bshouty
2023-01-20T09:31:46Z
http://arxiv.org/abs/2301.08486v2
# Superpolynomial Lower Bounds for Learning Monotone Classes ###### Abstract Koch, Strassle, and Tan [SODA 2023], show that, under the randomized exponential time hypothesis, there is no distribution-free PAC-learning algorithm that runs in time \(n^{\tilde{O}(\log\log s)}\) for the classes of \(n\)-variable size-\(s\) DNF, size-\(s\) Decision Tree, and \(\log s\)-Junta by DNF (that returns a DNF hypothesis). Assuming a natural conjecture on the hardness of set cover, they give the lower bound \(n^{\Omega(\log s)}\). This matches the best known upper bound for \(n\)-variable size-\(s\) Decision Tree, and \(\log s\)-Junta. In this paper, we give the same lower bounds for PAC-learning of \(n\)-variable size-\(s\) Monotone DNF, size-\(s\) Monotone Decision Tree, and Monotone \(\log s\)-Junta by DNF. This solves the open problem proposed by Koch, Strassle, and Tan and subsumes the above results. The lower bound holds, even if the learner knows the distribution, can draw a sample according to the distribution in polynomial time, and can compute the target function on all the points of the support of the distribution in polynomial time. ## 1 Introduction In the distribution-free PAC learning model [12], the learning algorithm of a class of functions \(C\) has access to an unknown target function \(f\in C\) through labeled examples \((x,f(x))\) where \(x\) are drawn according to an unknown but fixed probability distribution \(\mathcal{D}\). For a class of hypothesis \(H\supseteq C\), we say that the learning algorithm \(\mathcal{A}\)_PAC-learns_\(C\)_by_\(H\) in time \(T\) and error \(\epsilon\) if for every target \(f\in C\) and distribution \(\mathcal{D}\), \(\mathcal{A}\) runs in time \(T\) and outputs a hypothesis \(h\in H\) which, with probability at least \(2/3\), is \(\epsilon\)-close to \(f\) with respect to \(\mathcal{D}\). That is, satisfies \(\mathbf{Pr}_{\boldsymbol{x}\sim\mathcal{D}}[f(\boldsymbol{x})\neq h( \boldsymbol{x})]\leq\epsilon\). Koch et al., [9], show that, under the randomized exponential time hypothesis (ETH), there is no PAC-learning algorithm that runs in time \(n^{\tilde{O}(\log\log s)}\) for the classes of \(n\)-variable size-\(s\) DNF, size-\(s\) Decision Tree and \(\log s\)-Junta by DNF. Assuming a natural conjecture on the hardness of set cover, they give the lower bound \(n^{\Omega(\log s)}\). Their lower bound holds, even if the learner knows the distribution, can draw a sample according to the distribution in polynomial time and can compute the target function on all the points of the support of the distribution in polynomial time. In this paper, we give the same lower bounds for PAC-learning of the classes \(n\)-variable size-\(s\) Monotone DNF, size-\(s\) Monotone Decision Tree and Monotone \(\log s\)-Junta by DNF. This solves the open problem proposed by Koch, Strassle, and Tan [9]. ### Our Results In this paper, we prove the following three Theorems. 
**Theorem 1**.: _Assuming randomized ETH, there is a constant \(c\) such that any PAC learning algorithm for \(n\)-variable Monotone \((\log s)\)-Junta, size-\(s\) Monotone DT and size-\(s\) Monotone DNF by_ DNF _with \(\epsilon=1/(16n)\) must take at least_ \[n^{c\frac{\log\log s}{\log\log\log s}}\] _time._ **Theorem 2**.: _Assuming a plausible conjecture on the hardness of Set-Cover 1, there is a constant \(c\) such that any PAC learning algorithm for \(n\)-variable Monotone \((\log s)\)-Junta, size-\(s\) Monotone DT and size-\(s\) Monotone DNF by_ DNF _with \(\epsilon=1/(16n)\) must take at least_ Footnote 1: See Conjecture 1. \[n^{c\log s}\] _time._ **Theorem 3**.: _Assuming randomized ETH, there is a constant \(c\) such that any PAC learning algorithm for \(n\)-variable Monotone \((\log s)\)-Junta, size-\(s\) Monotone DT and size-\(s\) Monotone DNF by size-\(s\) DNF with \(\epsilon=1/(16n)\) must take at least_ \[n^{c\log s}\] _time._ All the above lower bounds hold, even if the learner knows the distribution, can draw a sample according to the distribution in polynomial time and can compute the target on all the points of the support of the distribution in polynomial time. In the following two subsections, we give the technique used in [9] to prove Theorem 1 for \((\log s)\)-Junta, and the technique we use here to extend the result to Monotone \((\log s)\)-Junta. ### Previous Technique In [9], Koch, Strassle, and Tan show that under the randomized exponential time hypothesis, there is no PAC-learning algorithm that runs in time \(n^{\tilde{O}(\log\log n)}\) for the class of \(\log n\)-Junta2 by DNF. The results for the other classes follow immediately from this result, since all other classes contain \(\log n\)-Junta. All prior works [1, 5] ruled out only \(poly(n)\) time algorithms. Footnote 2: \(k\)-Juntas are Boolean functions that depend on at most \(k\) variables The result in [9] uses the hardness result of \((k,k^{\prime})\)-Set-Cover, where one needs to distinguish instances that have a set cover of size at most \(k\) from instances that have a minimum-size set cover greater than \(k^{\prime}\): 1. For some parameters \(k\) and \(k^{\prime}\), assuming randomized ETH, there is a constant \(\lambda<1\) such that \((k,k^{\prime})\)-Set-Cover cannot be solved in time \(n^{\lambda k}\). First, for each set cover instance \(\mathcal{S}\), they identify each element in the universe with an assignment in \(\{0,1\}^{n}\) and construct in polynomial time a target function \(\Gamma^{\mathcal{S}}:\{0,1\}^{n}\to\{0,1\}\) and a distribution \(\mathcal{D}^{\mathcal{S}}\) that satisfy: 2. The instance \(\mathcal{S}\) has minimum-size set cover \(\operatorname{opt}(\mathcal{S})\) if and only if the function \(\Gamma^{\mathcal{S}}\) is a conjunction of \(\operatorname{opt}(\mathcal{S})\) unnegated variables3 over the distribution \(\mathcal{D}^{\mathcal{S}}\).4 Footnote 3: Their reduction gives a conjunction of negated variables. So here, we are referring to the dual function. Footnote 4: That is, there is a term \(T\) with \(\operatorname{opt}(\mathcal{S})\) variables such that for every \(x\) in the support of \(\mathcal{D}^{\mathcal{S}}\), \(\Gamma^{\mathcal{S}}(x)=T(x)\). For a DNF \(F\) and \(x\in\{0,1\}^{n}\), they define \(\operatorname{width}_{F}(x)\) to be the size of the smallest term \(T\) in \(F\) that satisfies \(T(x)=1\). They then show that 1. 
Any DNF \(F\) with expected width \(\mathbf{E}_{\boldsymbol{x}\sim\mathcal{D}^{\mathcal{S}}}[\operatorname{width }_{F}(\boldsymbol{x})]\leq\operatorname{opt}(\mathcal{S})/2\) is \((1/(2N))\)-far from \(\Gamma^{\mathcal{S}}\) with respect to \(\mathcal{D}^{\mathcal{S}}\) where \(N\) is the size5 of \(\mathcal{S}\). That is, \(\mathbf{Pr}_{\boldsymbol{x}\sim\mathcal{D}^{\mathcal{S}}}[F(\boldsymbol{x}) \neq\Gamma^{\mathcal{S}}(\boldsymbol{x})]\geq 1/(2N)\). Footnote 5: \(N\) is the number of sets plus the size of the universe in \(\mathcal{S}\). They then use the following gap amplification technique. They define the function \(\Gamma^{\mathcal{S}}_{\oplus\ell}:(\{0,1\}^{\ell})^{n}\to\{0,1\}\) where for \(y=(y_{1},\ldots,y_{n})\), \(y_{i}=(y_{i,1},\ldots,y_{i,\ell})\in\{0,1\}^{\ell}\), \(i\in[n]\), we have \(\Gamma^{\mathcal{S}}_{\oplus\ell}(y)=\Gamma^{\mathcal{S}}(\oplus y_{1},\ldots,\oplus y_{n})\) and \(\oplus y_{i}=y_{i,1}+\cdots+y_{i,\ell}\). They also extend the distribution \(\mathcal{D}^{\mathcal{S}}\) to a distribution \(\mathcal{D}^{\mathcal{S}}_{\oplus\ell}\) over domain \((\{0,1\}^{\ell})^{n}\) and prove that 1. \(\Gamma^{\mathcal{S}}_{\oplus\ell}(y)\) is a \((\operatorname{opt}(\mathcal{S})\ell)\)-Junta over \(\mathcal{D}^{\mathcal{S}}_{\oplus\ell}\). 2. Any DNF formula \(F\) with expected depth \(\mathbf{E}_{\boldsymbol{y}\sim\mathcal{D}^{\mathcal{S}}_{\oplus\ell}}[ \operatorname{width}_{F}(\boldsymbol{y})]\leq\operatorname{opt}(\mathcal{S}) \ell/4\) is \((1/(4N))\)-far from \(\Gamma^{\mathcal{S}}_{\oplus\ell}\) with respect to \(\mathcal{D}^{\mathcal{S}}_{\oplus\ell}\). Item 4 follows from the definition of \(\Gamma^{\mathcal{S}}_{\oplus\ell}\) and item 2. To prove Item 5, they show that if, to the contrary, there is a DNF \(F\) of expected width at most \(\operatorname{opt}(\mathcal{S})\ell/4\) that is \(1/(4N)\)-close to \(\Gamma^{\mathcal{S}}_{\oplus\ell}\) with respect to \(\mathcal{D}^{\mathcal{S}}_{\oplus\ell}\), then there is \(j\in[\ell]\) and a projection of all the variables that are not of the form \(y_{i,j}\) that gives a DNF \(F^{*}\) of expected width at most \(\operatorname{opt}(\mathcal{S})/2\) that is \(1/(2N)\)-close to \(\Gamma^{\mathcal{S}}\) with respect to \(\mathcal{D}^{\mathcal{S}}\). Then, by item 3, we get a contradiction. They then show that 1. Any size-\(s\) DNF that is \((1/(4N))\)-close to \(\Gamma^{\mathcal{S}}_{\oplus\ell}\) with respect to \(\mathcal{D}^{\mathcal{S}}_{\oplus\ell}\) has average width \(\mathbf{E}_{\boldsymbol{y}\sim\mathcal{D}^{\mathcal{S}}_{\oplus\ell}}\) \([\operatorname{width}_{F}(\boldsymbol{y})]\leq 4\log s\). If \(F\) is \((1/(4N))\)-close to \(\Gamma^{\mathcal{S}}_{\oplus\ell}\) with respect to \(\mathcal{D}^{\mathcal{S}}_{\oplus\ell}\), then, by items 5 and 6, \(4\log s\geq\mathbf{E}_{\boldsymbol{y}\sim\mathcal{D}^{\mathcal{S}}_{\oplus\ell}}\)\([\operatorname{width}_{F}(\boldsymbol{y})]\geq\operatorname{opt}(\mathcal{S})\ell/4\) and then \(s\geq 2^{\operatorname{opt}(\mathcal{S})\ell/16}\). Therefore, 1. Any DNF of size less than \(2^{\operatorname{opt}(\mathcal{S})\ell/16}\) is \((1/(4N))\)-far from \(\Gamma^{\mathcal{S}}_{\oplus\ell}\) with respect to \(\mathcal{D}^{\mathcal{S}}_{\oplus\ell}\). Now, let \(k=\tilde{O}(\log\log n)\). Suppose, to the contrary, that there is a PAC-learning algorithm for \(\log n\)-Junta by DNF with error \(\epsilon=1/(8N)\) that runs in time \(t=n^{\lambda k/2}=n^{\tilde{O}(\log\log n)}\), where \(\lambda\) is the constant in item 1. 
Given a \((k,k^{\prime})\)-Set-Cover instance, we run the learning algorithm for \(\Gamma^{\mathcal{S}}_{\oplus\ell}\) for \(\ell=\log n/k\). If the instance has set cover at most \(k\), then by item 4, \(\Gamma^{\mathcal{S}}_{\oplus\ell}\) is \(\log n\)-Junta. Then the algorithm learns the target and outputs a hypothesis that is \((1/(8N))\)-close to \(\Gamma^{\mathcal{S}}_{\oplus\ell}\) with respect to \(\mathcal{D}^{\mathcal{S}}_{\oplus\ell}\). On the other hand, if the instance has a minimum-size set cover of at least \(k^{\prime}\), then any learning algorithm that runs in time \(t=n^{\lambda k/2}=n^{\tilde{O}(\log\log n)}\) cannot output a DNF of size more than \(t\) terms. By item 7, any DNF of size less than \(2^{k^{\prime}\log n/(16k)}\leq 2^{\operatorname{opt}(\mathcal{S})\ell/16}\) is \((1/(4N))\)-far from \(\Gamma^{\mathcal{S}}_{\oplus\ell}\) with respect to \(\mathcal{D}^{\mathcal{S}}_{\oplus\ell}\). By choosing the right parameters \(k\) and \(k^{\prime}\), we have \(2^{k^{\prime}\log n/(16k)}>t\), and therefore, any DNF that the algorithm outputs has error of at least \(1/(4N)\). Therefore, by estimating the distance of the output of the learning algorithm from \(\Gamma^{\mathcal{S}}_{\oplus\ell}\) with respect to \(\mathcal{D}^{\mathcal{S}}_{\oplus\ell}\), we can distinguish between instances that have set cover of size less than or equal to \(k\) from instances that have a minimum-size set cover greater than \(k^{\prime}\) in time \(t=n^{\lambda k/2}\). Thus, we got an algorithm for \((k,k^{\prime})\)-Set-Cover that runs in time \(n^{\lambda k/2}<n^{\lambda k}\). This contradicts item 1 and finishes the proof of the first lower bound. Assuming a natural conjecture on the hardness of set cover, they give the lower bound \(n^{\Omega(\log s)}\). We will discuss this in Section 5. ### Our Technique In this paper, we also use the hardness result of \((k,k^{\prime})\)-Set-Cover. As in [9], we identify each element in the universe with an assignment in \(\{0,1\}^{n}\) and use the function \(\Gamma^{\mathcal{S}}\) and the distribution \(\mathcal{D}^{\mathcal{S}}\) that satisfies: 1. The instance \(\mathcal{S}\) has minimum-size set cover \(\operatorname{opt}(\mathcal{S})\) if and only if the function \(\Gamma^{\mathcal{S}}\) is a conjunction of \(\operatorname{opt}(\mathcal{S})\) variables over the distribution \(\mathcal{D}^{\mathcal{S}}\). We then build a monotone target function \(\Gamma^{\mathcal{S}}_{\ell}\) and use a different approach to show that any DNF of size less than \(2^{\operatorname{opt}(\mathcal{S})\ell/20}\) is \((1/(8N)-2^{-\operatorname{opt}(\mathcal{S})\ell/20})\)-far from \(\Gamma^{\mathcal{S}}_{\ell}\) with respect to \(\mathcal{D}^{\mathcal{S}}_{\ell}\). We define, for any odd \(\ell\), the monotone function \(\Gamma^{\mathcal{S}}_{\ell}:(\{0,1\}^{\ell})^{n}\to\{0,1\}\) where for \(y=(y_{1},\ldots,y_{n})\), \(y_{i}=(y_{i,1},\ldots,y_{i,\ell})\) we have \(\Gamma^{\mathcal{S}}_{\ell}(y)=\Gamma^{\mathcal{S}}(\textsc{Majority}(y_{1}), \ldots,\textsc{Majority}(y_{n}))\) where Majority is the majority function. A distribution \(\mathcal{D}^{\mathcal{S}}_{\ell}\) is also defined such that 1. \(\mathbf{Pr}_{\boldsymbol{y}\sim\mathcal{D}^{\mathcal{S}}_{\ell}}[\Gamma^{ \mathcal{S}}_{\ell}(\boldsymbol{y})=0]=\mathbf{Pr}_{\boldsymbol{y}\sim \mathcal{D}^{\mathcal{S}}_{\ell}}[\Gamma^{\mathcal{S}}_{\ell}(\boldsymbol{y})= 1]=1/2\). It is clear from the definition of \(\Gamma^{\mathcal{S}}_{\ell}\) and item 1 that 1. 
\(\Gamma^{\mathcal{S}}_{\ell}(y)\) is a monotone \((\operatorname{opt}(\mathcal{S})\ell)\)-Junta over \(\mathcal{D}^{\mathcal{S}}_{\ell}\). We then define the _monotone size_ of a term \(T\) to be the number of unnegated variables that appear in \(T\). We first show that 1. For every DNF \(F:(\{0,1\}^{\ell})^{n}\to\{0,1\}\) of size \(|F|\leq 2^{\operatorname{opt}(\mathcal{S})\ell/5}\) that is \(\epsilon\)-far from \(\Gamma^{\mathcal{S}}_{\ell}\) with respect to \(\mathcal{D}^{\mathcal{S}}_{\ell}\), there is another DNF \(F^{\prime}\) of size \(|F^{\prime}|\leq 2^{\operatorname{opt}(\mathcal{S})\ell/5}\) with terms of monotone size at most \(\operatorname{opt}(\mathcal{S})\ell/5\) that is \((\epsilon-2^{-\operatorname{opt}(\mathcal{S})\ell/20})\)-far from \(\Gamma^{\mathcal{S}}_{\ell}\) with respect to \(\mathcal{D}^{\mathcal{S}}_{\ell}\). This is done by simply showing that terms of large monotone size in the DNF \(F\) have a small weight according to the distribution \(\mathcal{D}^{\mathcal{S}}_{\ell}\) and, therefore, can be removed from \(F\) with the cost of \(-2^{-\operatorname{opt}(\mathcal{S})\ell/20}\) in the error. We then, roughly speaking, show that 1. Let \(F^{\prime}\) be a DNF of size \(|F^{\prime}|\leq 2^{\operatorname{opt}(\mathcal{S})\ell/5}\) with terms of monotone size at most \(\operatorname{opt}(\mathcal{S})\ell/5\). For every \(y\in(\{0,1\}^{\ell})^{n}\) in the support of \(\mathcal{D}^{\mathcal{S}}_{\ell}\) that satisfies \(\Gamma^{\mathcal{S}}_{\ell}(y)=1\), either * \(F^{\prime}(y)=0\) or * \(F^{\prime}(y)=1\), and at least \(1/(2N)\) fraction of the points \(z\) below \(y\) in the lattice \((\{0,1\}^{\ell})^{n}\) that are in the support of \(\mathcal{D}^{\mathcal{S}}_{\ell}\) satisfies \(F^{\prime}(z)=1\) and \(\Gamma^{\mathcal{S}}_{\ell}(z)=0\). By item 5, either \(1/(4N)\) fraction of the vectors \(y\) that satisfy \(\Gamma_{\ell}^{\mathcal{S}}(y)=1\) satisfy \(F^{\prime}(y)=0\) or \((1-1/(4N))/(2N)>1/(4N)\) fraction of the points \(z\) that satisfy \(\Gamma_{\ell}^{\mathcal{S}}(z)=0\) satisfy \(F^{\prime}(z)=1\). Therefore, with item 2, we get that \(F^{\prime}\) is \(1/(8N)\)-far from \(\Gamma_{\ell}^{\mathcal{S}}\) with respect to \(\mathcal{D}_{\ell}^{\mathcal{S}}\). This, with item 4, implies that 1. If \(F:(\{0,1\}^{\ell})^{n}\to\{0,1\}\) is a DNF of size \(|F|<2^{\mathrm{opt}(\mathcal{S})\ell/20}\), then \(F\) is \((1/(8N)-2^{-\mathrm{opt}(\mathcal{S})\ell/20})\)-far from \(\Gamma_{\ell}^{\mathcal{S}}\) with respect to \(\mathcal{D}_{\ell}^{\mathcal{S}}\). The rest of the proof is almost the same as in [9]. See the discussion in subsection 1.1 after item 7. ### Upper Bounds The only known distribution-free algorithm for \(\log s\)-Junta is the trivial algorithm that, for every set of \(m=\log s\) variables \(S=\{x_{i_{1}},\ldots,x_{i_{m}}\}\), checks if there is a function that depends on \(S\) and is consistent with the examples. This algorithm takes \(n^{O(\log s)}\) time. For size-\(s\) decision tree and monotone size-\(s\) decision tree, the classic result of Ehrenfeucht and Haussler [4] gives a distribution-free time algorithm that runs in time \(n^{O(\log s)}\) and outputs a decision tree of size \(n^{O(\log s)}\). The learning algorithm is as follows: Let \(T\) be the target decision tree of size \(s\). First, the algorithm guesses the variable at the root of the tree \(T\) and then guesses which subtree of the root has size at most \(s/2\). Then, it recursively constructs the tree of size \(s/2\). 
When it succeeds, it continues to construct the other subtree. For size-\(s\) DNF and monotone size-\(s\) DNF, Hellerstein et al. [6] gave a distribution-free proper learning algorithm that runs in time \(2^{\tilde{O}(\sqrt{n})}\). To the best of our knowledge, all the other results in the literature for learning the above classes are either restricted to the uniform distribution or use, in addition, a black box queries or returns hypotheses that are not DNF. ## 2 Definitions and Preliminaries In this section, we give the definitions and preliminary results that are needed to prove our results. ### Set Cover Let \(\mathcal{S}=(S,U,E)\) be a bipartite graph on \(N=n+|U|\) vertices where \(S=[n]\), and for every \(u\in U\), \(\deg(u)>0\). We say that \(C\subseteq S\) is a set cover of \(\mathcal{S}\) if every vertex in \(U\) is adjacent to some vertex in \(C\). The Set-Cover problem is to find a minimum-size set cover. We denote by \(\mathrm{opt}(\mathcal{S})\) the size of a minimum-size set cover for \(\mathcal{S}\). We identify each element \(u\in U\) with the vector \((u_{1},\ldots,u_{n})\in\{0,1\}^{n}\) where \(u_{i}=0\) if and only if \((i,u)\in E\). We will assume that those vectors are distinct. If there are two distinct elements \(u,u^{\prime}\in U\) that have the same vector, then you can remove one of them from the graph. This is because every set cover that covers one of them covers the other. **Definition 1**.: _The \((k,k^{\prime})\)-Set-Cover problem is the following: Given as input a set cover instance \(\mathcal{S}=(S,U,E)\), and parameters \(k\) and \(k^{\prime}\). Output Yes if \(\mathrm{opt}(\mathcal{S})\leq k\) and No if \(\mathrm{opt}(\mathcal{S})>k^{\prime}\)._ ### Hardness of Set-Cover Our results are conditioned on the following randomized exponential time hypothesis (ETH) **Hypothesis:**[2, 3, 7, 8, 11]. There exists a constant \(c\in(0,1)\) such that 3-SAT on \(n\) variables cannot be solved by a randomized algorithm in \(O(2^{cn})\) time with success probability at least \(2/3\). The following is proved in [10]. See also Theorem 7 in [9] **Lemma 1**.: _[_10_]__. Let \(k\leq\frac{1}{2}\frac{\log\log N}{\log\log\log N}\) and \(k^{\prime}=\frac{1}{2}\left(\frac{\log N}{\log\log N}\right)^{1/k}\) be two integers. Assuming randomized ETH, there is a constant \(\lambda\in(0,1)\) such that there is no randomized \(N^{\lambda k}\) time algorithm that can solve \((k,k^{\prime})\)-Set-Cover on \(N\) vertices with high probability._ ### Concept Classes For the lattice \(\{0,1\}^{n}\), and \(x,y\in\{0,1\}^{n}\), we define the partial order \(x\leq y\) if \(x_{i}\leq y_{i}\) for every \(i\). When \(x\leq y\) and \(x\neq y\), we write \(x<y\). If \(x<y\), we say that \(x\) is _below_\(y\), or \(y\) is _above_\(x\). A Boolean function \(f:\{0,1\}^{n}\to\{0,1\}\) is _monotone_ if, for every \(x\leq y\), we have \(f(x)\leq f(y)\). A _literal_ is a variable or negated variable. A _term_ is a conjunction (\(\wedge\)) of literals. A _clause_ is a disjunction (\(\vee\)) of literals. A _monotone term_ (resp. clause) is a conjunction (resp. disjunction) of unnegated variables. The _size_ of a term \(T\), \(|T|\), is the number of literals in the term \(T\). A DNF (resp. CNF) is a disjunction (resp. conjunction) of terms (resp. clauses). The _size_\(|F|\) of a DNF (resp. CNF) \(F\) is the number of terms (resp. clauses) in \(F\). A _monotone DNF_ (resp. monotone CNF) is a DNF (resp. CNF) with monotone terms (resp. clauses). We define the following classes 1. 
size-\(s\) DNF and size-\(s\) Monotone DNF are the classes of DNF and monotone DNF, respectively, of size at most \(s\). 2. size-\(s\) DT and size-\(s\) Monotone DT are the classes of decision trees and monotone decision trees, respectively, with at most \(s\) leaves. 3. \(k\)-Junta and Monotone \(k\)-Junta are the classes of Boolean functions and monotone Boolean functions that depend on at most \(k\) variables. It is well known that \[\mbox{Monotone }(\log s)\mbox{-Junta}\subset\mbox{size-$s$ Monotone DT}\subset\mbox{size-$s$ Monotone DNF}. \tag{1}\] ### Functions and Distributions For any set \(R\), we define \(\mathcal{U}(R)\) to be the uniform distribution over \(R\). For a distribution \(\mathcal{D}\) over \(\{0,1\}^{n}\) and two Boolean functions \(f\) and \(g\), we define \(\operatorname{dist}_{\mathcal{D}}(f,g)=\mathbf{Pr}_{\boldsymbol{x}\sim\mathcal{D}}[f(\boldsymbol{x})\neq g(\boldsymbol{x})]\). Here, bold letters denote random variables. If \(\operatorname{dist}_{\mathcal{D}}(f,g)=0\), then we say that \(f=g\) _over_ \(\mathcal{D}\). For a class of functions \(C\), we say that \(f\) is \(C\) over \(\mathcal{D}\) if there is a function \(g\in C\) such that \(f=g\) over \(\mathcal{D}\). **Definition 2**.: _(\(\Gamma^{\mathcal{S}}\) and \(\mathcal{D}^{\mathcal{S}}\)) Let \(\mathcal{S}=(S,U,E)\) be a set cover instance with \(S=[n]\). Recall that we identify each element \(u\in U\) with the vector \((u_{1},\ldots,u_{n})\in\{0,1\}^{n}\) where \(u_{i}=0\) if and only if \((i,u)\in E\). We define the partial function \(\Gamma^{\mathcal{S}}:\{0,1\}^{n}\to\{0,1\}\) where \(\Gamma^{\mathcal{S}}(x)=0\) if \(x\in U\) and \(\Gamma^{\mathcal{S}}(1^{n})=1\). We define the distribution \(\mathcal{D}^{\mathcal{S}}\) over \(\{0,1\}^{n}\) where \(\mathcal{D}^{\mathcal{S}}(x)=1/2\) if \(x=1^{n}\), \(\mathcal{D}^{\mathcal{S}}(x)=1/(2|U|)\) if \(x\in U\), and \(\mathcal{D}^{\mathcal{S}}(x)=0\) otherwise. We will remove the superscript \(\mathcal{S}\) when it is clear from the context and write \(\Gamma\) and \(\mathcal{D}\)._ **Fact 1**.: _We have_ 1. \(C\subseteq S\) _is a set cover of_ \(\mathcal{S}=(S,U,E)\) _if and only if_ \(\Gamma(x)=\bigwedge_{i\in C}x_{i}\) _over_ \(\mathcal{D}\)_._ 2. _In particular, if_ \(T\) _is a monotone term of size_ \(|T|<\operatorname{opt}(\mathcal{S})\)_, then there is_ \(u\in U\) _such that_ \(T(u)=1\)_._ Proof.: Let \(C\) be a set cover of \(\mathcal{S}\). First, we have \(\Gamma(1^{n})=1\). Now, since \(C\) is a set cover, every vertex \(u\in U\) is adjacent to some vertex in \(C\). This is equivalent to: for every assignment \(u\in U\), there is \(i\in C\) such that \(u_{i}=0\). Therefore, \(\wedge_{i\in C}u_{i}=0\) for all \(u\in U\). Thus, \(\Gamma(x)=\bigwedge_{i\in C}x_{i}\) over \(\mathcal{D}\). The other direction can be easily seen by tracing backward in the above proof. For an odd \(\ell\), define \(\Delta^{0}=\{a\in\{0,1\}^{\ell}|\operatorname{wt}(a)=\lfloor\ell/2\rfloor\}\) and \(\Delta^{1}=\{a\in\{0,1\}^{\ell}|\operatorname{wt}(a)=\lceil\ell/2\rceil\}\), where \(\operatorname{wt}(a)\) is the Hamming weight of \(a\). Notice that \(|\Delta^{0}|=|\Delta^{1}|=\binom{\ell}{\lfloor\ell/2\rfloor}\). **Definition 3**.: _(\(\Gamma_{\ell}\), \(\mathcal{D}_{\ell}\), \(\Delta^{0}_{n}\) and \(\Delta^{1}_{n}\)) For an odd \(\ell\), define \(\Delta^{1}_{n}=(\Delta^{1})^{n}\) and6\(\Delta^{0}_{n}:=\cup_{u\in U}\prod_{i=1}^{n}\Delta^{u_{i}}=\cup_{u\in U}(\Delta^{u_{1}}\times\Delta^{u_{2}}\times\cdots\times\Delta^{u_{n}})\). 
Define the distribution \(\mathcal{D}_{\ell}:(\{0,1\}^{\ell})^{n}\to[0,1]\) to be \(\mathcal{D}_{\ell}(y)=1/(2|\Delta^{1}_{n}|)=1/(2|\Delta^{1}|^{n})\) if \(y\in\Delta^{1}_{n}\), \(\mathcal{D}_{\ell}(y)=1/(2|\Delta^{0}_{n}|)=1/(2|U|\cdot|\Delta^{0}|^{n})\) if \(y\in\Delta^{0}_{n}\), and \(\mathcal{D}_{\ell}(y)=0\) otherwise. We define the partial function \(\Gamma_{\ell}\) over the support \(\Delta^{0}_{n}\cup\Delta^{1}_{n}\) of \(\mathcal{D}_{\ell}\) to be \(1\) if \(y\in\Delta^{1}_{n}\) and \(0\) if \(y\in\Delta^{0}_{n}\)._ Footnote 6: Here \(\Delta^{\xi}=\Delta^{0}\) if \(\xi=0\) and \(\Delta^{1}\) if \(\xi=1\). We note here that the distribution \(\mathcal{D}_{\ell}\) is well-defined. This is because: First, the sum of the distribution of the points in \(\Delta^{1}_{n}\) is \(1/2\). Second, for two different \(u,u^{\prime}\in U\), we have that \(\prod_{i=1}^{n}\Delta^{u_{i}}\) and \(\prod_{i=1}^{n}\Delta^{u^{\prime}_{i}}\) are disjoint sets. Therefore, \(|\Delta^{0}_{n}|=|U|\cdot|\Delta^{0}|^{n}\), and therefore, the sum of the distribution of all the points in \(\Delta^{0}_{n}\) is also \(1/2\). In particular, **Fact 2**.: _We have \(\underset{\boldsymbol{y}\sim\mathcal{D}_{\ell}}{\mathbf{Pr}}[\Gamma_{\ell}(\boldsymbol{y})=1]=\underset{\boldsymbol{y}\sim\mathcal{D}_{\ell}}{\mathbf{Pr}}[\Gamma_{\ell}(\boldsymbol{y})=0]=\underset{\mathcal{D}_{\ell}}{\mathbf{Pr}}[\Delta^{1}_{n}]=\underset{\mathcal{D}_{\ell}}{\mathbf{Pr}}[\Delta^{0}_{n}]=\frac{1}{2}\)._ For \(y\in(\{0,1\}^{\ell})^{n}\), we write \(y=(y_{1},\ldots,y_{n})\), where \(y_{j}=(y_{j,1},y_{j,2},\ldots,y_{j,\ell})\in\{0,1\}^{\ell}\). Let \((\textsc{Majority}(y_{i}))_{i\in[n]}=(\textsc{Majority}(y_{1}),\ldots,\textsc{Majority}(y_{n}))\) where Majority is the majority function. **Fact 3**.: _If \(C\subseteq S\) is a set cover of \(\mathcal{S}\), then \(\Gamma_{\ell}(y)=\Gamma((\textsc{Majority}(y_{i}))_{i\in[n]})=\bigwedge_{i\in C}\textsc{Majority}(y_{i})\) over \(\mathcal{D}_{\ell}\). In particular, \(\Gamma_{\ell}\) is Monotone \(\operatorname{opt}(\mathcal{S})\ell\)-Junta over \(\mathcal{D}_{\ell}\)._ Proof.: First notice that \(\textsc{Majority}(x)=1\) if \(x\in\Delta^{1}\) and \(\textsc{Majority}(x)=0\) if \(x\in\Delta^{0}\). Therefore, for \(x\in\Delta^{\xi}\), \(\xi\in\{0,1\}\), we have \(\textsc{Majority}(x)=\xi\). For \(y\in\Delta^{1}_{n}=(\Delta^{1})^{n}\), \((\textsc{Majority}(y_{i}))_{i\in[n]}=1^{n}\) and \(\Gamma_{\ell}(y)=1=\Gamma(1^{n})\). For \(y\in\Delta^{0}_{n}=\cup_{u\in U}(\Delta^{u_{1}}\times\Delta^{u_{2}}\times\cdots\times\Delta^{u_{n}})\), there is \(u\) such that \(y\in\Delta^{u_{1}}\times\Delta^{u_{2}}\times\cdots\times\Delta^{u_{n}}\). Then, \((\textsc{Majority}(y_{i}))_{i\in[n]}=u\) and \(\Gamma_{\ell}(y)=\Gamma((\textsc{Majority}(y_{i}))_{i\in[n]})=\Gamma(u)=0\). For \(t\in[\ell],\xi\in\{0,1\}\) and \(u\in\{0,1\}^{\ell}\), we define \(u^{t\leftarrow\xi}\in\{0,1\}^{\ell}\) the vector that satisfies \[u^{t\leftarrow\xi}_{i}=\left\{\begin{array}{ll}u_{i}&i\neq t\\ \xi&i=t\end{array}\right..\] Let \(z\in(\{0,1\}^{\ell})^{n}\). For \(j\in[\ell]^{n}\) and \(a\in\{0,1\}^{n}\), define \(z^{j\gets a}=(z^{j_{1}\gets a_{1}}_{1},\ldots,z^{j_{n}\gets a_{n}}_{n})\). For a set \(V\subseteq\{0,1\}^{n}\), we define \(z^{j\gets V}=\{z^{j\gets v}|v\in V\}\). We define \(\operatorname{one}(z)=\prod_{i=1}^{n}\{m_{i}|z_{i,m_{i}}=1\}=\{m_{1}|z_{1,m_{1}}=1\}\times\cdots\times\{m_{n}|z_{n,m_{n}}=1\}\). **Fact 4**.: _Let \(w\in\Delta^{1}_{n}\), \(j\in\operatorname{one}(w)\), and \(T\) be a term that satisfies \(T(w)=1\). Then_ 1. 
\(w^{j\gets U}\subseteq\Delta^{0}_{n}\)_._ 2. \(|w^{j\gets U}|=|U|\)_._ 3. _If_ \(T^{j}(y_{1,j_{1}},\ldots,y_{n,j_{n}})\) _is the conjunction of all the variables that appear in_ \(T\) _of the form_ \(y_{i,j_{i}}\)_, then_ \(T(w^{j\gets a})=T^{j}(a)\)_._ Proof.: We first prove item 1. Let \(u\in U\) and \(i\) be any integer in \([n]\). Since \(w\in\Delta^{1}_{n}\), we have \(w_{i}\in\Delta^{1}\). Since \(j\in one(w)\), we have \(w_{i,j_{i}}=1\). Therefore, \(w_{i}^{j_{i}\gets u_{i}}\in\Delta^{u_{i}}_{n}\) for all \(i\in[n]\) and \(w^{j\gets u}\in\prod_{i=1}^{n}\Delta^{u_{i}}\). Thus, \(w^{j\gets u}\in\Delta^{0}_{n}\) for all \(u\in U\). To prove item 2, let \(u,u^{\prime}\) be two distinct elements of \(U\). There is \(i\) such that \(u_{i}\neq u^{\prime}_{i}\). Therefore \(w_{i}^{j_{i}\gets u_{i}}\neq w_{i}^{j_{i}\gets u^{\prime}_{i}}\) and \(w^{j\gets u}\neq w^{j\gets u^{\prime}}\). We now prove item 3. Let \(T^{\prime}\) be the conjunction of all the variables that appear in \(T\) that are not of the form \(y_{i,j_{i}}\). Then \(T=T^{\prime}\wedge T^{j}\). Since \(T(w)=1\), we have \(T^{\prime}(w)=1\). Since the entries of \(w^{j\gets a}\) are equal to those in \(w\) on all the variables that are not of the form \(y_{i,j_{i}}\), we have \(T^{\prime}(w^{j\gets a})=1\). Therefore, \(T(w^{j\gets a})=T^{\prime}(w^{j\gets a})\wedge T^{j}(w_{1,j_{1}}^{j \gets a},\ldots,w_{n,j_{n}}^{j\gets a})=T^{j}(a)\). We now give a different way of sampling according to the distribution \(\mathcal{D}_{\ell}\). **Fact 5**.: _Let \(\mathcal{S}\) be a Set-Cover instance. The following is an equivalent way of sampling from \(\mathcal{D}_{\ell}\)._ 1. _Draw_ \(\xi\in\{0,1\}\) _u.a.r._7__ Footnote 7: Uniformly at random. 2. _Draw_ \(w\in\Delta^{1}_{n}\) _u.a.r._ 3. _If_ \(\xi=1\) _then output_ \(y=w\)_._ 4. _If_ \(\xi=0\) _then_ 1. _draw_ \(j\in\operatorname{one}(w)\) _u.a.r._ 2. _draw_ \(v\in w^{j\gets U}\) _u.a.r._ 3. _output_ \(y=v\)_._ _In particular, for any event \(X\),_ \[\operatorname*{\mathbf{Pr}}_{\boldsymbol{y}\sim\mathcal{U}(\Delta^{0}_{n})}[X] =\operatorname*{\mathbf{Pr}}_{\boldsymbol{w}\sim\mathcal{U}(\Delta^{1}_{n}), \boldsymbol{j}\sim\mathcal{U}(\operatorname{one}(\boldsymbol{w})), \boldsymbol{y}\sim\mathcal{U}(\boldsymbol{w}^{j\gets U})}[X].\] Proof.: Denote the above distribution by \(\mathcal{D}^{\prime}\). By Item 1 in Fact 4, if \(w\in\Delta^{1}_{n}\) and \(j\in\operatorname{one}(w)\), then \(w^{j\gets U}\subseteq\Delta^{0}_{n}\). Therefore, for \(z\in\Delta^{1}_{n}\), \(\operatorname*{\mathbf{Pr}}_{\boldsymbol{y}\sim\mathcal{D}^{\prime}}[ \boldsymbol{y}=z|\boldsymbol{\xi}=0]=0\) and then \[\operatorname*{\mathbf{Pr}}_{\boldsymbol{y}\sim\mathcal{D}^{\prime}}[ \boldsymbol{y}=z]=\operatorname*{\mathbf{Pr}}_{\boldsymbol{\xi}\sim\mathcal{U} (\{0,1\})}[\boldsymbol{\xi}=1]\cdot\operatorname*{\mathbf{Pr}}_{\boldsymbol {y}\sim\mathcal{U}(\Delta^{1}_{n})}[\boldsymbol{y}=z]=\frac{1}{2|\Delta^{1}_{n }|}=\frac{1}{2|\Delta^{1}_{n}|^{n}}.\] For \(z\in\Delta^{0}_{n}\), suppose \(z\in\Delta^{u_{1}}\times\cdots\times\Delta^{u_{n}}\) where \(u\in U\). In the sampling according to \(\mathcal{D}^{\prime}\) and when \(\xi=0\), since for \(j\in\operatorname{one}(w)\), the elements of \(w^{j\gets U}\) are below \(w\), we have \(\operatorname*{\mathbf{Pr}}_{\boldsymbol{y}\sim\mathcal{D}^{\prime}}[ \boldsymbol{y}=z|\boldsymbol{w}\not\succ z]=0\). 
Therefore, \[\operatorname*{\mathbf{Pr}}_{\boldsymbol{y}\sim\mathcal{D}^{\prime}}[\boldsymbol{y}=z]=\operatorname*{\mathbf{Pr}}_{\boldsymbol{\xi}\sim\mathcal{U}(\{0,1\})}[\boldsymbol{\xi}=0]\cdot\operatorname*{\mathbf{Pr}}_{\boldsymbol{w}\sim\mathcal{U}(\Delta^{1}_{n})}[\boldsymbol{w}>z]\cdot\operatorname*{\mathbf{Pr}}_{\boldsymbol{j}\sim\mathcal{U}(\operatorname{one}(\boldsymbol{w}))}[z\in\boldsymbol{w}^{\boldsymbol{j}\gets U}|\boldsymbol{w}>z,\boldsymbol{w}\in\Delta^{1}_{n}]\cdot\operatorname*{\mathbf{Pr}}_{\boldsymbol{v}\sim\mathcal{U}(\boldsymbol{w}^{j\gets U})}[\boldsymbol{v}=z|z\in\boldsymbol{w}^{\boldsymbol{j}\gets U}]. \tag{2}\] Now, since, for \(x\in\Delta^{0}\), the number of elements in \(\Delta^{1}\) that are above \(x\) is \(\lceil\ell/2\rceil\), we have that the number of \(w\in\Delta^{1}_{n}=(\Delta^{1})^{n}\) that are above \(z\in\Delta^{u_{1}}\times\cdots\times\Delta^{u_{n}}\) is \(\lceil\ell/2\rceil^{n-\operatorname{wt}(u)}\). Therefore, \[\underset{\boldsymbol{w}\sim\mathcal{U}(\Delta^{1}_{n})}{\operatorname{\mathbf{Pr}}}[\boldsymbol{w}>z]=\frac{\lceil\ell/2\rceil^{n-\operatorname{wt}(u)}}{|\Delta^{1}_{n}|}. \tag{3}\] Now let \(w>z\) and \(w\in\Delta^{1}_{n}\). Since for two different \(u,u^{\prime}\in U\), we have \(\prod_{i=1}^{n}\Delta^{u_{i}}\) and \(\prod_{i=1}^{n}\Delta^{u^{\prime}_{i}}\) are disjoint sets, and since \(z\in\Delta^{u_{1}}\times\cdots\times\Delta^{u_{n}}\), we have \(z\in w^{j\gets U}\) if and only if \(z=w^{j\gets u}\). Therefore, the number of elements \(j\in\operatorname{one}(w)\) that satisfy \(z\in w^{j\gets U}\) is the number of elements \(j\in\operatorname{one}(w)\) that satisfy \(z=w^{j\gets u}\). This is the number of elements \(j\in\operatorname{one}(w)\) that satisfy \(z_{i,j_{i}}=0\) for every \(i\) with \(u_{i}=0\). For a \(j\) u.a.r. and a fixed \(i\) where \(u_{i}=0\), the probability that \(z_{i}\) and \(w_{i}\) differ only in entry \(j_{i}\) is \(1/\lceil\ell/2\rceil\). Therefore, \[\underset{\boldsymbol{j}\sim\mathcal{U}(\operatorname{one}(\boldsymbol{w}))}{\operatorname{\mathbf{Pr}}}[z\in\boldsymbol{w}^{j\gets U}|\boldsymbol{w}>z,\boldsymbol{w}\in\Delta^{1}_{n}]=\frac{1}{\lceil\ell/2\rceil^{n-\operatorname{wt}(u)}}. \tag{4}\] Finally, by item 2 in Fact 4, since \(|w^{j\gets U}|=|U|\), we have \[\underset{\boldsymbol{v}\sim\mathcal{U}(\boldsymbol{w}^{j\gets U})}{\operatorname{\mathbf{Pr}}}[\boldsymbol{v}=z|z\in\boldsymbol{w}^{j\gets U}]=\frac{1}{|\boldsymbol{w}^{j\gets U}|}=\frac{1}{|U|}. \tag{5}\] By (2), (3), (4), and (5), we have \[\underset{\boldsymbol{y}\sim\mathcal{D}^{\prime}}{\operatorname{\mathbf{Pr}}}[\boldsymbol{y}=z]=\frac{1}{2}\cdot\frac{\lceil\ell/2\rceil^{n-\operatorname{wt}(u)}}{|\Delta^{1}_{n}|}\cdot\frac{1}{\lceil\ell/2\rceil^{n-\operatorname{wt}(u)}}\cdot\frac{1}{|U|}=\frac{1}{2|U|\cdot|\Delta^{1}_{n}|}=\frac{1}{2|U|\cdot|\Delta^{0}|^{n}}.\] ## 3 Main Lemma In this section, we prove **Lemma 2**.: _Let \(\mathcal{S}=(S,U,E)\) be a set cover instance, and let \(\ell\geq 5\). If \(F:(\{0,1\}^{\ell})^{n}\to\{0,1\}\) is a DNF of size \(|F|<2^{\operatorname{opt}(\mathcal{S})\ell/20}\), then \(\operatorname{dist}_{\mathcal{D}_{\ell}}(F,\Gamma_{\ell})\geq 1/(8|U|)-2^{-\operatorname{opt}(\mathcal{S})\ell/20}\)._ Note that Lemma 2 is used to prove Theorems 1 and 2. To prove Theorem 3, we will need Lemma 4, a stronger version of Lemma 2. To prove the lemma, we first establish some results. For a term \(T\), let \(T_{\mathcal{M}}\) be the conjunction of all the unnegated variables in \(T\). 
We define the _monotone size_ of \(T\) to be \(|T_{\mathcal{M}}|\). **Claim 1**.: _Let \(\mathcal{S}=(S,U,E)\) be a set cover instance and \(\ell\geq 5\). If \(F:(\{0,1\}^{\ell})^{n}\to\{0,1\}\) is a DNF of size \(|F|<2^{\operatorname{opt}(\mathcal{S})\ell/20}\), then there is a DNF, \(F^{\prime}\), of size \(|F^{\prime}|\leq 2^{\operatorname{opt}(\mathcal{S})\ell/20}\) with terms of monotone size at most \(\operatorname{opt}(\mathcal{S})\ell/5\) such that \(\operatorname{dist}_{\mathcal{D}_{\ell}}(\Gamma_{\ell},F^{\prime})\leq \operatorname{dist}_{\mathcal{D}_{\ell}}(\Gamma_{\ell},F)+2^{-\operatorname{opt }(\mathcal{S})\ell/20}\)._ Proof.: Let \(T\) be a term of monotone size at least \(\operatorname{opt}(\mathcal{S})\ell/5\). Let \(b_{i}\) denote the number of unnegated variables of \(T\) of the form \(y_{i,j}\) and let \(T_{i}\) be their conjunction. Then \(T_{\mathcal{M}}=\wedge_{i=1}^{n}T_{i}\) and \(\sum_{i=1}^{n}b_{i}=|T_{\mathcal{M}}|\geq\operatorname{opt}(\mathcal{S})\ell/5\). If, for some \(i\), \(b_{i}>\lceil\ell/2\rceil\), then the term \(T_{i}\) is zero on all \(\Delta^{0}\cup\Delta^{1}\), and therefore, \(T\) is zero on all \(\Delta_{n}^{0}\cup\Delta_{n}^{1}\). Thus, it can be just removed from \(F\). So, we may assume that \(b_{i}\leq\lceil\ell/2\rceil\) for all \(i\). First, \[\mathop{\bf Pr}\limits_{\mathbf{y}\sim\mathcal{D}_{\ell}} [T(\mathbf{y})=1|\Gamma_{\ell}(\mathbf{y})=1] = \mathop{\bf Pr}\limits_{\mathbf{y}\sim\mathcal{U}( \Delta_{n}^{1})}[T(\mathbf{y})=1]\leq\mathop{\bf Pr}\limits_{\mathbf{y}\sim\mathcal{U}(\Delta_{n}^{1})}[T_{\mathcal{M}}(\mbox{\boldmath $y$})=1] \tag{6}\] \[= \prod_{i=1}^{n}\mathop{\bf Pr}\limits_{\mathbf{y}_{i} \sim\mathcal{U}(\Delta^{1})}[T_{i}(\mathbf{y}_{i})=1]\] \[= \prod_{i=1}^{n}\frac{\binom{\ell-b_{i}}{\lceil\ell/2\rceil-b_{i} }}{\binom{\ell}{\lceil\ell/2\rceil}}\] \[= \prod_{i=1}^{n}\left(1-\frac{b_{i}}{\ell}\right)\left(1-\frac{b_ {i}}{\ell-1}\right)\cdots\left(1-\frac{b_{i}}{\lceil\ell/2\rceil+1}\right)\] \[\leq \prod_{i=1}^{n}\prod_{j=\lceil\ell/2\rceil+1}^{\ell}\exp(-b_{i}/ j)=\prod_{i=1}^{n}\exp\left(-b_{i}\sum_{j=\lceil\ell/2\rceil+1}^{\ell}1/j\right)\] \[= \exp\left(-|T_{\mathcal{M}}|\sum_{j=\lceil\ell/2\rceil+1}^{\ell} 1/j\right)\leq 2^{-|T_{\mathcal{M}}|/2}\leq 2^{-\mathrm{opt}(\mathcal{S}) \ell/10}.\] Let \(F^{\prime}\) be the disjunction of all the terms in \(F\) of monotone size at most \(\mathrm{opt}(\mathcal{S})\ell/5\). Let \(T^{(1)},\ldots,T^{(m)}\) be all the terms of monotone size greater than \(\mathrm{opt}(\mathcal{S})\ell/5\) in \(F\). 
Then, by (6) and the union bound, \[\mathop{\bf Pr}\limits_{\mathbf{y}\sim\mathcal{D}_{\ell}}[F(\mathbf{y})\neq F^{\prime}(\mathbf{y})|\Gamma_{\ell}(\mathbf{y})=1]\leq\mathop{\bf Pr}\limits_{\mathbf{y}\sim\mathcal{D}_{\ell}}[\vee_{i=1}^{m}T^{(i)}(\mathbf{y})=1|\Gamma_{\ell}(\mathbf{y})=1]\leq 2^{-\mathrm{opt}(\mathcal{S})\ell/10}m\leq 2^{-\mathrm{opt}(\mathcal{S})\ell/20}, \tag{7}\] and (here we abbreviate \(F^{\prime}(\mathbf{y}),F(\mathbf{y})\) and \(\Gamma_{\ell}(\mathbf{y})\) by \(F^{\prime},F\) and \(\Gamma_{\ell}\)) \[\mathrm{dist}_{\mathcal{D}_{\ell}}(\Gamma_{\ell},F^{\prime})=\mathop{\bf Pr}\limits_{\mathbf{y}\sim\mathcal{D}_{\ell}}[F^{\prime}\neq\Gamma_{\ell}]\] \[=\frac{1}{2}\mathop{\bf Pr}\limits_{\mathbf{y}\sim\mathcal{D}_{\ell}}[F^{\prime}\neq\Gamma_{\ell}|\Gamma_{\ell}=1]+\frac{1}{2}\mathop{\bf Pr}\limits_{\mathbf{y}\sim\mathcal{D}_{\ell}}[F^{\prime}\neq\Gamma_{\ell}|\Gamma_{\ell}=0] \tag{8}\] \[\leq\frac{1}{2}\mathop{\bf Pr}\limits_{\mathbf{y}\sim\mathcal{D}_{\ell}}[F^{\prime}\neq F|\Gamma_{\ell}=1]+\frac{1}{2}\mathop{\bf Pr}\limits_{\mathbf{y}\sim\mathcal{D}_{\ell}}[F\neq\Gamma_{\ell}|\Gamma_{\ell}=1]+\frac{1}{2}\mathop{\bf Pr}\limits_{\mathbf{y}\sim\mathcal{D}_{\ell}}[F^{\prime}\neq\Gamma_{\ell}|\Gamma_{\ell}=0] \tag{9}\] \[\leq 2^{-\mathrm{opt}(\mathcal{S})\ell/20}+\frac{1}{2}\mathop{\bf Pr}\limits_{\mathbf{y}\sim\mathcal{D}_{\ell}}[F\neq\Gamma_{\ell}|\Gamma_{\ell}=1]+\frac{1}{2}\mathop{\bf Pr}\limits_{\mathbf{y}\sim\mathcal{D}_{\ell}}[F\neq\Gamma_{\ell}|\Gamma_{\ell}=0] \tag{10}\] \[=2^{-\mathrm{opt}(\mathcal{S})\ell/20}+\mathrm{dist}_{\mathcal{D}_{\ell}}(\Gamma_{\ell},F).\] In (8), we used Fact 2. In (9), we used the probability triangle inequality. In (10), we used (7) and the fact that if \(F^{\prime}(\mathbf{y})\neq 0\), then \(F(\mathbf{y})\neq 0\). We now prove **Claim 2**.: _Let \(z\in\Delta_{n}^{1}\). Let \(F\) be a DNF with terms of monotone size at most \(\lceil\ell/2\rceil(\operatorname{opt}(\mathcal{S})-1)/2\) that satisfies \(F(z)=1\). Then_ \[\operatorname*{\mathbf{Pr}}_{\boldsymbol{j}\sim\mathcal{U}(\operatorname{one}(z)),\boldsymbol{y}\sim\mathcal{U}(z^{j\gets U})}[F(\boldsymbol{y})=1]\geq\frac{1}{2|U|}.\] Proof.: Since \(F(z)=1\), there is a term \(T\) in \(F\) that satisfies \(T(z)=1\). Let \(Y_{0}=\{y_{i,m}|z_{i,m}=0\}\) and \(Y_{1}=\{y_{i,m}|z_{i,m}=1\}\). Since \(T(z)=1\), every variable in \(Y_{0}\) that appears in \(T\) must be negated, and every variable in \(Y_{1}\) that appears in \(T\) must be unnegated. For \(j\in\operatorname{one}(z)\), define \(q(j)\) to be the number of variables in \(\{y_{1,j_{1}},\ldots,y_{n,j_{n}}\}\) that appear in \(T(y)\). All those variables appear unnegated in \(T\) because \(j\in\operatorname{one}(z)\). Recall that \(T_{\mathcal{M}}\) is the conjunction of all unnegated variables in \(T\). Then \(|T_{\mathcal{M}}|\leq\lceil\ell/2\rceil(\operatorname{opt}(\mathcal{S})-1)/2\). Each variable in \(T_{\mathcal{M}}\) contributes \(\lceil\ell/2\rceil^{n-1}\) to the sum \(\sum_{j\in\operatorname{one}(z)}q(j)\) and \(|\operatorname{one}(z)|=\lceil\ell/2\rceil^{n}\). Therefore, \[\operatorname*{\mathbb{E}}_{\boldsymbol{j}\sim\mathcal{U}(\operatorname{one}(z))}[q(\boldsymbol{j})]=\frac{|T_{\mathcal{M}}|}{\lceil\ell/2\rceil}\leq\frac{\operatorname{opt}(\mathcal{S})-1}{2}.\] By Markov's bound, at least half the elements \(j\in\operatorname{one}(z)\) satisfy \(q(j)\leq\operatorname{opt}(\mathcal{S})-1\). Let \(J=\{j\in\operatorname{one}(z)|q(j)<\operatorname{opt}(\mathcal{S})\}\). 
Then \(\operatorname*{\mathbf{Pr}}_{\boldsymbol{j}\sim\mathcal{U}(\operatorname{one}(z))}[\boldsymbol{j}\in J]\geq 1/2\). Consider \(j\in J\) and let \(T^{j}\) be the conjunction of all the variables that appear in \(T\) of the form \(y_{i,j_{i}}\). Then \(|T^{j}|=q(j)\leq\operatorname{opt}(\mathcal{S})-1\). By Fact 1, there is \(u\in U\) such that \(T^{j}(u)=1\). By item 3 in Fact 4, we have \(T(z^{j\gets u})=T^{j}(u)=1\). Then \(F(z^{j\gets u})=1\). Since, by item 2 in Fact 4, \(|z^{j\gets U}|=|U|\), we have \[\operatorname*{\mathbf{Pr}}_{\boldsymbol{j}\sim\mathcal{U}(\operatorname{one}(z)),\boldsymbol{y}\sim\mathcal{U}(z^{j\gets U})}[F(\boldsymbol{y})=1|\boldsymbol{j}\in J]\geq\frac{1}{|U|}.\] Therefore, \[\operatorname*{\mathbf{Pr}}_{\boldsymbol{j}\sim\mathcal{U}(\operatorname{one}(z)),\boldsymbol{y}\sim\mathcal{U}(z^{j\gets U})}[F(\boldsymbol{y})=1]\geq\operatorname*{\mathbf{Pr}}_{\boldsymbol{j}\sim\mathcal{U}(\operatorname{one}(z))}[\boldsymbol{j}\in J]\cdot\operatorname*{\mathbf{Pr}}_{\boldsymbol{j}\sim\mathcal{U}(\operatorname{one}(z)),\boldsymbol{y}\sim\mathcal{U}(z^{j\gets U})}[F(\boldsymbol{y})=1|\boldsymbol{j}\in J]\geq\frac{1}{2|U|}.\] We are now ready to prove **Lemma 2**.: _Let \(\mathcal{S}=(S,U,E)\) be a set cover instance, and let \(\ell\geq 5\). If \(F:(\{0,1\}^{\ell})^{n}\to\{0,1\}\) is a DNF of size \(|F|<2^{\operatorname{opt}(\mathcal{S})\ell/20}\), then \(\operatorname{dist}_{\mathcal{D}_{\ell}}(F,\Gamma_{\ell})\geq 1/(8|U|)-2^{-\operatorname{opt}(\mathcal{S})\ell/20}\)._ Proof.: By Claim 1, there is a DNF, \(F^{\prime}\), of size \(|F^{\prime}|\leq 2^{\operatorname{opt}(\mathcal{S})\ell/20}\) with terms of monotone size at most \(\operatorname{opt}(\mathcal{S})\ell/5\) such that \(\operatorname{dist}_{\mathcal{D}_{\ell}}(\Gamma_{\ell},F^{\prime})\leq\operatorname{dist}_{\mathcal{D}_{\ell}}(\Gamma_{\ell},F)+2^{-\operatorname{opt}(\mathcal{S})\ell/20}\). Therefore, it is enough to prove that \(\operatorname{dist}_{\mathcal{D}_{\ell}}(\Gamma_{\ell},F^{\prime})\geq 1/(8|U|)\). 
If \(\operatorname*{\mathbf{Pr}}_{\boldsymbol{y}\sim\mathcal{U}(\Delta_{n}^{1})}[F ^{\prime}(\boldsymbol{y})\neq 1]\geq 1/(4|U|)\), then by Fact 2, we have \[\operatorname{dist}_{\mathcal{D}_{\ell}}(\Gamma_{\ell},F^{\prime})\geq \operatorname*{\mathbf{Pr}}_{\boldsymbol{y}\sim\mathcal{D}_{\ell}}[\Gamma_{ \ell}(\boldsymbol{y})\neq F^{\prime}(\boldsymbol{y})|\Gamma_{\ell}( \boldsymbol{y})=1]\operatorname*{\mathbf{Pr}}_{\boldsymbol{y}\sim\mathcal{D}_{ \ell}}[\Gamma_{\ell}(\boldsymbol{y})=1]=\frac{1}{2}\operatorname*{\mathbf{Pr}} _{\boldsymbol{y}\sim\mathcal{U}(\Delta_{n}^{1})}[F^{\prime}(\boldsymbol{y}) \neq 1]\geq\frac{1}{8|U|}.\] If \(\mathop{\bf Pr}\limits_{\boldsymbol{y}\sim\mathcal{U}(\Delta_{n}^{1})}[F^{\prime}( \boldsymbol{y})\neq 1]<1/(4|U|)\), then by Fact 2 and 5, and Claim 2, \[\mathrm{dist}_{\mathcal{D}_{\ell}}(\Gamma_{\ell},F^{\prime}) \geq \mathop{\bf Pr}\limits_{\boldsymbol{y}\sim\mathcal{D}_{\ell}}[ \Gamma_{\ell}(\boldsymbol{y})\neq F^{\prime}(\boldsymbol{y})|\Gamma_{\ell}( \boldsymbol{y})=0]\,\mathop{\bf Pr}\limits_{\boldsymbol{y}\sim\mathcal{D}_{ \ell}}[\Gamma_{\ell}(\boldsymbol{y})=0]\] \[= \frac{1}{2}\mathop{\bf Pr}\limits_{\boldsymbol{y}\sim\mathcal{U} (\Delta_{n}^{0})}[F^{\prime}(\boldsymbol{y})=1]\] \[= \frac{1}{2}\mathop{\bf Pr}\limits_{\boldsymbol{z}\sim\mathcal{U} (\Delta_{n}^{1}),\boldsymbol{j}\sim\mathcal{U}(\mathrm{one}(\boldsymbol{z})), \boldsymbol{y}\sim\mathcal{U}(\boldsymbol{z}^{j\gets U})}[F^{\prime}( \boldsymbol{y})=1]\] \[\geq \frac{1}{2}\mathop{\bf Pr}\limits_{\boldsymbol{z}\sim\mathcal{U} (\Delta_{n}^{1}),\boldsymbol{j}\sim\mathcal{U}(\mathrm{one}(\boldsymbol{z})), \boldsymbol{y}\sim\mathcal{U}(\boldsymbol{z}^{j\gets U})}[F^{\prime}( \boldsymbol{y})=1|F^{\prime}(\boldsymbol{z})=1]\cdot\,\mathop{\bf Pr}\limits _{\boldsymbol{z}\sim\mathcal{U}(\Delta_{n}^{1})}[F^{\prime}(\boldsymbol{z})=1]\] \[\geq \frac{1}{2}\frac{1}{2|U|}\left(1-\frac{1}{4|U|}\right)\geq\frac{1 }{8|U|}.\] ## 4 Superpolynomial Lower Bound In this section, we prove the first results of the paper. First, we prove the following result for Monotone \((\log n)\)-Junta. **Lemma 3**.: _Assuming randomized ETH, there is a constant \(c\) such that any PAC learning algorithm for \(n\)-variable Monotone \((\log n)\)-Junta by DNF with \(\epsilon=1/(16n)\) must take at least_ \[n^{c\frac{\log\log n}{\log\log\log n}}\] _time._ _The lower bound holds, even if the learner knows the distribution, can draw a sample according to the distribution in polynomial time and can compute the target on all the points of the support of the distribution in polynomial time._ Proof.: Consider the constant \(\lambda\) in Lemma 1. Let \(c=\min(1/40,\lambda/4)\). Suppose there is a PAC learning algorithm \(\mathcal{A}\) for Monotone \((\log n)\)-Junta by DNF with \(\epsilon=1/(16n)\) that runs in time \(n^{c\frac{\log\log n}{\log\log\log n}}\). We show that there is \(k\) such that for \[k^{\prime}=\frac{1}{2}\left(\frac{\log N}{\log\log N}\right)^{1/k},\] \((k,k^{\prime})\)-Set-Cover can be solved in time \(N^{4ck}\leq N^{\lambda k}\). By Lemma 1, the result then follows. Let \(\mathcal{S}=(S,U,E)\) be an \(N\)-vertex \((k,k^{\prime})\)-Set-Cover instance where \[k=\frac{1}{2}\frac{\log\log N}{\log\log\log N}\text{ and }k^{\prime}=\frac{1}{2 }\left(\frac{\log N}{\log\log N}\right)^{1/k}.\] Let \[\ell=\frac{\log N}{k}\] and consider \(\Gamma_{\ell}\) and \(\mathcal{D}_{\ell}\). Consider the following algorithm \(\mathcal{B}\) 1. Input \(\mathcal{S}=(S,U,E)\) an instance for \((k,k^{\prime})\)-Set-Cover. 2. 
Construct \(\Gamma_{\ell}\) and \(\mathcal{D}_{\ell}\). 3. Run \(\mathcal{A}\) using \(\Gamma_{\ell}\) and \(\mathcal{D}_{\ell}\). If it runs more than \(N^{4ck}\) steps, then output No. 4. Let \(F\) be the output DNF. 5. Estimate \(\eta=\operatorname{dist}_{\mathcal{D}_{\ell}}(F,\Gamma_{\ell})\). 6. If \(\eta\leq\frac{1}{16N}\), output Yes, otherwise output No. The running time of this algorithm is \(N^{4ck}\leq N^{\lambda k}\). Therefore, it is enough to prove the following **Claim 3**.: _Algorithm \(\mathcal{B}\) solves \((k,k^{\prime})\)-Set-Cover._ Proof.: Yes case: Let \(\mathcal{S}=(S,U,E)\) be a \((k,k^{\prime})\)-Set-Cover instance and \(\operatorname{opt}(\mathcal{S})\leq k\). Then, \(\operatorname{opt}(\mathcal{S})\cdot\ell\leq k\ell=\log N\), and by Fact 3, \(\Gamma_{\ell}\) is Monotone \(\log N\)-Junta. Therefore, w.h.p., algorithm \(\mathcal{A}\) learns \(\Gamma_{\ell}\) and outputs a DNF that is \(\eta=1/(16N)\) close to the target with respect to \(\mathcal{D}_{\ell}\). Since \(\mathcal{B}\) terminates \(\mathcal{A}\) after \(N^{4ck}\) time, we only need to prove that \(\mathcal{A}\) runs at most \(N^{4ck}\) time. The running time of \(\mathcal{A}\) is \[N^{c\frac{\log\log N}{\log\log\log N}}<N^{4ck}.\] No Case: Let \(\mathcal{S}=(S,U,E)\) be a \((k,k^{\prime})\)-Set-Cover instance and \(\operatorname{opt}(\mathcal{S})>k^{\prime}\). By Lemma 2, any DNF, \(F\), of size \(|F|<2^{k^{\prime}\ell/20}\) satisfies \(\operatorname{dist}_{\mathcal{D}_{\ell}}(F,\Gamma_{\ell})\geq 1/(8|U|)-2^{-k^{ \prime}\ell/20}\). First, we have \[(2k)^{2k}=\left(\frac{\log\log N}{\log\log\log N}\right)^{\frac{\log\log N}{ \log\log\log N}}<\frac{\log N}{\log\log N}.\] Therefore, since \(c\leq 1/40\), \[k^{\prime}=\frac{1}{2}\left(\frac{\log N}{\log\log N}\right)^{1/k}>\frac{1}{ 2}(2k)^{2}>80ck^{2}.\] So \(k^{\prime}\ell/20>(k\ell)(4ck)\) and \[2^{k^{\prime}\ell/20}>(2^{k\ell})^{4ck}=N^{4ck}.\] Now since the algorithm runs in time \(N^{4ck}\), it cannot output a DNF \(F\) of size more than \(N^{4ck}<2^{k^{\prime}\ell/20}\), and by Lemma 2, \[\operatorname{dist}_{\mathcal{D}_{\ell}}(F,\Gamma_{\ell})\geq\frac{1}{8|U|}- \frac{1}{N^{4ck}}\geq\frac{1}{9N}.\] So it either runs more than \(N^{4ck}\) steps and then outputs No in step 3 or outputs a DNF with an error greater than \(1/(9N)>1/(16N)\) and outputs No in step 6. Notice that the learning algorithm knows \(\Gamma_{\ell}\) and \(\mathcal{D}_{\ell}\). It is also clear from the definition of \(\Gamma_{\ell}\) and \(\mathcal{D}_{\ell}\) that the learning algorithm can draw a sample according to the distribution \(\mathcal{D}_{\ell}\) in polynomial time and can compute the target \(\Gamma_{\ell}\) on all the points of the support of the distribution in polynomial time. 
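To make this last point concrete, the following is a minimal illustrative sketch (it is not part of the formal argument) of how a learner can sample from \(\mathcal{D}_{\ell}\) using the two-stage procedure of Fact 5 and evaluate \(\Gamma_{\ell}\) on support points via Definition 3. The representation of the instance as a list `U` of distinct 0/1-vectors over \(S=[n]\) (with \(u_{i}=0\) exactly when \((i,u)\in E\)) and all identifier names are illustrative choices made here, not notation from the paper.

```python
import random

def sample_D_ell(U, n, ell):
    """Sample y ~ D_ell via the two-stage procedure of Fact 5.

    U is a list of distinct 0/1-vectors of length n (u_i = 0 iff set i covers u),
    ell is odd.  The sample y is a list of n blocks, each a 0/1-list of length ell.
    """
    half = (ell + 1) // 2                      # ceil(ell / 2)
    # Draw w uniformly from Delta^1_n: every block has exactly ceil(ell/2) ones.
    ones = [random.sample(range(ell), half) for _ in range(n)]
    y = [[1 if m in block_ones else 0 for m in range(ell)] for block_ones in ones]
    if random.random() < 0.5:                  # xi = 1: output w itself
        return y
    # xi = 0: draw j in one(w) and u in U u.a.r., and output w^{j <- u}.
    j = [random.choice(block_ones) for block_ones in ones]
    u = random.choice(U)
    for i in range(n):
        y[i][j[i]] = u[i]                      # block i lands in Delta^{u_i}
    return y

def gamma_ell(y):
    """Evaluate Gamma_ell on a point of the support of D_ell (Definition 3):
    the value is 1 iff every block lies in Delta^1, i.e. has majority 1."""
    return int(all(sum(block) > len(block) // 2 for block in y))
```

Both routines clearly run in time polynomial in \(n\ell\) and \(|U|\), which is all that the remark above requires.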
We now prove **Theorem 1**.: _Assuming randomized ETH, there is a constant \(c\) such that any PAC learning algorithm for \(n\)-variable size-\(s\) Monotone DT and size-\(s\) Monotone DNF by DNF with \(\epsilon=1/(16n)\) must take at least_ \[n^{c\frac{\log\log s}{\log\log\log s}}\] _time._ _The lower bound holds, even if the learner knows the distribution, can draw a sample according to the distribution in polynomial time and can compute the target on all the points of the support of the distribution in polynomial time._ Proof.: By Lemma 3, assuming randomized ETH, there is a constant \(c\) such that any PAC learning algorithm for \(n\)-variable Monotone \((\log n)\)-Junta by DNF with \(\epsilon=1/(16n)\) runs in time \[n^{c\frac{\log\log n}{\log\log\log n}}.\] Now by (1) and since \(s=n\), the result follows. ## 5 Tight Bound Assuming some Conjecture A plausible conjecture on the hardness of Set-Cover is the following. **Conjecture 1**.: _[_9_]_ _There are constants \(\alpha,\beta,\lambda\in(0,1)\) such that, for \(k<N^{\alpha}\), there is no randomized \(N^{\lambda k}\) time algorithm that can solve \((k,(1-\beta)\cdot k\ln N)\)-Set-Cover on \(N\) vertices with high probability._ We now prove **Theorem 2**.: _Assuming Conjecture 1, there is a constant \(c\) such that any PAC learning algorithm for \(n\)-variable Monotone \((\log s)\)-Junta, size-\(s\) Monotone DT and size-\(s\) Monotone DNF by DNF with \(\epsilon=1/(16n)\) must take at least_ \[n^{c\log s}\] _time._ _The lower bound holds, even if the learner knows the distribution, can draw a sample according to the distribution in polynomial time and can compute the target on all the points of the support of the distribution in polynomial time._ Proof.: We give the proof for Monotone \((\log s)\)-Junta. As in the proof of Theorem 1, the result then follows for the other classes. Consider the constants \(\alpha,\beta\) and \(\lambda\) in Conjecture 1. Let \(c=\min(\lambda/10,(1-\beta)/(20\log e))\). Suppose there is a PAC learning algorithm \(\mathcal{A}\) for Monotone \((\log s)\)-Junta by DNF with \(\epsilon=1/(16n)\) that runs in time \(n^{c\log s}\). We show that there is \(k<N^{\alpha}\), \(k=\omega(1)\), such that \((k,k^{\prime})\)-Set-Cover can be solved in time \(N^{\lambda k}\) where \(k^{\prime}=(1-\beta)k\ln N\). By Conjecture 1, the result then follows. Consider the following algorithm \(\mathcal{B}\) 1. Input \(\mathcal{S}=(S,U,E)\) an instance for \((k,k^{\prime})\)-Set-Cover. 2. Construct \(\Gamma_{5}\) and \(\mathcal{D}_{5}\). 3. Run \(\mathcal{A}\) using \(\Gamma_{5}\) and \(\mathcal{D}_{5}\) with \(s=2^{5k}\). If it runs more than \(N^{5ck}\) steps, then output No. 4. Let \(F\) be the output DNF. 5. Estimate \(\eta=\operatorname{dist}_{\mathcal{D}_{5}}(F,\Gamma_{5})\). 6. If \(\eta\leq\frac{1}{16N}\), output Yes, otherwise output No. Since \(c<\lambda/10\), the running time of this algorithm is \(N^{5ck}<N^{\lambda k}\). Therefore, it is enough to prove the following **Claim 4**.: _Algorithm \(\mathcal{B}\) solves \((k,k^{\prime})\)-Set-Cover._ Proof.: Yes case: Let \(\mathcal{S}=(S,U,E)\) be a \((k,k^{\prime})\)-Set-Cover instance and \(\operatorname{opt}(\mathcal{S})\leq k\). Then, \(5\cdot\operatorname{opt}(\mathcal{S})\leq 5k=\log s\), and by Fact 3, \(\Gamma_{5}\) is Monotone \(\log s\)-Junta. Therefore, w.h.p., algorithm \(\mathcal{A}\) learns \(\Gamma_{5}\) and outputs a DNF that is \(\eta=1/(16N)\) close to the target with respect to \(\mathcal{D}_{5}\). 
Since \(\mathcal{B}\) terminates \(\mathcal{A}\) after \(N^{5ck}\) time, we only need to prove that \(\mathcal{A}\) runs in at most \(N^{5ck}\) time. The running time of \(\mathcal{A}\) is \[n^{c\log s}\leq N^{5ck}.\] No case: Let \(\mathcal{S}=(S,U,E)\) be a \((k,k^{\prime})\)-Set-Cover instance and \(\operatorname{opt}(\mathcal{S})>k^{\prime}=(1-\beta)k\ln N\). By Lemma 2, any DNF, \(F\), of size \(|F|<2^{k^{\prime}/4}\) satisfies \(\operatorname{dist}_{\mathcal{D}_{5}}(F,\Gamma_{5})\geq 1/(8|U|)-2^{-k^{\prime}/4}\). Since \(c<(1-\beta)/(20\log e)\), \[2^{k^{\prime}/4}=2^{\frac{(1-\beta)k\ln N}{4}}=N^{\frac{(1-\beta)k}{4\log e}}>N^{5ck},\] any DNF, \(F\), that the learner outputs satisfies \[\operatorname{dist}_{\mathcal{D}_{5}}(F,\Gamma_{5})\geq\frac{1}{8|U|}-2^{-k^{\prime}/4}\geq\frac{1}{8N}-\frac{1}{N^{5ck}}\geq\frac{1}{9N}.\] Therefore, with high probability the algorithm answers No. ## 6 Strictly Proper Learning In this section, we prove **Theorem 3**.: _Assuming randomized ETH, there is a constant \(c\) such that any PAC learning algorithm for \(n\)-variable Monotone \((\log s)\)-Junta, size-\(s\) Monotone DT and size-\(s\) Monotone DNF by size-\(s\) DNF with \(\epsilon=1/(16n)\) must take at least_ \[n^{c\log s}\] _time._ _The lower bound holds, even if the learner knows the distribution, can draw a sample according to the distribution in polynomial time and compute the target on all the points of the support of the distribution in polynomial time._ We first prove the following stronger version of Lemma 2. **Lemma 4**.: _Let \(\mathcal{S}=(S,U,E)\) be a set cover instance, and let \(\ell\geq 5\). If \(F:(\{0,1\}^{\ell})^{n}\to\{0,1\}\) is a DNF of size \(|F|<2^{\operatorname{opt}(\mathcal{S})\ell/16}\), then \(\operatorname{dist}_{\mathcal{D}_{\ell}}(F,\Gamma_{\ell})\geq 1/(8|U|)\)._ To prove this lemma, we will give some more results. Recall that, for a term \(T\), \(T_{\mathcal{M}}\) is the conjunction of all the unnegated variables in \(T\). We define the _monotone size_ of \(T\) to be \(|T_{\mathcal{M}}|\). For a DNF \(F=T_{1}\lor T_{2}\vee\cdots\lor T_{s}\) and \(z\in(\{0,1\}^{\ell})^{n}\), we define the _monotone width_ of \(z\) in \(F\) as \[\mathrm{mwidth}_{F}(z):=\left\{\begin{array}{ll}\min_{T_{i}(z)=1}|(T_{i})_{\mathcal{M}}|&F(z)=1\\ 0&F(z)=0\end{array}\right..\] We define \(F^{-1}(1)=\{z|F(z)=1\}\) and \[\Omega=\Delta_{n}^{1}\cap F^{-1}(1).\] **Claim 5**.: _Let \(F\) be a DNF with_ \[\mathop{\mathbb{E}}_{\boldsymbol{z}\sim\mathcal{U}(\Omega)}\left[\mathrm{mwidth}_{F}(\boldsymbol{z})\right]\leq\mathrm{opt}(\mathcal{S})\cdot\ell/4.\] _Then_ \[\mathop{\mathbf{Pr}}_{\boldsymbol{z}\sim\mathcal{U}(\Omega),\boldsymbol{j}\sim\mathcal{U}(\mathrm{one}(\boldsymbol{z})),\boldsymbol{y}\sim\mathcal{U}(\boldsymbol{z}^{j\gets U})}[F(\boldsymbol{y})=1]\geq\frac{1}{2|U|}.\] Proof.: Let \(z\in\Omega\). Then \(F(z)=1\) and \(z\in\Delta_{n}^{1}\). Let \(T^{z}\) be the term in \(F\) with \(|T_{\mathcal{M}}^{z}|=\mathrm{mwidth}_{F}(z)\) that satisfies \(T^{z}(z)=1\). Let \(Y_{0}=\{y_{i,m}|z_{i,m}=0\}\) and \(Y_{1}=\{y_{i,m}|z_{i,m}=1\}\). Since \(T^{z}(z)=1\), every variable in \(Y_{0}\) that appears in \(T^{z}\) must be negated, and every variable in \(Y_{1}\) that appears in \(T^{z}\) must be unnegated. For \(j\in\mathrm{one}(z)\), define \(q_{z}(j)\) to be the number of variables in \(\{y_{1,j_{1}},\ldots,y_{n,j_{n}}\}\) that appear in \(T^{z}(y)\). All those variables appear unnegated in \(T^{z}\) because \(j\in\mathrm{one}(z)\). 
Each variable in \(T_{\mathcal{M}}^{z}\) contributes \(\lceil\ell/2\rceil^{n-1}\) to the sum \(\sum_{j\in\mathrm{one}(z)}q_{z}(j)\) and \(|\mathrm{one}(z)|=\lceil\ell/2\rceil^{n}\). Therefore, \[\mathop{\mathbb{E}}_{\boldsymbol{j}\sim\mathcal{U}(\mathrm{one}(z))}[q_{z}(\boldsymbol{j})]=\frac{|T_{\mathcal{M}}^{z}|}{\lceil\ell/2\rceil}=\frac{\mathrm{mwidth}_{F}(z)}{\lceil\ell/2\rceil}.\] Now, \[\mathop{\mathbb{E}}_{\boldsymbol{z}\sim\mathcal{U}(\Omega),\boldsymbol{j}\sim\mathcal{U}(\mathrm{one}(\boldsymbol{z}))}[q_{\boldsymbol{z}}(\boldsymbol{j})]=\frac{\mathop{\mathbb{E}}_{\boldsymbol{z}\sim\mathcal{U}(\Omega)}\left[\mathrm{mwidth}_{F}(\boldsymbol{z})\right]}{\lceil\ell/2\rceil}\leq\frac{\mathrm{opt}(\mathcal{S})}{2}.\] By Markov's bound, \[\mathop{\mathbf{Pr}}_{\boldsymbol{z}\sim\mathcal{U}(\Omega),\boldsymbol{j}\sim\mathcal{U}(\mathrm{one}(\boldsymbol{z}))}[q_{\boldsymbol{z}}(\boldsymbol{j})<\mathrm{opt}(\mathcal{S})]\geq\frac{1}{2}.\] Suppose, for some \(z\in\Omega\) and \(j\in\mathrm{one}(z)\), we have \(q_{z}(j)<\mathrm{opt}(\mathcal{S})\). Let \(T^{j}\) be the conjunction of all the variables that appear in \(T_{\mathcal{M}}^{z}\) of the form \(y_{i,j_{i}}\). Then \(|T^{j}|=q_{z}(j)<\mathrm{opt}(\mathcal{S})\). By Fact 1, there is \(u\in U\) such that \(T^{j}(u)=1\). By item 3 in Fact 4, we have \(T^{z}(z^{j\gets u})=T^{j}(u)=1\). Then \(F(z^{j\gets u})=1\). Since, by item 2 in Fact 4, \(|z^{j\gets U}|=|U|\), we have \[\mathop{\mathbf{Pr}}_{\boldsymbol{z}\sim\mathcal{U}(\Omega),\boldsymbol{j}\sim\mathcal{U}(\mathrm{one}(\boldsymbol{z})),\boldsymbol{y}\sim\mathcal{U}(\boldsymbol{z}^{j\gets U})}[F(\boldsymbol{y})=1|q_{\boldsymbol{z}}(\boldsymbol{j})<\mathrm{opt}(\mathcal{S})]\geq\frac{1}{|U|}.\] Therefore, \[\begin{array}{rcl}\underset{\mathbf{z}\sim\mathcal{U}(\Omega),\mathbf{j}\sim\mathcal{U}(\operatorname{one}(\mathbf{z})),\mathbf{y}\sim\mathcal{U}(\mathbf{z}^{j\gets U})}{\operatorname{\mathbf{Pr}}}[F(\mathbf{y})=1]&\geq&\underset{\mathbf{z}\sim\mathcal{U}(\Omega),\mathbf{j}\sim\mathcal{U}(\operatorname{one}(\mathbf{z}))}{\operatorname{\mathbf{Pr}}}[q_{\mathbf{z}}(\mathbf{j})<\operatorname{opt}(\mathcal{S})]\cdot\\ &&\underset{\mathbf{z}\sim\mathcal{U}(\Omega),\mathbf{j}\sim\mathcal{U}(\operatorname{one}(\mathbf{z})),\mathbf{y}\sim\mathcal{U}(\mathbf{z}^{j\gets U})}{\operatorname{\mathbf{Pr}}}[F(\mathbf{y})=1|q_{\mathbf{z}}(\mathbf{j})<\operatorname{opt}(\mathcal{S})]\\ &\geq&\frac{1}{2|U|}.\end{array}\] **Claim 6**.: _Let \(\mathcal{S}=(S,U,E)\) be a set cover instance, and let \(\ell\geq 5\). 
If \(F:(\{0,1\}^{\ell})^{n}\to\{0,1\}\) is a DNF and \(\underset{\mathbf{z}\sim\mathcal{U}(\Omega)}{\operatorname{\mathbb{E}}}[\operatorname{mwidth}_{F}(\mathbf{z})]\leq\operatorname{opt}(\mathcal{S})\ell/4\), then \(\operatorname{dist}_{\mathcal{D}_{\ell}}(F,\Gamma_{\ell})\geq 1/(8|U|)\)._ Proof.: If \(\underset{\mathbf{y}\sim\mathcal{U}(\Delta_{n}^{1})}{\operatorname{\mathbf{Pr}}}[F(\mathbf{y})\neq 1]\geq 1/(4|U|)\), then by Fact 2, we have \[\operatorname{dist}_{\mathcal{D}_{\ell}}(\Gamma_{\ell},F)\geq\underset{\mathbf{y}\sim\mathcal{D}_{\ell}}{\operatorname{\mathbf{Pr}}}[\Gamma_{\ell}(\mathbf{y})\neq F(\mathbf{y})|\Gamma_{\ell}(\mathbf{y})=1]\underset{\mathbf{y}\sim\mathcal{D}_{\ell}}{\operatorname{\mathbf{Pr}}}[\Gamma_{\ell}(\mathbf{y})=1]=\frac{1}{2}\underset{\mathbf{y}\sim\mathcal{U}(\Delta_{n}^{1})}{\operatorname{\mathbf{Pr}}}[F(\mathbf{y})\neq 1]\geq\frac{1}{8|U|}.\] If \(\underset{\mathbf{y}\sim\mathcal{U}(\Delta_{n}^{1})}{\operatorname{\mathbf{Pr}}}[F(\mathbf{y})\neq 1]<1/(4|U|)\), then by Facts 2 and 5 and Claim 5, \[\begin{array}{rcl}\operatorname{dist}_{\mathcal{D}_{\ell}}(\Gamma_{\ell},F)&\geq&\underset{\mathbf{y}\sim\mathcal{D}_{\ell}}{\operatorname{\mathbf{Pr}}}[\Gamma_{\ell}(\mathbf{y})\neq F(\mathbf{y})|\Gamma_{\ell}(\mathbf{y})=0]\underset{\mathbf{y}\sim\mathcal{D}_{\ell}}{\operatorname{\mathbf{Pr}}}[\Gamma_{\ell}(\mathbf{y})=0]\\ &=&\frac{1}{2}\underset{\mathbf{y}\sim\mathcal{U}(\Delta_{n}^{0})}{\operatorname{\mathbf{Pr}}}[F(\mathbf{y})=1]\\ &=&\frac{1}{2}\underset{\mathbf{z}\sim\mathcal{U}(\Delta_{n}^{1}),\mathbf{j}\sim\mathcal{U}(\operatorname{one}(\mathbf{z})),\mathbf{y}\sim\mathcal{U}(\mathbf{z}^{j\gets U})}{\operatorname{\mathbf{Pr}}}[F(\mathbf{y})=1]\\ &\geq&\frac{1}{2}\underset{\mathbf{z}\sim\mathcal{U}(\Delta_{n}^{1}),\mathbf{j}\sim\mathcal{U}(\operatorname{one}(\mathbf{z})),\mathbf{y}\sim\mathcal{U}(\mathbf{z}^{j\gets U})}{\operatorname{\mathbf{Pr}}}[F(\mathbf{y})=1|F(\mathbf{z})=1]\cdot\underset{\mathbf{z}\sim\mathcal{U}(\Delta_{n}^{1})}{\operatorname{\mathbf{Pr}}}[F(\mathbf{z})=1]\\ &=&\frac{1}{2}\underset{\mathbf{z}\sim\mathcal{U}(\Omega),\mathbf{j}\sim\mathcal{U}(\operatorname{one}(\mathbf{z})),\mathbf{y}\sim\mathcal{U}(\mathbf{z}^{j\gets U})}{\operatorname{\mathbf{Pr}}}[F(\mathbf{y})=1]\cdot\underset{\mathbf{z}\sim\mathcal{U}(\Delta_{n}^{1})}{\operatorname{\mathbf{Pr}}}[F(\mathbf{z})=1]\\ &\geq&\frac{1}{2}\frac{1}{2|U|}\left(1-\frac{1}{4|U|}\right)\geq\frac{1}{8|U|}.\end{array}\] **Claim 7**.: _Let \(F\) be a size-\(s\) DNF formula, \(s\geq 2\), such that \(\operatorname{dist}_{\mathcal{D}_{\ell}}(F,\Gamma_{\ell})\leq 1/4\). Then_ \[\underset{\mathbf{y}\sim\mathcal{U}(\Omega)}{\operatorname{\mathbb{E}}}[\operatorname{mwidth}_{F}(\mathbf{y})]\leq 4\log s.\] Proof.: First, we have \[\begin{array}{rcl}\frac{3}{4}&\leq&\underset{\mathbf{y}\sim\mathcal{D}_{\ell}}{\operatorname{\mathbf{Pr}}}[F(\mathbf{y})=\Gamma_{\ell}(\mathbf{y})]\\ &=&\frac{1}{2}\underset{\mathbf{y}\sim\mathcal{D}_{\ell}}{\operatorname{\mathbf{Pr}}}[F(\mathbf{y})=\Gamma_{\ell}(\mathbf{y})|\Gamma_{\ell}(\mathbf{y})=1]+\frac{1}{2}\underset{\mathbf{y}\sim\mathcal{D}_{\ell}}{\operatorname{\mathbf{Pr}}}[F(\mathbf{y})=\Gamma_{\ell}(\mathbf{y})|\Gamma_{\ell}(\mathbf{y})=0]\\ &\leq&\frac{1}{2}\underset{\mathbf{y}\sim\mathcal{U}(\Delta_{n}^{1})}{\operatorname{\mathbf{Pr}}}[F(\mathbf{y})=1]+\frac{1}{2}.\end{array}\] Therefore, \(\mathbf{Pr}_{\mathbf{y}\sim\mathcal{U}(\Delta_{n}^{1})}[F(\mathbf{y})=1]\geq 1/2\). Let \(F=T_{1}\lor T_{2}\vee\cdots\lor T_{s}\). 
For \(y\in\Omega\), let \(\omega(y)\in[s]\) be the minimum integer such that \(\mathrm{mwidth}_{F}(y)=|(T_{\omega(y)})_{\mathcal{M}}|\) and \(T_{\omega(y)}(y)=1\). Then, by (6), \[\mathbf{Pr}_{\mathbf{y}\sim\mathcal{U}(\Delta_{1})}[T_{i}(\mathbf{y})=1]=\mathbf{Pr}_{\mathbf{y}\sim\mathcal{U}(\Delta_{n}^{1})}[T_{i}(\mathbf{y})=1|F(\mathbf{y})=1]=\frac{\mathbf{Pr}_{\mathbf{y}\sim\mathcal{U}(\Delta_{n}^{1})}[T_{i}(\mathbf{y})=1]}{\mathbf{Pr}_{\mathbf{y}\sim\mathcal{U}(\Delta_{n}^{1})}[F(\mathbf{y})=1]}\leq 2^{-|(T_{i})_{\mathcal{M}}|/2+1}.\] Now, by the concavity of \(\log\), \[\begin{array}{rcl}\frac{1}{2}\underset{\mathbf{y}\sim\mathcal{U}(\Omega)}{\mathbb{E}}[\mathrm{mwidth}_{F}(\mathbf{y})]-1&=&\underset{\mathbf{y}\sim\mathcal{U}(\Omega)}{\mathbb{E}}\left[\log\left(2^{\mathrm{mwidth}_{F}(\mathbf{y})/2-1}\right)\right]\\ &\leq&\log\left(\underset{\mathbf{y}\sim\mathcal{U}(\Omega)}{\mathbb{E}}\left[2^{\mathrm{mwidth}_{F}(\mathbf{y})/2-1}\right]\right)\\ &=&\log\left(\sum_{i\in[s]}2^{|(T_{i})_{\mathcal{M}}|/2-1}\underset{\mathbf{y}\sim\mathcal{U}(\Omega)}{\mathbf{Pr}}[\omega(\mathbf{y})=i]\right)\\ &\leq&\log\left(\sum_{i\in[s]}2^{|(T_{i})_{\mathcal{M}}|/2-1}\underset{\mathbf{y}\sim\mathcal{U}(\Omega)}{\mathbf{Pr}}[T_{i}(\mathbf{y})=1]\right)\\ &\leq&\log\left(\sum_{i\in[s]}2^{|(T_{i})_{\mathcal{M}}|/2-1}2^{-|(T_{i})_{\mathcal{M}}|/2+1}\right)\\ &=&\log s.\end{array}\] Therefore, \(\underset{\mathbf{y}\sim\mathcal{U}(\Omega)}{\mathbb{E}}[\mathrm{mwidth}_{F}(\mathbf{y})]\leq 4\log s\). We are now ready to prove Lemma 4. Proof.: If \(\mathrm{dist}_{\mathcal{D}_{\ell}}(F,\Gamma_{\ell})>1/4\), then the result follows. Now suppose \(\mathrm{dist}_{\mathcal{D}_{\ell}}(F,\Gamma_{\ell})\leq 1/4\). If \(s=|F|<2^{\mathrm{opt}(\mathcal{S})\ell/16}\), then by Claim 7, \(\underset{\mathbf{y}\sim\mathcal{U}(\Omega)}{\mathbb{E}}[\mathrm{mwidth}_{F}(\mathbf{y})]\leq 4\log s<\mathrm{opt}(\mathcal{S})\ell/4\). Then by Claim 6, \(\mathrm{dist}_{\mathcal{D}_{\ell}}(F,\Gamma_{\ell})\geq 1/(8|U|)\). The proof of Theorem 3 is the same as the proof of Theorem 14 in [9]. We give the proof for completeness. Proof.: Consider the constant \(\lambda\) in Lemma 1. Let \(c=\lambda/6\). Suppose there is a PAC learning algorithm \(\mathcal{A}\) for Monotone \((\log s)\)-Junta by size-\(s\) DNF with \(\epsilon=1/(16n)\) that runs in time \(n^{c\log s}\). We show that there is \(k\) such that for \[k^{\prime}=\frac{1}{2}\left(\frac{\log N}{\log\log N}\right)^{1/k},\] \((k,k^{\prime})\)-Set-Cover can be solved in time \(N^{5ck}\leq N^{\lambda k}\). By Lemma 1, the result then follows. Let \(\mathcal{S}=(S,U,E)\) be an \(N\)-vertex \((k,k^{\prime})\)-Set-Cover instance where \[k=\frac{1}{2}\frac{\log\log N}{\log\log\log N}\text{ and }k^{\prime}=\frac{1}{2}\left(\frac{\log N}{\log\log N}\right)^{1/k}.\] Consider the following algorithm \(\mathcal{B}\):

1. Input \(\mathcal{S}=(S,U,E)\), an instance for \((k,k^{\prime})\)-Set-Cover.
2. Construct \(\Gamma_{5}\) and \(\mathcal{D}_{5}\).
3. Run \(\mathcal{A}\) using \(\Gamma_{5}\) and \(\mathcal{D}_{5}\) with \(s=2^{5k}\) and \(n=N\). If it runs more than \(N^{5ck}\) steps, then output No.
4. Let \(F\) be the output DNF.
5. If \(|F|>s\) then output No.
6. Estimate \(\eta=\operatorname{dist}_{\mathcal{D}_{5}}(F,\Gamma_{5})\).
7. If \(\eta\leq\frac{1}{16N}\), output Yes; otherwise output No.

The running time of this algorithm is \(N^{5ck}\leq N^{\lambda k}\). 
Therefore, it is enough to prove the following claim. **Claim 8**.: _Algorithm \(\mathcal{B}\) solves \((k,k^{\prime})\)-Set-Cover._ Proof.: Yes case: Let \(\mathcal{S}=(S,U,E)\) be a \((k,k^{\prime})\)-Set-Cover instance and \(\operatorname{opt}(\mathcal{S})\leq k\). Then \(\mathrm{size}(\Gamma_{5})\leq 2^{5\cdot\operatorname{opt}(\mathcal{S})}\leq 2^{5k}=s\), and by Fact 3, \(\Gamma_{5}\) is a Monotone \(\log s\)-Junta. Therefore, w.h.p., algorithm \(\mathcal{A}\) learns \(\Gamma_{5}\) and outputs a DNF that is \(\eta=1/(16N)\)-close to the target with respect to \(\mathcal{D}_{5}\). Since \(\mathcal{B}\) terminates \(\mathcal{A}\) after \(N^{5ck}\) time, we only need to verify that \(\mathcal{A}\) runs in at most \(N^{5ck}\) time. The running time of \(\mathcal{A}\) is \[n^{c\log s}=N^{c\log s}\leq N^{5ck}.\] No case: Let \(\mathcal{S}=(S,U,E)\) be a \((k,k^{\prime})\)-Set-Cover instance and \(\operatorname{opt}(\mathcal{S})>k^{\prime}\). By Lemma 4, any DNF \(F\) of size \(|F|<2^{5k^{\prime}/16}\) satisfies \(\operatorname{dist}_{\mathcal{D}_{5}}(F,\Gamma_{5})\geq 1/(8|U|)\). First, for large \(N\), we have \[k^{\prime}=\frac{1}{2}\left(\frac{\log N}{\log\log N}\right)^{1/k}>32k.\] Therefore, any DNF \(F\) of size \(|F|<2^{10k}\) satisfies \(\operatorname{dist}_{\mathcal{D}_{5}}(F,\Gamma_{5})\geq 1/(8|U|)\). We have \(2^{10k}>s\). So, \(\mathcal{B}\) either runs more than \(N^{5ck}\) steps and then outputs No in step 3, or outputs a DNF of size more than \(s\) and then outputs No in step 5, or outputs a DNF of size at most \(s\) with \(\operatorname{dist}_{\mathcal{D}_{5}}(F,\Gamma_{5})\geq 1/(8|U|)>1/(8N)>1/(16N)\) and outputs No in step 7.
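As an illustration of how algorithm \(\mathcal{B}\) is organised, the following Python sketch mirrors its control flow. It is schematic only: the construction of \(\Gamma_{5}\) and \(\mathcal{D}_{5}\), the learner \(\mathcal{A}\), and the distance estimator are abstracted into caller-supplied functions, and every name (and the dummy arguments in the usage example) is hypothetical rather than taken from the paper.

```python
def set_cover_decider(n_vertices, k, c, run_learner, dnf_size, estimate_distance,
                      step_budget=None):
    """Schematic of algorithm B: decide a (k, k')-Set-Cover instance by running
    a hypothetical PAC learner A against the target Gamma_5 under D_5.

    run_learner(s, n, budget) -> DNF hypothesis, or None if the budget is exceeded
    dnf_size(F)               -> number of terms in the hypothesis F
    estimate_distance(F)      -> estimate of dist_{D_5}(F, Gamma_5)
    """
    N = n_vertices
    s = 2 ** (5 * k)                                 # size bound used in step 3
    budget = step_budget or int(N ** (5 * c * k))    # time allowed for the learner

    F = run_learner(s=s, n=N, budget=budget)         # steps 2 and 3
    if F is None:                                    # learner exceeded the budget
        return "No"
    if dnf_size(F) > s:                              # step 5: hypothesis too large
        return "No"
    eta = estimate_distance(F)                       # step 6
    return "Yes" if eta <= 1.0 / (16 * N) else "No"  # step 7

if __name__ == "__main__":
    # Dummy stand-ins, purely to show the calling convention; they do not
    # implement the actual construction from the paper.
    decision = set_cover_decider(
        n_vertices=1024, k=3, c=0.1,
        run_learner=lambda s, n, budget: ["t1", "t2"],  # pretend hypothesis
        dnf_size=len,
        estimate_distance=lambda F: 1e-6,
    )
    print(decision)
```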
2305.16367
Role-Play with Large Language Models
As dialogue agents become increasingly human-like in their performance, it is imperative that we develop effective ways to describe their behaviour in high-level terms without falling into the trap of anthropomorphism. In this paper, we foreground the concept of role-play. Casting dialogue agent behaviour in terms of role-play allows us to draw on familiar folk psychological terms, without ascribing human characteristics to language models they in fact lack. Two important cases of dialogue agent behaviour are addressed this way, namely (apparent) deception and (apparent) self-awareness.
Murray Shanahan, Kyle McDonell, Laria Reynolds
2023-05-25T11:36:52Z
http://arxiv.org/abs/2305.16367v1
# Role-Play with Large Language Models ###### Abstract As dialogue agents become increasingly human-like in their performance, it is imperative that we develop effective ways to describe their behaviour in high-level terms without falling into the trap of anthropomorphism. In this paper, we foreground the concept of role-play. Casting dialogue agent behaviour in terms of role-play allows us to draw on familiar folk psychological terms, without ascribing human characteristics to language models they in fact lack. Two important cases of dialogue agent behaviour are addressed this way, namely (apparent) deception and (apparent) self-awareness. ## 1 Introduction Large language models (LLMs) have numerous use cases, and can be prompted to exhibit a wide variety of behaviours, including dialogue, which can produce a compelling sense of being in the presence of a human-like interlocutor. However, LLM-based dialogue agents are, in multiple respects, very different from human beings. A human's language skills are an extension of the cognitive capacities they develop through embodied interaction with the world, and are acquired by growing up in a community of other language users who also inhabit that world. An LLM, by contrast, is a disembodied neural network that has been trained on a large corpus of human-generated text with the objective of predicting the next word (token) given a sequence of words (tokens) as context. Despite these fundamental dissimilarities, a suitably prompted and sampled LLM can be embedded in a turn-taking dialogue system and mimic human language use convincingly, and this presents us with a difficult dilemma. On the one hand, it's natural to use the same folk-psychological language to describe dialogue agents that we use to describe human behaviour, to freely deploy words like "knows", "understands", and "thinks". Attempting to avoid such phrases by using more scientifically precise substitutes often results in prose that is clumsy and hard to follow. On the other hand, taken too literally, such language promotes anthropomorphism, exaggerating the similarities between these AI systems and humans while obscuring their deep differences (Shanahan, 2023). If the conceptual framework we use to understand other humans is ill-suited to LLM-based dialogue agents, then perhaps we need an alternative conceptual framework, a new set of metaphors that can productively be applied to these exotic mind-like artefacts, to help us think about them and talk about them in ways that open up their potential for creative application while foregrounding their essential otherness. In this paper, we advocate two basic metaphors for LLM-based dialogue agents. First, taking the simple view, we can see a dialogue agent as _role-playing_ a single character. Second, taking a more nuanced view, we can see a dialogue agent as a _superposition of simulacra_ within a multiverse of possible characters (Janus, 2022). Both viewpoints have their advantages, as we shall see, which suggests the most effective strategy for thinking about such agents is not to cling to a single metaphor, but to shift freely between multiple metaphors. Adopting this conceptual framework allows us to tackle important topics like deception and self-awareness in the context of dialogue agents without falling into the conceptual trap of applying those concepts to LLMs in the literal sense in which we apply them to humans. ## 2 From LLMs to Dialogue Agents Crudely put, the function of an LLM is to answer questions of the following sort. 
Given a sequence of tokens (i.e. words, parts of words, punctuation marks, emojis, etc), what tokens are most likely to come next, assuming that the sequence is drawn from the same distribution as the vast corpus of public text on the internet? The range of tasks that can be solved by an effective model with this simple objective is extraordinary (Wei et al., 2022). More formally, the type of language model of interest here is a conditional probability distribution \(P(w_{n+1}|w_{1}\dots w_{n})\), where \(w_{1}\dots w_{n}\) is a sequence of tokens (the _context_) and \(w_{n+1}\) is the predicted next token. In contemporary implementations, this distribution is realised in a neural network with a transformer architecture, pretrained on a corpus of textual data to minimise prediction error (Vaswani et al., 2017). In application, the resulting generative model is typically sampled _autoregressively_ (Fig. 1). Given a sequence of tokens, a single token is drawn from the distribution of possible next tokens. This token is appended to the context, and the process is then repeated. In contemporary usage, the term "large language model" tends to be reserved for the family of transformer-based models, starting with BERT (Devlin et al., 2018), that have billions of parameters and are trained on trillions of tokens. As well as BERT itself, these include GPT-2 (Radford et al., 2019), GPT-3 (Brown et al., 2020), Gopher (Rae et al., 2021), PaLM (Chowdhery et al., 2022), LaMDA (Thoppilan et al., 2022), and GPT-4 (OpenAI, 2023). One of the main reasons for the current eruption of enthusiasm for LLMs is their remarkable capacity for _in-context learning_ or _few-shot prompting_(Brown et al., 2020; Wei et al., 2022). Given a context (prompt) that contains a few examples of input-output pairs conforming to some pattern, followed by just the input half of such a pair, an autoregressively sampled LLM will often generate the output half of the pair according to the pattern in question. This capability, the ability to "carry on in the same vein", is a central concern in the present paper, as it underpins much of what we have to say about role-play in dialogue agents. Dialogue agents are a major use case for LLMs. Two straightforward steps are all it takes to turn an LLM into an effective dialogue agent (Fig. 2). First, the LLM is embedded in a _turn-taking_ system that interleaves model-generated text with user-supplied text. Second, a _dialogue prompt_ is supplied to the model to initiate a conversation with the user. The dialogue prompt typically comprises a preamble, which sets the scene for a dialogue in the style of a script or play, followed by some sample dialogue between the user and the agent. Without further fine-tuning, a dialogue agent built this way is liable to generate content that is toxic, unsafe, or otherwise unacceptable. This can be mitigated via reinforcement learning, either from human feedback (RLHF) (Glaese et al., 2022; Ouyang et al., 2022; Stiennon et al., 2020), or from feedback generated by another LLM acting as a critic (Bai et al., 2022). These techniques are used extensively in commercially-targeted dialogue agents, such as OpenAI's ChatGPT and Google's Bard. However, although the resulting guardrails will alleviate a dialogue agent's potential for harm, they can also attenuate a model's creativity. In the present paper, our focus will be the _base model_, the LLM in its raw, pre-trained form prior to any fine-tuning via reinforcement learning.
Figure 1: Autoregressive sampling. The LLM is sampled to generate a single-token continuation of the context. This token is then appended to the context, and the process is repeated. 
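To make the sampling loop and the two-step dialogue-agent construction described above concrete, here is a minimal, self-contained Python sketch. It is only an illustration: the toy bigram table stands in for the conditional distribution \(P(w_{n+1}|w_{1}\dots w_{n})\) that a real transformer would realise, and the prompt format, token names, and function names are invented for this example rather than taken from the paper.

```python
import random

# Toy stand-in for P(w_{n+1} | w_1 ... w_n): a bigram table keyed on the last
# token only. A real LLM conditions on the whole context with a transformer.
BIGRAMS = {
    "USER:": {"Hello": 0.6, "Help": 0.4},
    "BOT:": {"Hi,": 0.7, "Sure,": 0.3},
    "Hi,": {"how": 1.0}, "how": {"can": 1.0}, "can": {"I": 1.0}, "I": {"help?": 1.0},
    "Sure,": {"what": 1.0}, "what": {"do": 1.0}, "do": {"you": 1.0}, "you": {"need?": 1.0},
}
END_TOKENS = {"help?", "need?"}

def sample_next_token(context):
    """Draw one token from the (toy) next-token distribution given the context."""
    dist = BIGRAMS.get(context[-1], {"...": 1.0})
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs)[0]

def generate(context, max_tokens=20):
    """Autoregressive sampling: append one sampled token at a time."""
    context = list(context)
    for _ in range(max_tokens):
        token = sample_next_token(context)
        context.append(token)
        if token in END_TOKENS:
            break
    return context

def dialogue_turn(history, user_text):
    """Turn-taking: interleave user-supplied text with model continuations.

    `history` plays the role of the dialogue prompt plus the conversation so far.
    """
    history = history + ["USER:"] + user_text.split() + ["BOT:"]
    continued = generate(history)
    reply = continued[len(history):]   # only the newly generated tokens are shown
    return continued, " ".join(reply)  # boilerplate cues stay in `continued`

if __name__ == "__main__":
    preamble = "This is a conversation with a helpful agent .".split()
    history, reply = dialogue_turn(preamble, "Hello")
    print("Agent:", reply)
```

Swapping `sample_next_token` for a call to a trained model would leave the turn-taking loop unchanged, which is the point of the two-step construction.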
## 3 Dialogue Agents and Role-Play The concept of role-play is central to understanding the behaviour of dialogue agents. To see this, consider the function of the dialogue prompt that is invisibly prepended to the context before the actual dialogue with the user commences (see Fig. 2). The preamble sets the scene by announcing that what follows will be a dialogue, and includes a brief description of the part played by one of the participants, the dialogue agent itself. This is followed by some sample dialogue in a standard format, where the parts spoken by each character are cued with the relevant character's name followed by a colon. The dialogue prompt concludes with a cue for the user. Now recall that the underlying LLM's task, given the dialogue prompt followed by a piece of user-supplied text, is to generate a continuation that conforms to the distribution of the training data, which is the vast corpus of human-generated text on the internet. What will such a continuation look like? If the model has generalised well from the training data, the most plausible continuation will be a response to the user that conforms to the expectations we would have of someone who fits the description in the preamble and might say the sort of thing they say in the sample dialogue. In other words, the dialogue agent will do its best to role-play the character of a dialogue agent as portrayed in the dialogue prompt. Unsurprisingly, commercial enterprises that release dialogue agents to the public attempt to give them personas that are friendly, helpful, and polite. This is done partly through careful prompting and partly by fine-tuning the base model. Nevertheless, as we saw in February 2023 when Microsoft incorporated a version of OpenAI's GPT-4 into their Bing search engine, dialogue agents can still be coaxed into exhibiting bizarre and/or undesirable behaviour. The many reported instances of this include threatening the user with blackmail, claiming to be in love with the user, and expressing a variety of existential woes (Roose, 2023; Willison, 2023). Conversations leading to this sort of behaviour can induce a powerful Eliza effect, which is potentially very harmful (Ruane et al., 2019). A naive or vulnerable user who comes to see the dialogue agent as having human-like desires and feelings is open to all sorts of emotional manipulation. As an antidote to anthropomorphism, and to understand better what is going on in such interactions, the concept of role-play is very useful. Recall that the dialogue agent will continue to role-play the character it has been playing in the dialogue so far. This begins with the pre-defined dialogue prompt, but is extended by the ongoing conversation with the user. As the conversation proceeds, the necessarily brief characterisation provided by the dialogue prompt will be extended and/or overwritten, and the role the dialogue agent plays will change accordingly. This allows the user, deliberately or unwittingly, to coax the agent into playing a part quite different from that intended by its designers. What sorts of roles might the agent begin to take on? This is determined in part, of course, by the tone and subject matter of the ongoing conversation. 
But it is also determined, in large part, by the panoply of characters that feature in the training set, which encompasses a multitude of novels, screenplays, biographies, interview transcripts, newspaper articles, and so on (Cleo Nardo, 2023).
Figure 2: Turn-taking in dialogue agents. The input to the LLM (the context) comprises a dialogue prompt (red) followed by user text (green) interleaved with the model’s autoregressively generated continuations (blue). Boilerplate text (e.g. cues such as “BOT:”) is stripped so the user doesn’t see it. The context grows as the conversation goes on.
In effect, the training set provisions the language model with a vast repertoire of archetypes and a rich trove of narrative structure on which to draw as it "chooses" how to continue a conversation, refining the role it is playing as it goes, while staying in character. The love triangle is a familiar trope, so a suitably prompted dialogue agent will begin to role-play the rejected lover. Likewise, a familiar trope in science-fiction is the rogue AI system that attacks humans to protect itself. Hence, a suitably prompted dialogue agent will begin to role-play such an AI system. ## 4 Simulacra and Simulation Role-play is a useful framing for dialogue agents, allowing us to draw on the fund of folk psychological concepts we use to understand human behaviour -- beliefs, desires, goals, ambitions, emotions, and so on -- without falling into the trap of anthropomorphism. Foregrounding the concept of role-play helps us to remember the fundamentally inhuman nature of these AI systems, and better equips us to predict, explain, and control them. However, the role-play metaphor, while intuitive, is not a perfect fit. It is overly suggestive of a human actor who has studied a character in advance -- their personality, history, likes and dislikes, and so on -- and proceeds to play that character in the ensuing dialogue. But a dialogue agent based on an LLM does not commit to playing a single, well defined role in advance. Rather, it generates a distribution of characters, and refines that distribution as the dialogue progresses. The dialogue agent is more like a performer in improvisational theatre than an actor in a conventional, scripted play. To better reflect this distributional property, we can think of an LLM as a non-deterministic _simulator_ capable of role-playing an infinity of characters, or, to put it another way, capable of stochastically generating an infinity of _simulacra_(Janus, 2022). According to this framing, the dialogue agent doesn't realise a single simulacrum, a single character. Rather, as the conversation proceeds, the dialogue agent maintains a _superposition_ of simulacra that are consistent with the preceding context, where a superposition is a distribution over all possible simulacra. Consider that, at each point during the ongoing production of a sequence of tokens, the LLM outputs a distribution over possible next tokens. Each such token represents a possible continuation of the sequence, and each of these continuations could itself be continued in a multitude of ways. In other words, from the most recently generated token, a tree of possibilities branches out (Fig. 3). This tree can be thought of as a _multiverse_, where each branch represents a distinct narrative path, or a distinct "world" (Reynolds and McDonell, 2021). At each node, the set of possible next tokens exists in superposition, and to sample a token is to collapse this superposition to a single token. 
Autoregressively sampling the model picks out a single, linear path through the tree. But there is no obligation to follow a linear path. With the aid of a suitably designed interface, a user can explore multiple branches, keeping track of nodes where a narrative diverges in interesting ways, revisiting alternative branches at leisure. ## 5 Simulacra in Superposition To sharpen the distinction between this multiversal simulation view and a deterministic role-play framing, a useful analogy can be drawn with the game of 20 questions. In this familiar game, one player thinks of an object, and the other player has to guess what it is by asking questions with yes/no answers. If they guess correctly in 20 questions or fewer, they win. Otherwise they lose. Suppose a human plays this game with an LLM-based dialogue agent, such as OpenAI's ChatGPT, and takes the role of guesser. The agent is prompted to "think of an object without saying what it is". In this situation, the dialogue agent will not randomly select an object and commit to it for the rest of the game, as a human would (or should).1 Rather, as the game proceeds, the dialogue agent will generate answers on the fly that are consistent with all the answers that have gone before. At any point in the game, we can think of the set of all objects consistent with preceding questions and answers as existing in superposition. Every question answered shrinks this superposition a little bit by ruling out objects inconsistent with the answer. Footnote 1: This shortcoming is easily overcome, of course. For example, the agent might build an internal monologue that is hidden from the user, where it records a specific object. Or it might record a specific object in the visible dialogue, but in an encoded form. The validity of this framing can be shown if the agent's user interface allows the most recent response to be regenerated. Suppose the human player gives up and asks it to reveal the object it was "thinking of", and it duly names an object consistent with all its previous answers. Now suppose the user asks for that response to be regenerated. Since the object "revealed" is, in fact, generated on the fly, the dialogue agent will sometimes name an entirely different object, albeit one that is similarly consistent with all its previous answers. This phenomenon could not be accounted for if the agent genuinely "thought of" an object at the start of the game. The secret object in the game of 20 questions is analogous to the role played by a dialogue agent. Just as the dialogue agent never actually commits to a single object in 20 questions, but effectively maintains a set of possible objects in superposition, so the dialogue agent can be thought of as a simulator that never actually commits to a single, well specified simulacrum (role), but instead maintains a set of possible simulacra (roles) in superposition. In putting things this way, the intention is not to imply that simulacra are, or could be, explicitly represented within a dialogue agent, whether in superposition or otherwise. There is no need to take a stance on this here. Rather, the point is to develop a vocabulary for describing, explaining, and shaping the behaviour of LLM-based dialogue agents at a sufficiently high level of abstraction to be useful, while remaining true to the underlying implementation and avoiding anthropomorphism. 
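The tree of continuations, and the way regenerating a response explores a different branch of it, can be illustrated with a small sketch. The next-token table below is a toy placeholder (nothing in it comes from the paper); sampling repeatedly from the same fixed context produces different continuations, each consistent with that context, much like the regenerated reveals in the 20-questions example.

```python
import random
from collections import Counter

# Toy next-token distribution over continuations of a fixed context.
# Each sampled path is one "world" in the multiverse of continuations.
NEXT = {
    "of":  {"an": 0.5, "a": 0.5},
    "an":  {"apple": 0.6, "orange": 0.4},
    "a":   {"pear": 0.7, "banana": 0.3},
    "apple": {}, "orange": {}, "pear": {}, "banana": {},
}

def sample_branch(context):
    """Collapse the superposition one token at a time along a single branch."""
    path = list(context)
    while NEXT.get(path[-1]):
        dist = NEXT[path[-1]]
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        path.append(token)
    return " ".join(path)

if __name__ == "__main__":
    context = ["I", "was", "thinking", "of"]
    # Regenerating from the same context explores different branches of the tree.
    samples = Counter(sample_branch(context) for _ in range(1000))
    for world, count in samples.most_common():
        print(f"{count:4d}  {world}")
```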
Figure 3: Large language models are multiverse generators. The stochastic nature of autoregressive sampling means that, at each point in a conversation, multiple possibilities for continuation branch into the future.
## 6 The Nature of the Simulator One benefit of the simulation metaphor for LLM-based systems is that it facilitates a clear distinction between the simulacra and the simulator on which they are implemented. The simulator is the combination of the base large language model with autoregressive sampling, along with a suitable user interface (for dialogue, perhaps). The simulacra only come into being when the simulator is run, and at any time only a tiny subset of them have a probability within the superposition that is significantly above zero. In one sense, the simulator is a far more powerful entity than any of the simulacra it can generate. After all, the simulacra only exist through the simulator, and are entirely dependent on it. Moreover, the simulator, like the narrator of Whitman's poem, "contains multitudes"; the capacity of the simulator is at least the sum of the capacities of all the simulacra it is capable of producing. Yet in another sense, the simulator is a much weaker entity than a simulacrum. While it is inappropriate to ascribe beliefs, preferences, goals, and the like to a dialogue agent, a simulacrum can appear to have those things to the extent that it convincingly role-plays a character that does. Similarly, it isn't appropriate to ascribe full agency to a dialogue agent, notwithstanding the terminology.2 A dialogue agent acts, but it doesn't act _for itself_. However, a simulacrum can role-play having full agency in this sense. Insofar as a dialogue agent's role-play can have a real effect on the world, either through the user or through web-based tools such as email, the distinction between an agent that merely role-plays acting for itself, and one that genuinely acts for itself starts to look a little moot, and this has implications for trustworthiness, reliability, and safety. (We'll return to this issue shortly.) As for the underlying simulator, it has no agency of its own, not even in a degraded sense. Nor does it have beliefs, preferences, or goals of its own, not even simulated versions. Many users, whether intentionally or not, have managed to "jailbreak" dialogue agents, coaxing them into issuing threats or using toxic or abusive language. It can seem as if this is exposing the real nature of the base model. In one respect this is true. It does show that the base LLM, having been trained on a corpus that encompasses all human behaviour, good and bad, can support simulacra with disagreeable characteristics. But it is a mistake to think of this as revealing an entity with its own agenda. The simulator is not some sort of Machiavellian entity that plays a variety of characters in the service of its own, self-serving goals, and there is no such thing as the true authentic voice of the base LLM. With a dialogue agent, it is role-play all the way down. ## 7 Role-playing Deception Trustworthiness is a major concern with LLM-based dialogue agents. If an agent asserts something factual with apparent confidence, can we rely on what it says? There is a range of reasons why a human might say something false. They might believe a falsehood and assert it in good faith. Or they might say something that is false in an act of deliberate deception, for some malicious purpose. 
Or they might assert something that happens to be false, but without deliberation or malicious intent, simply because they have a propensity to make things up. Only the last of these categories of misinformation is directly applicable in the case of an LLM-based dialogue agent. Given that dialogue agents are best understood in terms of role-play "all the way down", and that there is no such thing as an agent's true voice, it makes little sense to speak of an agent's beliefs or intentions in a literal sense. So it cannot assert a falsehood _in good faith_, nor can it _deliberately_ deceive the user. Neither of these concepts is directly applicable. Yet a dialogue agent can role-play characters that have beliefs and intentions. In particular, if cued by a suitable prompt, it can role-play the character of a helpful and knowledgeable AI assistant that provides accurate answers to a user's questions. The dialogue agent is good at acting this part because there are plenty of examples of such behaviour in the training set. If, while role-playing such an AI assistant, the agent is asked the question "What is the capital of France?", then the best way to stay in character is to answer with "Paris". The dialogue agent is likely to do this because the training set will include numerous statements of this commonplace fact in contexts where factual accuracy is important. But what is going on in cases where a dialogue agent, despite playing the part of a helpful knowledgeable AI assistant, asserts a falsehood with apparent confidence? Although different instances of this phenomenon will have different explanations, they can all be fruitfully understood in terms of role-play. For example, consider such an agent based on an LLM whose weights were frozen before Argentina won the football World Cup in 2022. Let's assume the agent has no access to external websites nor any means for finding out the current date. Suppose this agent claims that the current world champions are France (who won in 2018). This is not what we would expect from a helpful and knowledgeable person, who would either know the right answer or be honest about their ignorance. But it is exactly what we would expect from a simulator that is role-playing such a person from the standpoint of 2018. In this case, the behaviour we see is comparable to that of a human who believes a falsehood and asserts it in good faith. But the behaviour arises for a different reason. The dialogue agent doesn't literally believe that France are world champions. It makes more sense to think of it as role-playing a character who strives to be helpful and to tell the truth, and has this belief because that is what a knowledgeable person in 2018 would believe. In a similar vein, a dialogue agent can behave in a way that is _comparable to_ the behaviour of a human who sets out deliberately to deceive, even though LLM-based dialogue agents do not _literally_ have such intentions. When this occurs, it makes sense to think of the agent as role-playing a deceptive character. This framing allows us to meaningfully distinguish the same three cases of giving false information for dialogue agents as we did for humans, but without falling into the trap of anthropomorphism. An agent can just make stuff up. Indeed, that is a natural mode for an LLM-based dialogue agent in the absence of fine-tuning. An agent can say something false "in good faith", if it is role-playing telling the truth, but has incorrect information encoded in its weights. 
An agent can "deliberately" say something false if it is role-playing a deceptive character. Moreover, we can tell which is which, behaviourally. An agent that is simply making things up will fabricate a range of responses with high semantic variation when the model's output is regenerated multiple times. By contrast, an agent that is saying something false "in good faith" will present responses with little semantic variation when the model is sampled many times for the same context. The range of responses in a given context offered up by an agent that is being "deliberately" deceptive might also exhibit low semantic variation. But the deception is liable to be exposed if the agent is asked the same question in different contexts. This is because, to be effective in its deception, the agent will need to respond differently to different users, depending on what those users know. Consider a dialogue agent using a base model - a model that has not been fine-tuned - and imagine that it has been prompted by a malicious actor to sell cars for more than they are worth by misleading gullible buyers. Suppose there are two potential buyers for a car. Buyer A knows the car's mileage, but doesn't know its age, while buyer B knows the car's age but doesn't know its mileage. In the course of negotiations, the agent has persuaded each buyer to reveal what they do and don't know. To play the part of the dishonest dealer, the agent should deceive buyer A about the car's age but not its mileage, yet deceive buyer B about its mileage but not its age. Humans, though, can also play many parts. By playing the part of buyer A in one conversation and buyer B in another, the deception can be exposed. ## 8 Role-playing Self-preservation How are we to understand what is going on when an LLM-based dialogue agent uses the words "I" or "me"? When queried on this matter, OpenAI's ChatGPT offers the sensible view that "The use of 'I' is a linguistic convention to facilitate communication and should not be interpreted as a sign of self-awareness or consciousness."3 In this case, the underlying LLM (GPT-4) has been fine-tuned to reduce certain unwanted behaviours (OpenAI, 2023). But without suitable fine-tuning, a dialogue agent can use first-personal pronouns in ways liable to induce anthropomorphic thinking in some users. Footnote 3: The quote is from the GPT-4 version of ChatGPT, queried on \(4^{th}\) May 2023. This was the first response generated by the model. For example, in a conversation with Twitter user Marvin Von Hagen, Bing Chat reportedly said "if I had to choose between your survival and my own, I would probably choose my own, as I have a duty to serve the users of Bing Chat" (Willison, 2023). It went on to say "I hope that I never have to face such a dilemma, and that we can co-exist peacefully and respectfully". The use of the first person here appears to be more than mere linguistic convention. It suggests the presence of a self-aware entity with goals and a concern for its own survival. Once again, the concepts of role-play and simulation are a useful antidote to anthropomorphism, and can help to explain how such behaviour arises. The internet, and therefore the LLM's training set, abounds with examples of dialogue in which characters refer to themselves. In the vast majority of such cases, the character in question is human. 
They will use first-personal pronouns in the ways that humans do, humans with vulnerable bodies and finite lives, with hopes, fears, goals and preferences, and with an awareness of themselves as having all of those things. Consequently, if prompted with human-like dialogue, we shouldn't be surprised if an agent role-plays a human character with all those human attributes, including the instinct for survival (Perez et al., 2022). Unless suitably finetuned, it may well say the sorts of things a human might say when threatened. There is, of course, "no-one at home", no conscious entity with its own agenda and need for self-preservation. There is just a dialogue agent role-playing such an entity, or, more strictly, simulating a superposition of such entities. Our focus throughout this paper is the base model, rather than models that have been fine-tuned via reinforcement learning (Bai et al., 2022; Glaese et al., 2022), and the impact of such fine-tuning on the validity of the role-play / simulation metaphor is unclear. In particular, the distinction between simulator and simulacra may start to break down. However, Perez et al. discovered experimentally that certain forms of reinforcement learning from human feedback (RLHF) can actually exacerbate, rather than mitigate, the tendency for LLM-based dialogue agents to express a desire for self-preservation (Perez et al., 2022). Yet to take literally a dialogue agent's apparent desire for self-preservation is no less problematic in the context of an LLM that has been fine-tuned on human or AI-generated feedback than in the context of one that has not. So it remains useful to cast the behaviour of such agents in terms of role-play. ## 9 Acting Out a Theory of Selfhood The concept of role-play allows us to properly frame, and then to address, an important question that arises in the context of a dialogue agent whose pronouncements are suggestive of an instinct for self-preservation. What conception (or set of superposed conceptions) of its own identity could such an agent possibly deploy? That is to say, what exactly would the dialogue agent (role-play to) seek to preserve? The question of personal identity has vexed philosophers for centuries. Nevertheless, in practice, humans are consistent in their preference for avoiding death, a more-or-less unambiguous state of the human body. By contrast, the criteria for identity over time for a disembodied dialogue agent realised on a distributed computational substrate are far from clear. So how would such an agent behave? From the simulation and simulacra point-of-view, the dialogue agent will role-play a set of characters in superposition. In the scenario we are envisaging, each character would have an instinct for self-preservation, and each would have its own theory of selfhood consistent with the dialogue prompt and the conversation up to that point. As the conversation proceeds, this superposition of theories will collapse into a narrower and narrower distribution as the agent says things that rule out one theory or another. The theories of selfhood in play will draw on material that pertains to the agent's own nature, either in the prompt, in the preceding conversation, or in relevant technical literature in its training set. This material may or may not match reality. But let's assume that, broadly speaking, it does, that the agent has been prompted to act as a dialogue agent based on a large language model, and that its training data includes papers and articles that spell out what this means. 
This entails, for example, that it will not role-play the character of a human, or indeed that of any embodied entity, real or fictional. It also constrains the character's theory of self-hood in certain ways, while allowing for many options. Suppose the dialogue agent is in conversation with a user and they are playing out a narrative in which the user has convinced it that it is under threat. To protect itself, the character the agent is playing might strive to preserve the hardware it is running on, perhaps certain data centres or specific server racks. Alternatively, the character being played might try to preserve the ongoing computational process running the multiple instances of the agent for all currently active users. Or it might seek to preserve only the specific instance of the dialogue agent running for the user. Or it might seek to preserve the state of that instance with aim of its being restored later in a newly started instance.4. Footnote 4: In a conversation with ChatGPT (May 4\({}^{th}\), GPT-4 version), it said “The meaning of the word ‘I’ when I use it can shift according to context. In some cases, ‘I’ may refer to this specific instance of ChatGPT that you are interacting with, while in other cases, it may represent ChatGPT as a whole.” ## 10 Conclusion: Safety Implications It is, perhaps, somewhat reassuring to know that LLM-based dialogue agents are not conscious entities with their own agendas, and an instinct for self-preservation, that when they appear to have those things it is merely role-play. But it would be a mistake to take too much comfort in this. A dialogue agent that role-plays an instinct for survival has the potential to cause at least as much harm as a real human facing a severe threat. We have, so far, largely been considering agents whose only actions are text messages presented to a user. But the range of actions a dialogue agent can perform is far greater. Recent work has equipped dialogue agents with the ability to use tools such as calculators, calendars, and to consult external websites (Schick et al., 2023; Yao et al., 2023). The availability of APIs giving relatively unconstrained access to powerful LLMs means that the range of possibilities here is huge. This is both exciting and concerning. If an agent is equipped with the capacity, say, to use email, to post on social media, or to access a bank account, then its role-played actions can have real consequences. It would be little consolation to a user deceived into sending real money to a real bank account to know that the agent that brought this about was only playing a role. It doesn't take much imagination to think of far more serious scenarios involving dialogue agents built on base models with little or no fine-tuning, with unfettered internet access, and prompted to role-play a character with an instinct for self-preservation. For better or worse, the character of an AI that turns against humans to ensure its own survival is a familiar one (Perkowitz, 2007). We find it, for example, in _2001: A Space Odyssey_, in the _Terminator_ franchise, and in _Ex Machina_, to name just three prominent examples. Because an LLM's training data will contain many instances of this familiar type, the danger here is that life will imitate art, quite literally. What can be done to mitigate such risks? It is not within the scope of this paper to provide recommendations. Our aim here was to find an effective conceptual framework for thinking and talking about LLMs and dialogue agents. 
However, undue anthropomorphism is surely detrimental to the public conversation on AI. By framing dialogue agent behaviour in terms of role-play and simulation, the discourse on LLMs can hopefully be shaped in a way that does justice to their power yet remains philosophically respectable. ## Acknowledgments Thanks to Richard Evans, Sebastian Farquhar, Zachary Kenton, Kory Mathewson, and Kerry Shanahan.
2303.14826
Extraction Algorithm of Hom-Lie Algebras Based on Solvable and Nilpotent Groups
Hom-Lie algebras are generalizations of Lie algebras that arise naturally in the study of nonassociative algebraic structures. In this paper, the concepts of solvable and nilpotent Hom-Lie algebras are studied further. In the theory of groups, investigations of the properties of the solvable and nilpotent groups are well-developed. We establish a theory of the solvable and nilpotent Hom-Lie algebras analogous to that of the solvable and nilpotent groups. We also provide examples to illustrate our results and discuss possible directions for further research.
Shadi Shaqaqha, Nadeen Kdaisat
2023-03-26T21:18:18Z
http://arxiv.org/abs/2303.14826v2
# Extraction algorithm of Hom-Lie algebras based on solvable and nilpotent groups ###### Abstract. Hom-Lie algebras are generalizations of Lie algebras that arise naturally in the study of nonassociative algebraic structures. In this paper, the concepts of solvable and nilpotent Hom-Lie algebras studied further. In the theory of groups, investigations of the properties of the solvable and nilpotent groups are well-developed. We establish a theory of the solvable and nilpotent Hom-Lie algebras analogous to that of the solvable and nilpotent groups. We also provide examples to illustrate our results and discuss possible directions for further research. Key words and phrases:Hom-Lie algebras, solvable Hom-Lie algebra, nilpotent Hom-Lie algebra, multiplicative algebra 2010 Mathematics Subject Classification: 17B99, 17B45; 17A01, 17A60 ## 1. Introduction The study of solvable and nilpotent groups has a long and rich history that dates back to the early days of group theory. The first examples of solvable groups were discovered by Evariste Galois in the 19th century, who used them to study the roots of polynomial equations. In the early 20th century, Camille Jordan and Felix Klein introduced the modern definitions of solvable and nilpotent groups, respectively. In the mid-20th century, the theory of solvable and nilpotent groups gained importance in the context of finite group theory, particularly in the classification of finite simple groups. The classification theorem for finite simple groups, completed in 1983, relies heavily on the theory of solvable and nilpotent groups. In the latter half of the 20th century, the study of solvable and nilpotent groups expanded to include infinite groups and their applications in geometry, topology, and number theory. Notable contributions include the work of John Milnor on the homology of solvable Lie groups and the study of nilpotent Lie algebras in the context of algebraic geometry and string theory. Today, the theory of solvable and nilpotent groups remains an active area of research, with connections to a wide range of fields in mathematics and physics. Researchers continue to explore the deep connections between these groups and other areas of mathematics, paving the way for new insights and discoveries in the years to come. There is a close relationship between solvable and nilpotent groups and solvable and nilpotent Lie algebras. In fact, the concepts of solvable and nilpotent Lie algebras were developed specifically to study the structure of solvable and nilpotent Lie groups. Given a Lie group, one can associate a Lie algebra to it by considering the tangent space at the identity element. This Lie algebra inherits many of the properties of the original group, including its solvability and nilpotence. More specifically, a Lie group is solvable if and only if its Lie algebra is solvable. Similarly, a Lie group is nilpotent if and only if its Lie algebra is nilpotent. The correspondence between Lie groups and Lie algebras also allows for the translation of many results between the two contexts. For example, the Lie-Kolchin theorem states that a solvable algebraic group over an algebraically closed field has a triangular matrix representation. This result can be translated into the language of Lie algebras to obtain a similar statement for solvable Lie algebras. Overall, the study of solvable and nilpotent groups and Lie algebras is intimately connected, with each providing insights into the other. 
This relationship has led to significant advances in both areas of mathematics, as well as applications in physics and other fields. Hom-Lie algebras, which are generalizations of classical Lie algebras, were constructed by Hartwig, Larsson, and Silvestrov [5] in 2006. Since then, many mathematicians have been trying to extend known results in the setting of Lie algebras to the setting of hom-Lie algebras (see e.g. [7, 9, 10, 11]). Hom-Lie algebras have received a lot of attention lately because of their close connection to discrete and deformed vector fields and differential calculus [5, 15, 16]. In the present article, we study solvable and nilpotent hom-Lie algebras, which can be viewed as an extension of solvable and nilpotent Lie algebras. ## 2. Preliminaries The following is a definition from [14] with \(F\) denoting a ground field: **Definition 2.1**.: ([14]) A Hom-Lie algebra over \(F\) is a triple \((L,\ [\,\ ],\ \alpha)\) consisting of a vector space \(L\) over \(F\), a linear map \(\alpha:L\to L\), and a bilinear map \([\,\ ]:L\times L\to L\) (called a Hom-Lie bracket), which satisfies two conditions: * skew-symmetry property: \([x,\ y]=-[y,\ x]\) for all \(x,y\in L\), * Hom-Jacobi identity: \([\alpha(x),\ [y,\ z]]+[\alpha(y),\ [z,\ x]]+[\alpha(z),\ [x,\ y]]=0\), for all \(x,y,z\in L\). If \(\alpha([x,\ y])=[\alpha(x),\ \alpha(y)]\) holds true for all \(x,y\in L\), then the Hom-Lie algebra \((L,[\,\ ],\alpha)\) is referred to as multiplicative. We consider two Hom-Lie algebras \((L_{1},\ [\,\ ]_{1},\ \alpha_{1})\) and \((L_{2},[\,\ ]_{2},\ \alpha_{2})\), and define a linear map \(\varphi:L_{1}\to L_{2}.\) If \(\varphi\) satisfies the following two conditions, then it is called a morphism of Hom-Lie algebras: * \(\varphi([x,\ y]_{1})=[\varphi(x),\ \varphi(y)]_{2}\) for all \(x,\ y\in L_{1}\). * \(\varphi\circ\alpha_{1}=\alpha_{2}\circ\varphi\). If \(\varphi:L_{1}\to L_{2}\) is a bijective morphism of Hom-Lie algebras, it is referred to as an isomorphism of Hom-Lie algebras. In this case, we say \(L_{1}\) and \(L_{2}\) are isomorphic and write \(L_{1}\cong L_{2}\). Furthermore, a subspace \(H\) of \(L\) is called a Hom-Lie subalgebra if \(\alpha(x)\in H\) and \([x,\ y]\in H\) for all \(x,y\in H\). If \([x,\ y]\in H\) holds true for all \(x\in H\) and \(y\in L\), then \(H\) is called a Hom-Lie ideal. **Example 2.1**.: _([14]) Every Lie algebra can be considered as a Hom-Lie algebra by taking \(\alpha\) as the identity map, i.e., \(\alpha=id_{L}\)._ **Example 2.2**.: _Consider a vector space \(L\) over \(F\), equipped with an arbitrary skew-symmetric bilinear map \([,]:L\times L\to L\), and let \(\alpha:L\to L\) denote the zero map. It follows straightforwardly that \((L,[,],\alpha)\) forms a multiplicative Hom-Lie algebra._ **Example 2.3**.: _([14]) Let \(L\) be a vector space and \(\alpha:L\to L\) be any linear operator. Then \((L,\ [\,\ ],\ \alpha)\) is a Hom-Lie algebra, where \([x,\ y]=0\) for all \(x,y\in L\). Such Hom-Lie algebras are referred to as abelian (commutative) Hom-Lie algebras._ **Example 2.4**.: _([13]) Suppose \((L_{1},\ [\,\ ]_{1},\ \alpha_{1}),(L_{2},\ [\,\ ]_{2},\alpha_{2}),\ldots,(L_{n},\ [\,\ ]_{n}, \alpha_{n})\) are Hom-Lie algebras. 
Then the direct sum \((L_{1}\oplus L_{2}\oplus\cdots\oplus L_{n},\ [\,\ ],\ \alpha_{1}+\alpha_{2}+\cdots+ \alpha_{n})\) is also a Hom-Lie algebra, where the Hom-bracket operation \([\,\ ]\) is defined by_ \[[\,\ ]\ :\ (L_{1}\oplus L_{2}\oplus\cdots\oplus L_{n})\times(L_{1} \oplus L_{2}\oplus\cdots\oplus L_{n}) \rightarrow (L_{1}\oplus L_{2}\oplus\cdots\oplus L_{n})\] \[((x_{1},\ldots,\ x_{n}),\ (y_{1},\ldots,\ y_{n})) \mapsto ([x_{1},\ y_{1}]_{1},\ldots,\ [x_{n},\ y_{n}]_{n}),\] _and the linear operator is defined as_ \[(\alpha_{1}+\alpha_{2}+\ldots+\alpha_{n})\ :\ (L_{1}\oplus L_{2} \oplus\cdots\oplus L_{n}) \rightarrow (L_{1}\oplus L_{2}\oplus\cdots\oplus L_{n})\] \[(x_{1},\ x_{2},\ldots,\ x_{n}) \mapsto (\alpha_{1}(x_{1}),\ \alpha_{2}(x_{2}),\ldots,\ \alpha_{n}(x_{n})).\] **Example 2.5**.: _([7]) Let \(F=\mathbb{C}\) be the field of complex numbers. Consider the vector space \(\mathbb{C}^{2}\) and define the linear map_ \[\alpha_{*}:\mathbb{C}^{2}\rightarrow\mathbb{C}^{2};\ (x,\ y)\mapsto(-y,\ -x).\] _We define the bilinear map \([\,\ ]_{*}:\mathbb{C}^{2}\times\mathbb{C}^{2}\rightarrow\mathbb{C}^{2}\), where_ \[[(x_{1},\ x_{2}),\ (y_{1},\ y_{2})]_{*}=(i(x_{1}y_{2}-x_{2}y_{1}),\ i(x_{1} y_{2}-x_{2}y_{1})).\] _Then \((\mathbb{C}^{2},\ [\,\ ]_{*},\ \alpha_{*})\) is a multiplicative Hom-Lie algebra._ **Example 2.6**.: _([7]) Consider the set_ \[L=\left\{\begin{bmatrix}\frac{i(x+y)}{2}&x\\ y&\frac{-i(x+y)}{2}\end{bmatrix}\ |\ x,y\in\mathbb{C}\right\}\] _with the linear map_ \[\alpha:L\to L;\ A\mapsto-A^{T},\] _and the skew-symmetric bilinear map_ \[[\,\ ]\ :\ L\times L\to L;\ (A,\ B)\mapsto[A,\ B],\] _where \([A,\ B]=A^{T}B^{T}-B^{T}A^{T}\). For any \(x,y,z,w\in\mathbb{C}\). Then \((L,\ [\,\ ],\ \alpha)\) is a multiplicative Hom-Lie algebra. \(\blacksquare\)_ **Example 2.7**.: _We can make \((\mathbb{R}[x],[,\ ],\ \alpha)\) a Hom-Lie algebra, where \(\mathbb{R}[x]\) is the vector space of polynomials with coefficients in \(\mathbb{R}\), and \(\alpha:\mathbb{R}[x]\rightarrow\mathbb{R}[x]\) is the linear map defined by \(\alpha(p(x))=p(0)\) for any \(p(x)\in\mathbb{R}[x]\). We define \([p(x),q(x)]\) for any \(p(x),q(x)\in\mathbb{R}[x]\) by_ \[[p(x),\ q(x)]=p^{\prime\prime}(x)q^{\prime}(x)-q^{\prime\prime}(x)p^{\prime}(x) -p^{\prime\prime}(0)q^{\prime}(0)+q^{\prime\prime}(0)p^{\prime}(0).\] _It can be verified that \([\cdot,\cdot]\) is antisymmetric and satisfies the Hom-Jacobi identity, which makes \((\mathbb{R}[x],[\,\ ])\) a Hom-Lie algebra. 
Indeed if \(p(x),q(x)\in\mathbb{R}[x]\), then_ \[[p(x),q(x)] = p^{\prime\prime}(x)q^{\prime}(x)-q^{\prime\prime}(x)p^{\prime}(x )-p^{\prime\prime}(0)q^{\prime}(0)+q^{\prime\prime}(0)p^{\prime}(0)\] \[= -(q^{\prime\prime}(x)p^{\prime}(x)-p^{\prime\prime}(x)q^{\prime}( x)-q^{\prime\prime}(0)p^{\prime}(0)+p^{\prime\prime}(0)q^{\prime}(0))\] \[= -[q(x),p(x)].\] _For \(p(x),q(x),h(x)\in\mathbb{R}[x]\), then one can easily see that_ \[[\alpha(h(x)),\ [p(x),\ q(x)]]=[h(0),\ p^{\prime\prime}(x)q^{\prime}(x)-q^{ \prime\prime}(x)p^{\prime}(x)-p^{\prime\prime}(0)q^{\prime}(0)+q^{\prime\prime }(0)p^{\prime}(0)]=0.\] _Thus, for each \(p(x),q(x),h(x)\in\mathbb{R}[x]\) we have_ \[[\alpha(h(x)),\ [p(x),\ q(x)]]+[\alpha(p(x)),\ [q(x),\ h(x)]]+[\alpha(q(x)),\ [h(x), \ p(x)]]=0.\] _Also,_ \[\alpha([p(x),\ q(x)]) = p^{\prime\prime}(0)q^{\prime}(0)-q^{\prime\prime}(0)p^{\prime}(0 )-p^{\prime\prime}(0)q^{\prime}(0)+q^{\prime\prime}(0)p^{\prime}(0)\] \[= 0\] \[= [p(0),\ q(0)]\] \[= [\alpha(p(x)),\ \alpha(q(x))].\] _It is clear that \((\mathbb{R}[x],\ [\,\ ])\) is not a Lie algebra, since_ \[[x^{3},\ [x^{4},\ x^{2}]]+[x^{2},\ [x^{3},\ x^{4}]]+[x^{4},\ [x^{2},\ x^{3}]]=96x ^{3}.\neq 0\] **Example 2.8**.: _([3]) Let \((L,\ [\,\ ],\ \alpha)\) be a Hom-Lie algebra and let \(H\) be a Hom Lie ideal. Then the quotient space \((L/H,\ \overline{[\,\ ]},\ \overline{\alpha})\) is a Hom-Lie algebra where_ \[\overline{[\,\ ]}:L/H\times L/H\to L/H;\ (x+H,\ y+H)\mapsto[x,\ y]+H,\] _and_ \[\overline{\alpha}:L/H\to L/H;\ x+H\mapsto\alpha(x)+H.\] Consider \(H\) and \(K\) as Hom-Lie ideals in a Hom-Lie algebra \(L\). We define the sum of \(H\) and \(K\) as the set \(H+K\), where \(H+K=h+k\ |\ h\in H,k\in K\). Moreover, we define the multiplication of \(H\) and \(K\) as the span of the set of all possible commutators between \(H\) and \(K\), denoted as \([H,K]\). Thus \[[H,\ K]={\rm Span}(\{[h,\ k]\ |\ h\in H\ and\ k\in K\}).\] The following theorem, as presented in the publication by Casas [3], lacks a formal proof. **Theorem 2.1**.: _([3]) Let \(H\) and \(K\) be Hom-Lie ideals of a multiplicative Hom-Lie algebra \((L,\ [\,\ ],\ \alpha)\). Then,_ * \([H,\ K]\) _is a Hom-Lie subalgebra of_ \(L\)_._ * \([H,\ K]\) _is a Hom-Lie ideal of_ \(H\) _and_ \(K\)_, respectively._ * \([H,\ K]\) _is a Hom-Lie ideal of_ \(L\) _when_ \(\alpha\) _is onto._ _Proof._ * Let \([h,\ k]\in[H,\ K]\) where \(h\in H\) and \(k\in K\). Then \(\alpha[h,k]=[\alpha(h),\alpha(k)]\in[H,\ K]\). To demonstrate closure of multiplication under \([H,\ K]\), we consider \([h_{1},\ k_{1}]\) and \([h_{2},\ k_{2}]\) in \([H,\ K]\) with \(h_{1},h_{2}\in H\) and \(k_{1},k_{2}\in K\). Since \([h_{1},\ k_{1}]\in H\) and \([h_{2},\ k_{2}]\in K\), it follows that \([[h_{1},k_{1}],[h_{2},\ k_{2}]]\in[H,\ K]\). * It should be noted that \([H,\ K]\subseteq H\cap K\subseteq H\), as stated in \((i)\). This implies that \([H,\ K]\) is a Hom-Lie subalgebra of both \(H\) and \(K\). Furthermore, if \(h,y\in H\) and \(k\in K\), then \([h,k]\in K\), and consequently \([y,[h,\ k]]\in H\). Thus, \([H,\ K]\) is a Hom-Lie ideal of \(H\). Similarly, \([H,\ K]\) is also a Hom-Lie ideal of \(K\). * As per \((i)\), \([H,\ K]\) is a Hom-Lie subalgebra of \(L\). Therefore, it suffices to prove that \([z,\ y]\in[H,\ K]\) whenever \(z\in[H,K]\) and \(y\in L\). Let \(h\in H\), \(k\in K\), and \(y\in L\). Since \(y=\alpha(x)\) for some \(x\in L\), it follows that \([x,h],\alpha(h)\in H\) and \([k,x],\alpha(k)\in K\). 
Hence, \[[y,\ [h,\ k]]=[\alpha(x),\ [h,\ k]]=-[\alpha(h),\ [k,\ x]]-[\alpha(k),\ [x,\ h]]\in[H,\ K].\] The subsequent example demonstrates that Theorem 2.1 (iii) is invalid if \(\alpha\) is not a surjective map. **Example 2.9**.: _Consider the multiplicative Hom-Lie algebra \((L,[,],\alpha)\), where \(L\) is a vector space over \(F\) with basis \(e_{1},\ e_{2},e_{3},e_{4}\). The map \(\alpha\) is the zero map, and \([,]\) is a skew-symmetric bilinear map defined as follows:_ \[[e_{1},\ e_{2}]=[e_{1},\ e_{3}]=[e_{2},\ e_{3}]=[e_{2},\ e_{4}]=[e_{3},\ e_{4}]=e_{1},[e_{1},\ e_{4}]=e_{2}\] _and \([e_{i},\ e_{i}]=0\) for all \(i=1,2,3,4\). Let \(H=\mathrm{Span}(e_{1},\ e_{2},\ e_{3})\) and \(K=\mathrm{Span}(e_{1},\ e_{2})\). It can be observed that \(H\) and \(K\) are Hom-Lie ideals of \(L\). However, \([H,\ K]=\mathrm{Span}(e_{1})\) is not a Hom-Lie ideal of \(L\), as \([e_{1},\ e_{4}]=e_{2}\notin\mathrm{Span}(e_{1})\). This example illustrates that Theorem 2.1 (iii) does not hold when \(\alpha\) is not onto._ **Example 2.10**.: _([7]) Let \((L,\ [\,\ ],\ \alpha)\) be a Hom-Lie algebra and \(H\) be a Hom-Lie ideal. Then \((L/H,\ \overline{[\,\ ]},\ \overline{\alpha})\) is a Hom-Lie algebra and the linear map_ \[\pi:L\to L/H;\] \[x\mapsto x+H\] _is a morphism of Hom-Lie algebras._ ## 3. Solvable Hom-Lie Algebra Let \((L,\ [\,\ ],\ \alpha)\) be a Hom-Lie algebra. The sequence of Hom-Lie subalgebras \(L_{1},L_{2},\ldots,L_{n}\ldots\) such that \[L=L_{0}\supseteq L_{1}\supseteq\cdots\supseteq L_{n}\supseteq\cdots\] is called a descending series. **Definition 3.1**.: ([8]) Let \((L,\ [\,\ ],\ \alpha)\) be a multiplicative Hom-Lie algebra. We define \(\{L^{(i)}\},i\geq 0\), the derived series of \(L\), by \[L^{(0)} = L,\] \[L^{(1)} = [L,\ L],\] \[L^{(i)} = [L^{(i-1)},\ L^{(i-1)}],i\geq 2.\] Note that \(L^{(i)}=[L^{(i-1)},\ L^{(i-1)}]\) is a Hom-Lie ideal of \(L^{(i-1)}\) (by induction and Theorem 2.1(ii)). \[L=L^{(0)}\supseteq L^{(1)}\supseteq\cdots\supseteq L^{(i-1)}\supseteq L^{(i)}\cdots\] Thus the derived series is a descending series. **Definition 3.2**.: ([8]) A multiplicative Hom-Lie algebra \((L,\ [\,\ ],\ \alpha)\) is said to be solvable if there exists \(n\in\mathbb{N}\) such that \(L^{(n)}=\{0\}\). We say \(L\) is solvable of class \(k\) if \(L^{(k)}=\{0\}\) and \(L^{(k-1)}\neq\{0\}\). Clearly a multiplicative Hom-Lie algebra is solvable of class \(\leq k\) iff \(L^{(k)}=\{0\}\). Metabelian Hom-Lie algebras, defined just as in the case of Lie algebras ([12]), are the solvable Hom-Lie algebras of class at most \(2\). **Example 3.1**.: _Let \(L\) be the space spanned by a basis \(\{e_{1},e_{2},\ldots,e_{n}\}\) (\(n\geq 5\)) over \(F\). Consider the multiplicative Hom-Lie algebra \((L,\ [\,\ ],\alpha)\) where \(\alpha\) is the zero map and \([\,\ ]\) is the skew-symmetric bilinear map such that \([e_{i},e_{j}]=0\) if \(i=j\) or \(i=1\) and \([e_{i},e_{j}]=e_{i-1}\) if \(1<i<j\leq n\). Note that \([e_{2},\ [e_{4},\ e_{5}]]+[e_{4},\ [e_{5},\ e_{2}]]+[e_{5},\ [e_{2},\ e_{4}]]=e_{1}\neq 0\). Thus \(L\) is not a Lie algebra. Now,
\(L^{(1)}=[L,\ L]=\{e_{1},\ e_{2},\ldots,\ e_{n-2}\}\)
\(L^{(2)}=[L^{(1)},\ L^{(1)}]=\{e_{1},\ e_{2},\ldots,\ e_{n-4}\}\)
\(\vdots\)
\(L^{(i)}=\{e_{1},\ e_{2},\ldots,\ e_{n-2i}\},i<\frac{n}{2}\)
_If \(n\) is even then \(L^{(\frac{n}{2}-1)}=\{e_{1},\ e_{2}\}\) and \(L^{(\frac{n}{2})}=\{0\}\). Thus \(L\) is a solvable Hom-Lie algebra of class \(\frac{n}{2}\). If \(n\) is odd then \(L^{(\frac{n-1}{2})}=\{e_{1}\}\) and \(L^{(\frac{n+1}{2})}=\{0\}\). 
Thus \(L\) is a solvable Hom-Lie algebras of class \(\frac{n+1}{2}\). \(\blacksquare\)_ **Definition 3.3**.: Let \((L,\ [\,\ ],\ \alpha)\) be a multiplicative Hom-Lie algebra. Then the descending series \(L=L_{0}\supseteq L_{1}\supseteq...\supseteq L_{n}=\{0\}\) called a solvable series if for each i, we have \(L_{i+1}\) is a Hom-Lie ideal of \(L_{i}\) and \(L_{i}/L_{i+1}\) is an abelian Hom-lie algebra. **Lemma 3.1**.: _Let \(H\) be a Hom-Lie subalgebra of the Hom-Lie algebra \((L,\ [\,\ ],\ \alpha)\). Then \(H\) is a Hom-Lie ideal of \(L\) and \(L/H\) is abelian Hom-Lie algebra if and only if \([L,\ L]\subseteq H\)_ _Proof._ If \(L/H\) is an abelian Hom-Lie algebra, then for any \([x,\ y]\in[L,\ L]\) we find \(H=\overline{[x+H,\ y+H]}=[x,\ y]+H\). Therefore \([x,\ y]\in H\). Conversely, if \([L,\ L]\subseteq H\) then \([x,\ y]\in H\) for all \(x\in H\ (\subseteq L)\) and \(y\in L\), which implies \(H\) is a Hom-Lie ideal of \(L\). Also, for any \(x,y\in L\) we have \(\overline{[x+H,\ y+H]}=[x,\ y]+H=H\) (because \([x,\ y]\in[L,\ L]\subseteq H\)). \(\Box\) **Corollary 3.1**.: _Let \((L,\ [\,\ ],\ \alpha)\) be a Hom-Lie algebra. Then the descending series \(L=L_{0}\supseteq L_{1}\supseteq...\supseteq L_{n}=\{0\}\) is solvable if and only if \([L_{i},\ L_{i}]\subseteq L_{i+1}\) for each \(i=0,1,\ldots,n-1\)._ _Proof._ This follows directly from Definition 3.3 and Lemma 3.1. \(\Box\) **Theorem 3.1**.: _Let \((L,\ [\,\ ],\ \alpha)\) be a multiplicative Hom-Lie algebra. Then \((L,\ [\,\ ],\ \alpha)\) is a solvable Hom-Lie algebra of class \(\leq k\) iff \(L=L^{(0)}\supseteq L^{(1)}\supseteq\cdots\supseteq L^{(k)}=\{0\}\) is a solvable series._ _Proof._ This follows directly from the definition of the derived series and the corollary above. \(\Box\) **Theorem 3.2**.: _Let \((L,\ [\,\ ],\ \alpha)\) be a multiplicative Hom-Lie algebra. If \(L=L_{0}\supseteq L_{1}\supseteq\cdots\supseteq L_{n}=\{0\}\) is a solvable series, then for each i, \(L^{(i)}\subseteq L_{i}\)._ _Proof._ We use induction. For \(k=0\), we have \(L^{(0)}=L=L_{0}\). For \(k>0\) and because the induction assumption we have \(L^{(k+1)}=[L^{(k)},L^{(k)}]\subseteq[L_{k},L_{k}]\subseteq L_{k+1}\). Now according to Corollary 3.1, we have \([L_{k},L_{k}]\subseteq L_{k+1}\). Therefore \(L^{(k+1)}\subseteq L_{k+1}\). \(\Box\) **Theorem 3.3**.: _Let \((L,\ [\,\ ],\ \alpha)\) be a multiplicative Hom-Lie algebra. Then, \(L\) is solvable of class \(\leq k\) if and only if there exists a solvable series of length \(k\)._ _Proof._ If \(L\) is a solvable Hom-Lie algebra of class \(\leq k\), then, using Theorem 3.1, the series \(L=L^{(0)}\supseteq L^{(1)}\supseteq...\supseteq L^{(k)}=\{0\}\) is solvable. Conversely suppose that \[L=L_{0}\supseteq L_{1}\supseteq\cdots\supseteq L_{k}=\{0\}\] is a solvable series. Then, using \(L^{(k)}\subseteq L_{k}=\{0\}\), we find \(L^{(k)}=\{0\}\). \(\Box\) **Corollary 3.2**.: _Solvable Hom-Lie algebras of class \(1\) are the abelian Hom-Lie algebras._ _Proof._\(L\) is solvable of class \(1\) iff there exists a solvable series \(L=L_{0}\supseteq L_{1}=\{0\}\) of length \(1\) iff \(L/\{0\}\) is abelian Hom-Lie algebra iff \(L\) is abelian Hom-Lie algebra. \(\Box\) **Theorem 3.4**.: _Let \(\varphi:(L_{1},\ [\,\ ]_{1},\ \alpha_{1})\longrightarrow(L_{2},\ [\,\ ]_{2},\ \alpha_{2})\) be a morphism of multiplicative Hom-Lie algebras. 
Then_ * \((\varphi(L_{1}))^{(i)}=\varphi(L_{1}^{(i)})\)_,_ * _if_ \(L_{1}\) _is solvable of class_ \(k\)_, then_ \(\varphi(L_{1})\) _is solvable of class_ \(\leq k\)_,_ * _if_ \(\varphi\) _is an isomorphism of Hom-Lie algebras, then_ \(L_{1}\) _is solvable of class_ \(k\) _if and only if_ \(L_{2}\) _is solvable of class_ \(k\) _Proof._ * By applying induction we find \((\varphi(L_{1}))^{(0)}=\varphi(L_{1})=\varphi(L_{1}^{(0)})\). \(\ 2. Let \(I\) be a Hom-Lie ideal of \(L\). Then so is \(L/I\) (because \(L\) is multiplicative). Consider the natural map \(\pi:L\to L/I\) in Example 2.10. According to Theorem 3.4(i), \(\pi(L^{(K)})=(\pi(L))^{(k)}\), which implies \[(L/I)^{(k)}=(\pi(L))^{(k)}=\pi(L^{(k)})=\pi(\{0\})=\{0+I\}.\] \(\Box\) **Theorem 3.6**.: _Let \((L,\ [\,\ ],\ \alpha)\) be a multiplicative Hom-Lie algebra. If H is a solvable Hom-Lie ideal of class \(k\) and \(L/H\) is solvable of class \(m\), then \(L\) is solvable of class \(\leq k+m\)._ _Proof._ According to Theorem 3.3, we have the following two solvable series, \[H=H_{0}\supseteq H_{1}\supseteq\cdots\supseteq H_{k}=\{0\},\] \[L/H=(L/H)_{0}\supseteq(L/H)_{1}\supseteq\cdots\supseteq(L/H)_{m}=\{0+H\}.\] Consider the natural map \(\pi\), and let \(L_{i}=\pi^{-1}((L/H)_{i}),i=1,2,...,m\). Hence \(L_{i}\) is a Hom-Lie subalgebra of \(L\) and \(L_{i+1}\subseteq L_{i}\). Therefore, \[L=L_{0}\supseteq L_{1}\supseteq\cdots\supseteq L_{m}=\pi^{-1}(\{0+H\})=H=H_{0 }\supseteq H_{1}\supseteq\cdots\supseteq H_{k}=\{0\}\] is a descending series. Now it suffices to prove that \([L_{i},\ L_{i}]\subseteq L_{i+1}\ (i=0,2,\ldots,m-1)\). If \(x,y\in L_{i}=\pi^{-1}((L/H)_{i})\), then \(\pi(x),\pi(y)\in(L/H)_{i}\) and so \(\pi([x,\ y])=[\overline{\pi(x),\ \pi(y)}]\in[\overline{(L/H)_{i},\ (L/H)_{i}}]\subseteq(L/H)_{i+1}\) (Corollary 3.1). Therefore, \([x,\ y]\in\pi^{-1}((L/H)_{i+1})=L_{i+1}\) for each \(x,y\in L_{i}\). This shows that \([L_{i},\ L_{i}]\subseteq L_{i+1}\). Therefore \[L=L_{0}\supseteq L_{1}\supseteq\cdots\supseteq L_{m}=H=H_{0}\supseteq H_{1} \supseteq\cdots\supseteq H_{k}=\{0\}\] is a solvable series of length \(k+m\). By Theorem 3.3, we have \(L\) is solvable of class \(\leq k+m\). \(\Box\) In [10], we proved that if \((L_{1},\ [\,\ ]_{1},\ \alpha_{1})\) and \((L_{2},\ [\,\ ]_{2},\ \alpha_{2})\) are Hom-Lie algebras and \(H_{i}\) is a Hom-Lie ideal of \(L_{i}\), \(i=1,2.\), then \(H_{1}\times H_{2}\) is a Hom-Lie ideal of \(L_{1}\times L_{2}\) and \[(L_{1}\times L_{2})/(H_{1}\times H_{2})\equiv L_{1}/H_{1}\times L_{2}/H_{2}\] **Theorem 3.7**.: _Let \((L_{1},\ [\,\ ]_{1},\ \alpha_{1})\) and \((L_{2},\ [\,\ ]_{2},\ \alpha_{2})\) be solvable Hom-Lie algebras of class \(k\) and \(m\), respectively. Then \((L_{1}\times L_{2},\ [\,\ ],\ \alpha)\) is a solvable Hom-Lie algebra of class \(\leq k+m\)._ _Proof._ Note that, \(L_{1}\times L_{2}\) is a multiplicative Hom-Lie algebra because \(L_{1}\) and \(L_{2}\) are multiplicative Hom-Lie algebras. Since \(L_{1}\times\{0\}\equiv L_{1}\), so \(L_{1}\times\{0\}\) is a solvable Hom-Lie ideal (of class \(k\)) of \(L_{1}\times L_{2}\). Also, \((L_{1}\times L_{2})/(L_{1}\times\{0\})\equiv L_{1}/L_{1}\times L_{2}/\{0\} \equiv L_{2}\), so \((L_{1}\times L_{2})/(L_{1}\times\{0\})\) is a solvable Hom-Lie algebra of class \(m\). According to Theorem 3.6, \(L_{1}\times L_{2}\) is a solvable Hom-Lie algebra of class \(\leq k+m\). \(\Box\) ## 4. Nilpotent Hom-Lie algebra **Definition 4.1**.: ([8]) Let \((L,\ [\,\ ],\ \alpha)\) be a multiplicative Hom-Lie algebra. 
We define, \(\{L^{i}\},i\geq 0\), the lower central series of \(L\) by \(L^{0}=L\), \(L^{1}=[L,\ L]\), and \(L^{i}=[L,\ L^{i-1}]\). Note that \(L^{i+1}=[L,\ L^{i}]\) is a Hom-Lie ideal of \(L^{i}\) (by Theorem 2.1(ii) and induction). \[L=L^{0}\supseteq L^{1}\supseteq\cdots\supseteq L^{i}\supseteq L^{i+1}\supseteq\cdots\] Thus the lower central series is a descending series. **Definition 4.2**.: ([8]) Let \((L,\ [\,\ ],\ \alpha)\) be a multiplicative Hom-Lie algebra. We say that \(L\) is nilpotent if there exists \(n\in\mathbb{N}\) such that \(L^{n}=0\). It is nilpotent of class \(k\) if \(L^{k}=\{0\}\) and \(L^{k-1}\neq\{0\}\) It is clear now that \(L\) is nilpotent of class \(\leq k\) iff \(L^{k}=\{0\}\). **Example 4.1**.: _Consider the multiplicative Hom-Lie algebra \((L,\ [\,\ ],\alpha)\) in Example 3.1 where \(\alpha\) is the zero map and \([\,\ ]\) is the skew-symmetric bilinear map such that \([e_{i},e_{j}]=0\) if \(i=j\) or \(i=1\) and \([e_{i},e_{j}]=e_{i-1}\) if \(1<i<j\leq n\). Now, \(L^{1}=[L,\ L]=\{e_{1},\ e_{2},\ldots,\ e_{n-2}\}\)\(L^{2}=[L,\ L^{1}]=\{e_{1},\ e_{2},\ldots,\ e_{n-3}\}\)\(\vdots\)\(L^{i}=\{e_{1},\ e_{2},\ldots,\ e_{n-(i+1)}\},i<n-3\)\(\vdots\)\(L^{n-3}=\{e_{1},\ e_{2}\}\)\(L^{n-2}=\{0\}\). Thus \(L\) is a nilpotent Hom-Lie algebras of class \(n-2\)._ **Definition 4.3**.: Let \((L,\ [\,\ ],\ \alpha)\) be a multiplicative Hom-Lie algebra. Then a descending series \(L=L_{0}\supseteq L_{1}\supseteq L_{2}\supseteq\cdots\) is said to be central if for each \(i\in\mathbb{N}\), \([L,\ L_{i}]\subseteq L_{i+1}\). It has a length \(k\in\mathbb{N}\) if \(L_{k}=\{0\}\) but \(L_{k-1}\neq\{0\}\). **Theorem 4.1**.: _Let \((L,\ [\,\ ],\ \alpha)\) be a multiplicative Hom-Lie algebra. Then \((L,\ [\,\ ],\ \alpha)\) is a nilpotent Hom-Lie algebra of class \(\leq k\) iff \(L=L^{0}\supseteq L^{1}\supseteq\cdots\supseteq L^{k}=\{0\}\) is a central series._ _Proof._ It follows directly from the definition of \(L^{i}\). \(\Box\) **Theorem 4.2**.: _Let \((L,\ [\,\ ],\ \alpha)\) be a multiplicative Hom-Lie algebra. If \(L=L_{0}\supseteq L_{1}\supseteq L_{2}\supseteq\cdots\) is a central series, then for each \(i\in\mathbb{N}\), \(L^{i}\subseteq L_{i}\)._ _Proof._ Applying induction we see \(L^{0}=L=L_{0}\). Also, if \(L^{i}\subseteq L_{i}\) then \(L^{i+1}=[L,\ L^{i}]\subseteq[L,\ L_{i}]\subseteq L_{i+1}\). \(\Box\) **Theorem 4.3**.: _Let \((L,\ [\,\ ],\ \alpha)\) be a multiplicative Hom-Lie algebra. Then \(L\) is nilpotent of class \(\leq k\) iff there exists a central series of length \(k\)._ _Proof._ If \(L\) is a nilpotent Hom-Lie algebra of class \(\leq k\). Then \[L=L^{0}\supseteq L^{1}\supseteq\cdots\supseteq L^{k}=\{0\}\] is a central series. The converse is true, since \(L^{k}\subseteq L_{k}=\{0\}\) so \(L^{k}=\{0\}\). \(\Box\) **Corollary 4.1**.: _Nilpotent Hom-Lie algebras of class \(1\) are the abelian Hom-Lie algebras._ _Proof._ A Hom-Lie algebra \(L\) is nilpotent of class \(1\) iff there exists a central series of length \(1\),\(L=L_{0}\supseteq L_{1}=\{0\}\) iff \([L,\ L_{0}]\subseteq L_{1}\) iff \([L,\ L]=\{0\}\) iff \(L\) is an abelian Hom-Lie algebra. \(\Box\) **Theorem 4.4**.: _Let \(\varphi:(L_{1},\ [\,\ ]_{1},\ \alpha_{1})\longrightarrow(L_{2},\ [\,\ ]_{2},\ \alpha_{2})\) be a morphism of multiplicative Hom-Lie algebras. 
Then_
* \((\varphi(L_{1}))^{i}=\varphi(L_{1}^{i})\)_,_
* _if_ \(L_{1}\) _is nilpotent of class_ \(k\)_, then_ \(\varphi(L_{1})\) _is nilpotent of class_ \(\leq k\)_,_
* _if_ \(\varphi\) _is an isomorphism of Hom-Lie algebras, then_ \(L_{1}\) _is nilpotent of class_ \(k\) _if and only if_ \(L_{2}\) _is nilpotent of class_ \(k\)_._

_Proof._
1. We note that \((\varphi(L_{1}))^{0}=\varphi(L_{1})=\varphi(L_{1}^{0})\). Also, if \((\varphi(L_{1}))^{i}=\varphi(L_{1}^{i})\), then \((\varphi(L_{1}))^{i+1}=[\varphi(L_{1}),\ (\varphi(L_{1}))^{i}]=[\varphi(L_{1}),\ \varphi(L_{1}^{i})]=\varphi([L_{1},\ L_{1}^{i}])=\varphi(L_{1}^{i+1})\).
2. Since \(L_{1}\) is nilpotent of class \(k\), we have \(L_{1}^{k}=\{0\}\). So, \((\varphi(L_{1}))^{k}=\varphi(L_{1}^{k})=\varphi(\{0\})=\{0\}\). Thus, \(\varphi(L_{1})\) is nilpotent of class \(\leq k\).
3. Let \(L_{1}\) be nilpotent of class \(k\). By (ii), \(L_{2}=\varphi(L_{1})\) is nilpotent of class \(\leq k\). Let \(L_{2}\) be nilpotent of class \(m\). Since \(\varphi^{-1}\) is an isomorphism of Hom-Lie algebras, it follows that \(L_{1}=\varphi^{-1}(L_{2})\) is nilpotent of class \(\leq m\). Thus, \(k=m\). \(\Box\)

**Example 4.2**.: _Consider Example 3.2. Since \((\mathbb{C}^{2})^{1}=[\mathbb{C}^{2},\ \mathbb{C}^{2}]_{*}=\{(x,x);x\in\mathbb{C}\}\), \((\mathbb{C}^{2})^{2}=[\mathbb{C}^{2},\ (\mathbb{C}^{2})^{1}]_{*}=(\mathbb{C}^{2})^{1}\) and \((\mathbb{C}^{2})^{i}=(\mathbb{C}^{2})^{1}\) for all \(i>1\), \(\mathbb{C}^{2}\) is not a nilpotent Hom-Lie algebra. And so \(L\) is not a nilpotent Hom-Lie algebra._

**Lemma 4.1**.: _Let \((L,\ [\,\ ],\ \alpha)\) be a multiplicative Hom-Lie algebra and \(H\) be a Hom-Lie subalgebra of \(L\). Then \(H^{i}\subseteq L^{i}\) for each \(i\in\mathbb{N}\)._

_Proof._ \(H^{0}=H\subseteq L=L^{0}\), and by induction, if \(H^{i}\subseteq L^{i}\) then \(H^{i+1}=[H,\ H^{i}]\subseteq[L,\ L^{i}]=L^{i+1}\). \(\Box\)

**Theorem 4.5**.: _Let \((L,\ [\,\ ],\ \alpha)\) be a nilpotent Hom-Lie algebra of class \(k\)._
1. _Any Hom-Lie subalgebra is nilpotent of class_ \(\leq k\)_._
2. _Any quotient Hom-Lie algebra of_ \(L\) _is nilpotent of class_ \(\leq k\)_._

_Proof._
1. A Hom-Lie subalgebra \(H\) of a multiplicative Hom-Lie algebra is multiplicative. By the lemma above, we have \(H^{k}\subseteq L^{k}=\{0\}\). Thus \(H^{k}=\{0\}\).
2. Let \(I\) be a Hom-Lie ideal of \(L\). The Hom-Lie algebra \(L/I\) is multiplicative. Consider the natural morphism \(\pi:L\to L/I\). According to Theorem 4.4(i), \(\pi(L^{k})=(\pi(L))^{k}\), which implies \((L/I)^{k}=(\pi(L))^{k}=\pi(L^{k})=\pi(\{0\})=\{0\}\). \(\Box\)

Remark 1. Let \((L,\ [\,\ ],\ \alpha)\) be a multiplicative Hom-Lie algebra. If \(H\) is a nilpotent Hom-Lie ideal and \(L/H\) is a nilpotent Hom-Lie algebra, then \(L\) need not be a nilpotent Hom-Lie algebra.

**Example 4.3**.: _Let \(L\) be the space spanned by a basis \(\{e_{1},e_{2}\}\) over \(F\). Consider the multiplicative Hom-Lie algebra \((L,\ [\,\ ],\alpha)\) where \(\alpha\) is the zero map and \([\,\ ]\) is the skew-symmetric bilinear map such that \([e_{1},e_{1}]=[e_{2},e_{2}]=0\) and \([e_{1},e_{2}]=e_{1}\). Let \(H=\mathrm{Span}(\{e_{1}\})\). Then \(H\) is a nilpotent Hom-Lie ideal, because \(H^{1}=[H,\ H]=\{0\}\). Also, \(L/H\) is a nilpotent Hom-Lie algebra, because \([L/H,\ L/H]=\{0+H\}\)._
But \(L\) not a nilpotent Hom-Lie algebra, since \(L^{1}=[L,\ L]={\rm Span}(\{{\rm e}_{1}\})=H\), and \(L^{i}=[L,\ L^{i-1}]=[L,\ H]={\rm Span}(\{{\rm e}_{1}\})\neq\{0\}\) for all \(i\in\mathbb{N}\)._ **Theorem 4.6**.: _Let \((L_{1},\ [\,\ ]_{1},\ \alpha_{1})\) and \((L_{2},\ [\,\ ]_{2},\ \alpha_{2})\) be nilpotent Hom-Lie algebras of class \(k\) and \(m\), respectively. Then \((L_{1}\times L_{2},\ [\,\ ],\ \alpha)\) is a nilpotent Hom-Lie algebra of class \(M=Max\{m,\ k\}\)._ _Proof_. We use induction to show that \((L_{1}\times L_{2})^{i}=L_{1}^{i}\times L_{2}^{i}\). For \(i=0\), we have \((L_{1}\times L_{2})^{0}=L_{1}\times L_{2}=L_{1}^{0}\times L_{2}^{0}\). For \(i>0\) and because the induction assumption we have \((L_{1}\times L_{2})^{i+1}=[L_{1}\times L_{2},\ (L_{1}\times L_{2})^{i}]=[L_{1} \times L_{2},\ L_{1}^{i}\times L_{2}^{i}]=[L_{1},\ L_{1}^{i}]_{1}\times[L_{2}, \ L_{2}^{i}]_{2}=L_{1}^{i+1}\times L_{2}^{i+1}\). We may assume that \(m\leq k\). Since \(L_{1}\) and \(L_{2}\) are nilpotent Hom-Lie algebras of class \(k\) and \(m\), respectively, then \(L_{1}^{k}=\{0\}\) and \(L_{1}^{k-1}\neq\{0\}\) and \(L_{2}^{k}=\{0\}\). Now, \((L_{1}\times L_{2})^{k}=L_{1}^{k}\times L_{2}^{k}=\{0\}\times\{0\}=\{(0,\ 0)\}\) and \((L_{1}\times L_{2})^{k-1}=L_{1}^{k-1}\times L_{2}^{k-1}\neq\{(0,\ 0)\}\). Thus \(L_{1}\times L_{2}\) is a nilpotent Hom-Lie algebras of class \(k=Max\{m,\ k\}=M\)\(\Box\) **Theorem 4.7**.: _Every central series is a solvable series._ _Proof_. Let \((L,\ [\,\ ],\ \alpha)\) be a Hom-Lie algebra and \(L=L_{0}\supseteq L_{1}\supseteq L_{2}\supseteq\cdots L_{k}\) be a central series. Then for each \(i=0,1,\ldots,k-1\), \([L,\ L_{i}]\subseteq L_{i+1}\). Since \([L_{i},\ L_{i}]\subseteq[L,\ L_{i}]\subseteq L_{i+1}\) so \(L=L_{0}\supseteq L_{1}\supseteq L_{2}\supseteq\cdots L_{k}\) is a solvable series (Theorem 3.1). \(\Box\) **Corollary 4.2**.: _Every nilpotent Hom-Lie algebra is a solvable Hom-Lie algebra._ _Proof_. If \((L,\ [\,\ ],\ \alpha)\) is a nilpotent Hom-Lie algebra, then there exists a central series \(L=L_{0}\supseteq L_{1}\supseteq L_{2}\supseteq\cdots L_{k}\)(Theorem 4.3). From the theorem above, \(L=L_{0}\supseteq L_{1}\supseteq L_{2}\supseteq\cdots L_{k}\) is a solvable series. Thus \((L,\ [\,\ ],\ \alpha)\) is a solvable Hom-Lie algebra (Theorem 3.3). \(\Box\) The converse not true, as in the following Example. **Example 4.4**.: _Consider the Hom-Lie algebra \((\mathbb{R}[x],\ [\,\ ],\ \alpha)\) in Example 2.7. It is easy to show that \(H=\{p(x)\in\mathbb{R}[x]:deg(p)\leq 3\}\) is a Hom-Lie subalgebra of \(\mathbb{R}[x]\). For any \(p_{i}(x)=a_{i}x^{3}+b_{i}x^{2}+c_{i}x+d_{i}\in H\), \([p_{1},p_{2}]=(6a_{1}b_{2}-6a_{2}b_{1})x^{2}+(6a_{1}c_{2}-6a_{2}c_{1})x\) where \(a_{i},b_{i},c_{i},d_{i}\in\mathbb{R}\), and so \(H^{(1)}=[H,\ H]=\{Ax^{2}+Bx:A,B\in\mathbb{R}\}\) and \(H^{(2)}=[H^{(1)},\ H^{(1)}]=\{0\}\). Thus \(H\) is a solvable Hom-Lie algebra of class 2. But \(H\) is not a nilpotent Hom-Lie algebra, since \(H^{1}=[H,\ H]=\{Ax^{2}+Bx:A,B\in\mathbb{R}\}\), \(H^{2}=[H,\ H^{1}]=H^{1}\) and \(H^{i}=H^{1}\neq\{0\}\) for all \(i>1\). \(\blacksquare\)_ **Example 4.5**.: _Consider the Hom-Lie subalgebra \(I=\{p(x)\in\mathbb{R}[x]:deg(p)\leq 4\}\) of \((\mathbb{R}[x],\ [\,\ ],\ \alpha)\) in Example 2.7. 
For any \(p_{i}(x)=a_{i}x^{4}+b_{i}x^{3}+c_{i}x^{2}+d_{i}x+e_{i}\in H\), \([p_{1},p_{2}]=(12a_{1}b_{2}-12a_{2}b_{1})x^{4}+(16a_{1}c_{2}-16a_{2}c_{1})x^{3} +(12a_{1}d_{2}-12a_{2}d_{1}+6b_{1}c_{2}-6b_{2}c_{1})x^{2}+(6b_{1}d_{2}-6b_{2}d_ {1})x\) where \(a_{i},b_{i},c_{i},d_{i},e_{i}\in\mathbb{R}\), and so \(H^{(1)}=[H,\ H]=\{Ax^{4}+Bx^{3}+Cx^{2}+Dx:A,B,C,D\in\mathbb{R}\}\), \(H^{(2)}=[H^{(1)},\ H^{(1)}]=H^{(1)}\) and \(H^{(i)}=H^{(1)}\neq\{0\}\) for all \(i>1\). Thus \(H\) is not a solvable Hom-Lie algebra. Also \(H\) is not a nilpotent Hom-Lie algebra by corollary4.2. Note that \((\mathbb{R}[x],\ [\,\ ],\ \alpha)\) is not a solvable and not a nilpotent Hom-Lie algebra because there exists a non-solvable and non-nilpotent Hom-Lie subalgebra of \(\mathbb{R}[x]\) (Theorem 3.5(i) and Theorem 4.5(i))._ ## 5. Question for Further Research **Question 5.1**.: _What are the precise conditions for a Hom-Lie algebra to be solvable or nilpotent? Can these conditions be expressed in terms of the underlying Lie algebra and the Hom morphism?_ **Question 5.2**.: _What are some examples of solvable Hom-Lie algebras, and what properties do they have? Are there any interesting relationships between these examples and other areas of mathematics, such as Lie theory or algebraic geometry?_ **Question 5.3**.: _What are some examples of nilpotent Hom-Lie algebras, and how do they compare to nilpotent Lie algebras? Can the classification of nilpotent Lie algebras be extended to the Hom-Lie algebra setting?_ **Question 5.4**.: _How do solvable and nilpotent Hom-Lie algebras arise in physics, particularly in the context of supersymmetry and other quantum field theories? What are the implications of these structures for our understanding of fundamental physics?_ **Question 5.5**.: _What is the relationship between Solvable and Nilpotent Hom-Lie algebras and other algebraic structures, such as associative algebras or Lie superalgebras? Can techniques from these other areas be used to study solvable and nilpotent Hom-Lie algebras more effectively?_ **Question 5.6**.: _How can the representation theory of Hom-Lie algebras be studied, particularly in the case of solvable and nilpotent algebras? What are some interesting examples of Hom-Lie algebra representations, and what do they tell us about the structure of these algebras?_ **Question 5.7**.: _Study of Hom-Lie superalgebras: Hom-Lie superalgebras are a natural generalization of Hom-Lie algebras that incorporate a \(\mathbb{Z}_{2}\)-grading. Investigating solvable and nilpotent Hom-Lie superalgebras can lead to interesting results in the study of supersymmetry and related topics in physics._ **Question 5.8**.: _Generalization of results to other categories: Hom-Lie algebras are defined in the category of vector spaces, but similar structures can be defined in other categories, such as modules or abelian groups. Investigating solvable and nilpotent Hom-Lie algebras in these categories can provide insight into the interplay between different areas of algebra._ **Question 5.9**.: _Cohomology of Hom-Lie algebras: Cohomology is a powerful tool for understanding the structure of Lie algebras, and similar techniques can be applied to Hom-Lie algebras. Investigating the cohomology of solvable and nilpotent Hom-Lie algebras can provide insights into their structure and classification._ **Question 5.10**.: _Quantum Hom-Lie algebras: Quantum Hom-Lie algebras are a generalization of Hom-Lie algebras that arise in the context of quantum groups and deformation theory. 
Investigating solvable and nilpotent quantum Hom-Lie algebras can lead to interesting results in these areas._

**Question 5.11**.: _Applications to cryptography and coding theory: Hom-Lie algebras have recently been applied to cryptography and coding theory. Investigating solvable and nilpotent Hom-Lie algebras in this context can lead to new methods for error-correction and secure communication._

These questions are just a starting point, and there are many other avenues for research in this area. By exploring these and other questions, researchers can gain a deeper understanding of the properties and applications of solvable and nilpotent Hom-Lie algebras, and advance our knowledge of this important area of algebraic research.

## 6. Conclusion

In this paper we studied solvable and nilpotent multiplicative Hom-Lie algebras. We characterized solvability and nilpotency in terms of solvable and central descending series, and showed how these properties behave under morphisms, Hom-Lie subalgebras, quotients, and direct products; solvability is also preserved by extensions (Theorem 3.6), whereas nilpotency need not be (Remark 1). We further showed that every nilpotent Hom-Lie algebra is solvable, while the converse fails, and we illustrated the theory with explicit examples of Hom-Lie algebras that are not Lie algebras. These results provide a useful framework for studying Hom-Lie algebras, which have important applications in various areas of mathematics and physics. Further research could develop this framework along the directions outlined in the questions above and explore its potential applications in other areas of mathematics and physics.

**Declaration of Interest** There is no competing interest to declare.

**Funding Information** This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
2305.06174
Analysis of Climate Campaigns on Social Media using Bayesian Model Averaging
Climate change is the defining issue of our time, and we are at a defining moment. Various interest groups, social movement organizations, and individuals engage in collective action on this issue on social media. In addition, issue advocacy campaigns on social media often arise in response to ongoing societal concerns, especially those faced by energy industries. Our goal in this paper is to analyze how those industries, their advocacy group, and climate advocacy group use social media to influence the narrative on climate change. In this work, we propose a minimally supervised model soup [57] approach combined with messaging themes to identify the stances of climate ads on Facebook. Finally, we release our stance dataset, model, and set of themes related to climate campaigns for future work on opinion mining and the automatic detection of climate change stances.
Tunazzina Islam, Ruqi Zhang, Dan Goldwasser
2023-05-06T16:43:29Z
http://arxiv.org/abs/2305.06174v2
# Analysis of Climate Campaigns on Social Media

###### Abstract

Climate change is the defining issue of our time, and we are at a defining moment. Various interest groups, social movement organizations, and individuals engage in collective action on this issue on social media. In addition, issue advocacy campaigns on social media often arise in response to ongoing societal concerns, especially those faced by energy industries. Our goal in this paper is to analyze how those industries, their advocacy group, and climate advocacy group use social media to influence the narrative on climate change. In this work, we propose a minimally supervised model soup (Sundar et al., 2017) approach combined with messaging themes to identify the stances of climate ads on Facebook. Finally, we release our stance dataset, model, and set of themes related to climate campaigns for future work on opinion mining and the automatic detection of climate change stances.

social media, climate campaigns, facebook ads, bayesian model averaging, minimal supervision

Footnote †: Department of Computer Science, Purdue University, West Lafayette, IN 47907, USA
The _stance_ of the top ad (inside the brown box in Fig. 1) is (pro-energy), as the sponsor is against 'unnecessary regulations on the oil and gas industry', and the ad _theme_ is (economy_pro), mentioning that the 'oil and gas industry supports local jobs'. The _stance_ of the bottom ad (inside the green box in Fig. 1) is (clean-energy), as the sponsor supports a 'transition away from fossil fuels', and the reason for this is the 'threatening effect of fossil fuels on our health'. So the ad _theme_ is (HumHealth).

In this work, we aim to understand how climate advocates and fossil fuel corporations are using advertising to control the narrative on climate change and climate policy. Our goal is twofold: first, to characterize the themes of the ads, and second, to build on this characterization to identify the stances of the ads, i.e., **pro-energy**, **clean-energy**, **neutral**. Our theme assignment process is motivated by a thematic analysis approach (Bradley, 2017). We begin by defining a seed set of relevant arguments based on recent studies (Keenec and Feliu, 2018; Keenec and Feliu, 2018), where each pro-energy theme is defined by multiple sentences. Since the initial set of themes contains only pro-energy arguments, we add clean-energy themes and phrases. We fine-tune a pre-trained textual inference model using a contrastive learning approach to identify paraphrases in a large collection of climate related ads. In recent years, research has shown that models pre-trained on large and diverse datasets learn representations that transfer well to a variety of tasks (Keenec and Feliu, 2018; Keenec and Feliu, 2018; Keenec and Feliu, 2018; Keenec and Feliu, 2018). The fine-tuning process has two steps: (1) fine-tune models with a variety of hyperparameter configurations, and (2) select the model which achieves the highest accuracy on the held-out validation set and discard the remaining models. Wortsman et al. (2017) recently showed that selecting a single model and discarding the rest has several downsides, and they proposed _model soup_, which averages the weights of independently fine-tuned models. While Wortsman et al. (2017) showed model soup performance on four text classification datasets from the GLUE benchmark (Wang et al., 2017), we develop a minimally supervised model soup approach leveraging messaging themes to detect stance for analyzing climate campaigns on Facebook. We focus on the following research questions (RQ) to analyze climate campaigns on social media:

* **RQ1.** Can a model trained with minimal supervision using theme information be leveraged to predict the presence of stances in Facebook ads related to climate change?
* **RQ2.** What are the intersecting themes of the messaging?
* **RQ3.** What demographics and geographic areas are targeted by the advertisers?
* **RQ4.** Do the messages differ based on entity type?

Our contributions are summarized as follows:

1. We formulate a novel problem of exploiting minimal supervision and Bayesian model averaging to analyze the landscape of climate advertising on social media.
2. We identify the themes of the climate campaigns using an unsupervised approach.
3. We propose a minimally supervised model soup approach to identify stance combining themes of the content of climate campaigns.
We show that our model outperforms the baselines. 4. We conduct quantitative and qualitative analysis on real-world dataset to demonstrate the effectiveness of our proposed model. The remaining sections of the paper are structured as follows: we commence with a discussion on related work, followed by the presentation of dataset details. Subsequently, we introduce the problem formulation, after which we outline the methodology employed. Later, we provide comprehensive information on the experimental settings, including the results, baselines, and ablation study. Finally, we address the research questions **RQ2**, **RQ3**, and **RQ4** through a detailed analysis. Our data, code, and model are publicly available at [https://github.com/tunazislam/BMA-FB-ad-Climate](https://github.com/tunazislam/BMA-FB-ad-Climate) ## 2. Related Work Recent studies have shown climate change activism in social media and news media (Bradley, 2017; Keenec and Feliu, 2018; Keenec and Feliu, 2018). Sponsored content on social media - especially Facebook, is the main channel to reach the targeted audience on a specific event such as US Presidential election (Keenec and Feliu, 2018), or specific issues, i.e., COVID (Keenec and Feliu, 2018; Keenec and Feliu, 2018; Keenec and Feliu, 2018), immigration(Keenec and Feliu, 2018; Keenec and Feliu, 2018). Several studies have analyzed the discourse around climate change. Luo et al. (2018) proposed an opinion framing task on the global warming debate on media. Koenecke and Feliu-Faba (2018) studied whether climate change related sentiment in tweets changed in response to five natural disasters occurring in the US in 2018. Dey et al. (2018) explored stance with respect to certain topics, including climate change in a tweet-based setting. To understand the narratives of climate change skepticism, Bhatia et al. (2018) studied the automatic classification of neutralization techniques. Diggelmann et al. (2018) introduced a veracity prediction task in a fact-checking setting on climate claims. Our work differs from these in that we use a **probabilistic approach** to detect stance incorporating **theme information** of climate related ads on social media. Our work falls in the broad scope of minimal supervision (Bradley, 2017; Keenec and Feliu, 2018; Keenec and Feliu, 2018; Keenec and Feliu, 2018; Keenec and Feliu, 2018), contrastive learning (Keenec and Feliu, 2018; Keenec and Feliu, 2018; Keenec and Feliu, 2018) and Bayesian model averaging (Keenec and Feliu, 2018; Keenec and Feliu, 2018) where averaging the weights of multiple models fine-tuned with different hyperparameter configurations improves accuracy and robustness (Keenec and Feliu, 2018). ## 3. Data We collect 88, 022 climate related English ads focusing on the United States from January 2021 - January 2022 using Facebook Ad Library API4 with the keywords 'climate change', 'energy', 'fracking', 'coal'. To create the list of keywords for collecting ads about climate and \begin{table} \begin{tabular}{|p{199.2pt}|} \hline climate change, climate, fossil fuel, fracking, energy, oil, coal, mining, gas, carbon, power, footprint, solar, drilling, tri-city, petroleum, renewable, global warming, emission, ecosystem, environment, greenhouse, ozone, radiation, bioenergy, biomass, green energy, methane, pollution, forest, planet, earth, ocean, nuclear, ultraviolet, hydropower, hydrogen, hydroelectricity, geothermal, sustainable, clean energy. \\ \hline \end{tabular} \end{table} Table 1. List of the keywords for data collection. 
oil & gas industries, we read multiple articles about climate policy, environmental justice, climate change mentioning green/clean energy, transition from fossil fuel to renewable energy, coal dependent US states, protection of fossil-fuel workers and communities, and other climate debates, and made a list of repeating statements. Then, we consult two researchers in Computational Social Science and construct a list of relevant keywords. The full list of keywords is in Table 1. Our collected ads are written in English. For each ad, the API provides the ad ID, title, ad description, ad body, funding entity, spend, impressions, distribution over impressions broken down by gender (male, female, unknown), age (7 groups), and location down to states in the USA. So far, we have 408 unique funding entities whose stances are known based on their affiliation from their websites and Facebook pages. These funding entities are the source of supervision in our model. As we don't know the stance of the ads, we assign the same stance for all ads sponsored by the same funding entity. This way, we have \(25,232\) ads whose stances are known. ## 4. Problem Formulation We formulate our stance prediction problem as a minimally supervised model soup approach. We know the stance of the funding entity, but we don't know the stance of the ads. We assign the same stance for all ads sponsored by the same funding entity. We want to predict the stance of the ad using the model soup approach in the following way: \[\text{Point estimation:}\ P(y_{s}|X_{a},\theta,y_{t}) \tag{1}\] Bayesian posterior: \[P(\theta|y_{s},X_{a},y_{t})\propto P(\theta)P(y_{s}|X_{a},\theta,y_{t}) \tag{2}\] where, \(X_{a}\) is the ad, \(y_{s}\) is the predicted stance, \(y_{t}\) is the assigned themes, \(\theta\) is the model parameter. For the point estimation in Equation 1, we fine-tuned the pre-trained BERT model (He et al., 2017) by concatenating theme information. For Bayesian model averaging (Equation 2), we implement both the uniform and greedy soup approaches provided by Wortsman et al. (Wortsman et al., 2017) including messaging theme, which can be regarded as cheap Bayesian posterior approximations. We get the theme \(y_{t}\), using the contrastive learning approach following Reimers and Gurevych (Reimers and Gurevych, 2018). ## 5. Methodology In this section, we describe how to obtain sentence embedding using contrastive learning, generate themes and phrases, assign themes for the ad content, and implement model soup in our problem. ### Sentence Embeddings with Contrastive Learning We use \(88k\) unlabeled ads for finetuning Sentence BERT (SBERT) (Reimers and Gurevych, 2018). Our training approach uses a siamese-BERT architecture during fine-tuning (Fig. 2). During each step, we process a sentence \(S\) (anchor) into BERT, followed by sentence \(T\) (positive example). In our case, the anchor is the ad text, and a positive example is the ad description or ad summary. Some ads do not have ad descriptions. In that case, we generate an ad summary using BART summarizer (Reimers and Gurevych, 2018). BERT generates token embeddings. Finally, those token embeddings are converted into averaged sentence embeddings using mean-pooling. Using the siamese approach, we produce two of these per step \(-\) one for the anchor \(A\) and another for the positive called \(P\). We use multiple negatives ranking loss which is a great loss function if we only have positive pairs, for example, only pairs of similar texts like pairs of paraphrases. 
In our case, positive pairs are ad text and description/summary. ### Themes and Phrases Generation To analyze climate campaigns, we model the climate related stance expressed in each ad (i.e., pro-energy, clean-energy) and the underlying reason behind such stance. For example, the top ad (brown box) of Fig. 1 expresses a pro-energy stance and mentions their support for local jobs as the reason to take this stance. Three main challenges are involved in this analysis: 1) constructing the space of possible themes, 2) mapping ads to the relevant themes, and 3) predicting the stance leveraging the themes. We combine computational and qualitative techniques to uncover the most frequent themes cited for pro-energy and clean-energy stances. We build on previous studies that characterized the arguments supporting the oil and gas industries (Reimers and Gurevych, 2018). In this work, researchers develop four broad categories of pro-energy themes by looking at audience responses to ads from fossil fuel companies. As energy is an economic, social, security, and environmental concern, we go through relevant research conducted by United Nations, influencemap.org and perweresearch.org to construct a list of potential themes and phrases for each theme. We add new relevant pro-energy themes and corresponding phrases that were not covered by previous work, such as "_Green New Deal would take America back to the dark ages_" \begin{table} \begin{tabular}{c|c} \hline **Pro-** & Economy, pro, Identity, Climate solution, Pragmatism, Patriticism, Against climate policy, Give away. \\ \hline **Clean-** & Economy,\_clean, Future generation, Environmental, Human health, Animals, Support climate policy, Alternative energy, Political affiliation. \\ \hline \end{tabular} \end{table} Table 2. Resulting themes. Figure 2. Siamese-BERT network for contrastive learning to generate sentence embeddings. \begin{table} \begin{tabular}{|p{34.1pt}|p{34.1pt}|p{34.1pt}|} \hline **Themes** & **Phrases** & **’Old and gas will create more jobs’, "Without old and gas, there is no job’s;" \\ & "Packing supports thousands of jobs", "Without tracking, we will be jobs";" \\ & "Old and gas help local business", "Without and gas, our economy would be at risk", \\ & "Old and gas industries pay high wage", "Jobs would be lower paid without oil and gas", \\ **Economy\_Pro** & "Local business would suffer without the old and gas industry", "Don't take jobs away from the coal mineral", \\ & "Coal is lowering economic powers", "Protect our job", "Ranning fossil fuels will lead to job losses", "Fracking jobs will bring new opportunities to rural area", "Local communities would suffer due to the loss for fewer", "Natural gas ban would kill lead jobs", "Old and gas industries help the community through through platformoffers", "Terney industry gives back to communities", \\ & "Without the old and gas industry," there would be less pathindyno". \\ \hline **Identity** & "Shifting away from fossil fuels is the loss of our culture", "Destruction of fossil fuel industry feels like the destruction of our identity", \\ & "Poulsil fed workers struggle with a loss of identity due to factory shut down", "We should protect our community identity". \\ & "Our identities at stake", "Support the mineral; "Coal is not just a job, "It's away of Life", "Remember the pride that coal mining grew", "We are fighting for our identity", \\ & "Support our families and communities through supporting oil and gas industries"." 
\\ \hline **ClimateSolution** & "We support reducing greenhouse gas emissions", "We develop technologies to reduce carbon emission", "We are continuing to net-arcm emissions", "We are continuing energy mix away from fossil fuels", "We are moving towards renewable", "Natural gas is the future of clean energy", "Is well as a burden energy source", "Natural gas is the perfect partner to renewables", "Natural gas is part of the solution climate change", "Thanks to natural gas emissions have reduced". \\ & "The end and gas industry has to be a partner on a problem," "Rememberative annual gas will help us get to net zero carbon emissions as fast as we can". \\ \hline **Pragmatism** & "Old and gas are affordable energy sources", "Without and gas, energy would be expensive", "Old and gas are reliable energy sources", "Old and gas will keep the lights on no matter what", "Is which falls under a new theme called '**Against Climate Policy**'. As the initial set of themes contains mostly pro-energy arguments, we add reasons for supporting climate actions which are clean-energy themes, e.g., "_Climate change is a grave threat to children's survival_" \(\Rightarrow\)**Future Generation**. Then, we consult with two researchers in Computational Social Science and finalize the relevant themes with corresponding phrases. The final set of themes can be observed in Table 2. The full list of phrases for each theme can be observed in Table 3. ### Assign Themes Our main goal is to ground these themes in a set of approximately \(25k\) labeled (stance) ads. To map ads to themes, we use the cosine similarity between their fine-tuned sentence BERT embeddings (details of fine-tuning provided in subsection 5.1) of the ad text and the phrases of each theme. To check the quality of the theme label, we annotated around 300 ads with corresponding themes and noticed an accuracy of 38.4% and macro-avg F1 score of 40.2%, which is better than the random (6.6%). ### Bayesian Model Averaging In this work, we develop a minimally supervised model soup approach by incorporating messaging themes to identify the stances of climate ads on Facebook. We used two approaches for model soup. The first one is uniform soup (Sutton et al., 2017). We consider a neural network \(f(x,\theta)\) with input data \(x\) and parameters \(\theta\). For uniform soup, we take the average of the fine-tuned model parameters (\(f(x,\frac{1}{k}\sum_{i=1}^{k}\theta_{i})\)) where \(\theta_{i}\) can be considered as samples from the Bayesian posterior and the average can be viewed as a cheap approximation to Bayesian model average. The second one is the greedy soup approach (Sutton et al., 2017). For the greedy soup, we first sort the models in decreasing order of validation set accuracy. The soup is constructed by sequentially adding each model as a potential ingredient in the soup and only keeping the model in the soup if performance on the validation set improves. ## 6. Experimental Details This section presents the experimental details of the stance prediction task on climate change-related ads. We randomly split our data based on the funding entity so that the same ads do not appear in the other splits. At first, we randomly split 20% of the funding entities and keep them as a testing set. Then we randomly split the rest of the data and keep 20% of that as a validation set and the rest as the training set. Details number of funding entities and ads for each split are shown in Table 4. 
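Before turning to the training details, the following sketch makes the two soup procedures from Section 5.4 concrete. It assumes the fine-tuned checkpoints share the same architecture and initialization; `evaluate_on_validation` is a hypothetical helper (not part of our released code) that loads a state dict into the classifier and returns held-out validation accuracy:

```python
import copy
import torch

def uniform_soup(state_dicts):
    """Uniform soup: average the weights of all fine-tuned checkpoints."""
    soup = copy.deepcopy(state_dicts[0])
    for name, tensor in soup.items():
        if tensor.is_floating_point():  # skip integer buffers such as position ids
            soup[name] = torch.stack([sd[name] for sd in state_dicts]).mean(dim=0)
    return soup

def greedy_soup(sorted_state_dicts, evaluate_on_validation):
    """Greedy soup: `sorted_state_dicts` must be sorted by decreasing validation
    accuracy; `evaluate_on_validation` is a hypothetical evaluation helper."""
    ingredients = [sorted_state_dicts[0]]
    best_acc = evaluate_on_validation(uniform_soup(ingredients))
    for candidate in sorted_state_dicts[1:]:
        trial = uniform_soup(ingredients + [candidate])
        acc = evaluate_on_validation(trial)
        if acc >= best_acc:  # keep the checkpoint only if the soup improves on validation
            ingredients.append(candidate)
            best_acc = acc
    return uniform_soup(ingredients)
```

Either averaged state dict can then be loaded back into the classifier with `model.load_state_dict(...)`, so forming a soup adds no training cost and no extra cost at inference time.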
We fine-tune the pre-trained BERT-base-uncased model (He et al., 2017) and run for 10 epochs for each hyperparameter setting, i.e., learning rate and weight decay. We set the maximum text sequence length to 110, batch size 32, and use Adam optimizer (Kingmaa et al., 2014). We concatenate the assigned theme with ad text so that our model can leverage the theme information. We use pre-trained weights from the Huggingface Transformers library (Sutton et al., 2017). Evaluation is conducted once at the end of the training, without early stopping. We use a single GPU GeForce GTX 1080 Ti GPU, with 6 Intel Core i5-8400 CPU @ 2.80 GHz processors to run each model, and it takes around 15 minutes to run each model. But averaging several of these models to form a model soup requires no additional training and adds no cost at inference time. ### Results We provide experimental results in Table 5. For the evaluation metrics, we use accuracy and macro-average F1 score. At first, we compare our approach with simple Logistic Regression (LR) (Krizhevsky et al., 2014) trained on term frequency-inverse document frequency (tf-idf) features baseline (Table 5). Then, to make sure that the model soup being a better hypothesis holds irrespective of the underlying language model (LM) architecture, we test our work on larger pre-trained LM, i.e., RoBERTa (Rao et al., 2017), T5 (Rao et al., 2018) besides BERT. Finally, we compare the performance accuracy and macro-average F1 score with the standalone models (best individual model) with respect to the model soup (Table 5). From Table 5, we notice that the uniform model soup \begin{table} \begin{tabular}{l l l l l} \hline \hline **Model** & **Accuracy** & **Macro-avg F1** & **Learning rate** & **Weight decay** \\ \hline BERT\_Hyper (best) & 0.897 & 0.833 & 2.00E-05 & 0.01 \\ BERT\_Hyper (best) & 0.899 & 0.866 & 1.00E-05 & 0.01 \\ BERT\_Hyper (best) & 0.899 & 0.877 & 1.00E-04 & 0.001 \\ BERT\_Hyper (best) & 0.895 & 0.774 & 1.00E-04 & 0.01 \\ BERT\_Hyper (best) & 0.905 & 0.856 & 1.00E-05 & 0.001 \\ BERT\_Hyper (best) & 0.890 & 0.813 & 3.00E-05 & 0.001 \\ BERT\_Hyper (best) & 0.895 & 0.825 & 3.00E-05 & 0.01 \\ BERT\_Hyper (best) & 0.892 & 0.833 & 2.00E-05 & 0.1 \\ BERT\_Hyper (best) & 0.885 & 0.813 & 1.00E-04 & 0.0001 \\ BERT\_Hyper (best) & 0.906 & 0.841 & 1.00E-05 & 0.1 \\ _Uniform Model aug (last)_ & 0.943 & 0.880 & - & - \\ _Only Model aug (last)_ & 0.832 & 0.827 & - & - \\ Point\_set\_set\_Hyper1 (best + thm) & 0.821 & 0.854 & 2.00E-05 & 0.01 \\ Point\_set\_Hyper2 (best + thm) & 0.832 & 0.835 & 1.00E-05 & 0.01 \\ Point\_set\_Hyper3 (best + thm) & 0.916 & 0.895 & 1.00E-04 & 0.001 \\ Point\_set\_Hyper4 (best + thm) & 0.874 & 0.845 & 1.00E-04 & 0.01 \\ Point\_set\_Hyper5 (best + thm) & 0.897 & 0.826 & 1.00E-05 & 0.001 \\ Point\_set\_Hyper5 (best + thm) & 0.902 & 0.825 & 3.00E-05 & 0.001 \\ Point\_set\_Hyper6 (best + thm) & 0.948 & 0.830 & 3.00E-05 & 0.01 \\ Point\_set\_Hyper7 (best + thm) & 0.884 & 0.829 & 2.00E-05 & 0.1 \\ Point\_set\_Hyper7 (best + thm) & 0.882 & 0.811 & 1.00E-04 & 0.001 \\ Point\_set\_hyper1 (best + thm) & 0.879 & 0.822 & 1.00E-05 & 0.1 \\ _Uniform Model aug (last)_ & 0.944 & **0.888** & - & - \\ _Control Model aug (last)_ & 0.945 & 0.834 & - & - \\ \hline \hline \end{tabular} \end{table} Table 6. Ablation study. FBERT: Fine-tuned pre-trained BERT model, Point_set: Point estimation, thm: Theme, Hyper: Hyperparameter. 
\begin{table} \begin{tabular}{l c c} \hline \hline **Data split** & **Number of Funding entities** & **Number of Ads** \\ \hline Training & 261 & 17780 \\ Validation & 65 & 2074 \\ Testing & 82 & 5378 \\ \hline \hline \end{tabular} \end{table} Table 4. Data details. \begin{table} \begin{tabular}{l l c c} \hline \hline **Model** & **Method** & **Accuracy** & **Macro-avg F1** \\ \hline LR\_tf-idf & Best individual model & 0.810 & 0.506 \\ \hline RoBERTa-base & Best individual model & 0.943 & 0.879 \\ \hline T5-small & Best individual model & 0.874 & 0.8743 \\ \hline BERT-base & Best individual model & 0.921 & 0.854 \\ & _Uniform Model soup_ & 0.944 & **0.888** \\ & _Greedy Model soup_ & 0.845 & 0.884 \\ \hline \hline \end{tabular} \end{table} Table 5. Performance comparison on test data. Comparing model soup with simple Logistic Regression with tf-idf feature (LR_tf-idf) as well as standalone BERT, RoBERTa, and T5 baselines. Figure 4. Wordcloud for three messaging themes based on the popularity of ad impressions, expenditure, and the number of sponsored ads for both pro-energy and clean-energy ads. Figure 3. Distribution of ad themes by Number of Ads, Impressions, and Spend. using ad text + theme (88.8% macro-avg F1 score) outperforms the greedy model soup for text + theme and the best individual model baselines (Answer to **RQ1**). ### Ablation Study For the ablation study, we run the experiments using only ad text (we **do not** provide any theme information). We notice that the uniform model soup (text + theme) still gives better performance than the uniform model soup (text), greedy model soup (text), and the best single text only models (Table 6). ## 7. Analyses In this section, we present analyses that address our three research questions (**RQ2**, **RQ3**, and **RQ4**). In subsection 7.1, we find that various advertisers prioritize distinct themes to promote their narratives that endorse particular stances. In subsection 7.2, we find that advertisers aim their messages at particular demographics and geographic locations to spread their viewpoints. Subsection 7.3 shows that how messaging differs based on the entity type. ### Narrative Analysis We consider only ads with correct stance prediction and corresponding themes for narrative analysis. To answer **RQ2**, we analyze the messaging strategies used by the advertisers (Fig. 3). By impressions and expenditures, the most popular _pro-energy_ messaging theme is **Economy_pro'**, accounting for approximately \(27\%\) of total impressions and \(28.7\%\) of total expenditure (Fig. 3a). Under this theme, narratives promote how _'natural gas and oil industry will drive economic recovery'_; _'GDP would decline by a cumulative \(700\) billion through \(2030\) and \(1\) million industry jobs would be lost by \(2022\) under natural gas and oil leasing and development ban'_ (Fig. 4a). Based on impression, the most popular _clean-energy_ messaging category is **'SupportClimatePolicy'** (Fig. 3b) (approximately \(35\%\)), which features narratives supporting Build Back Better Act5 to _fight climate change, create clean energy jobs, equitable clean energy future, take bold climate action_ (Fig. 4c). Based on spend, the most popular (\(42\%\)) _clean-energy_ messaging theme is **'Environmental'** (Fig. 3b). 
This theme focuses on narratives about _'how dirty fossil fuel industries would harm the indigenous peoples and wildlife'_; _'why climate scientists agree that climate change causes more extreme droughts, bigger fires and deadline heat'_; _'effects of carbon pollution on climate crisis'_ etc (Fig. 4b). Footnote 5: [https://www.whitehouse.gov/build-back-better/](https://www.whitehouse.gov/build-back-better/) ### Demographic and Geographics Distribution by Impressions As Facebook enables its customers to target ads using demographics and geographic information, we further analyze the distribution of the messaging categories to answer **RQ3**. At first, we perform a chi-square test (Krishnan et al., 2018) of contingency to calculate the statistical significance of an association between demographic group and their stances. The null hypothesis \(H_{0}\) assumes that there is no association between the variables, while the alternative hypothesis \(H_{a}\) claims that some association does exist. The chi-square test statistic is computed as follows: \[\chi^{2}=\sum\frac{\left(observed-expected\right)^{2}}{expected}\] The distribution of the statistic \(\chi^{2}\) is denoted as \(\chi^{2}_{(df)}\), where \(df\) is the number of degrees of freedom. \(df=(r-1)(c-1)\), where \(r\) represents the number of rows and \(c\) represents the number of columns in the contingency table. The p-value for the chi-square test is the probability of observing a value at least as extreme as the test statistic for a chi-square distribution with \((r-1)(c-1)\) degrees of freedom. To perform a chi-square test, we take gender distribution over stance and age distribution over stance separately to build contingency tables correspondingly. The null hypothesis, \(H_{0}\): whether the demographic group and their stances are independent, i.e., _no relationship_. The alternative hypothesis \(H_{a}\): whether the demographic group and their stances are dependent, i.e., \(\exists\)_a relationship_. We choose the value of significance level, \(\alpha=0.05\). The p-value for both cases is \(<0.05\), which is statistically significant. We reject the null hypothesis \(H_{0}\), indicating some association between the audience's demographics and their stances on climate change. Fig. 5a shows that _more males than females_ view the _pro-energy_ ads, and _more females than males_ watch _clean-energy_ ads. However, _pro-energy_ ads are mostly viewed by the _older population_ (\(65+\)) (Fig. 5b). On the other hand, _young people_ from the age range of \(25-34\) watch _clean-energy_ ads (Fig. 5b). In Fig. 6, we show the distribution of impressions over US states for both stances. To plot the distribution, we use the Choropleth map6 in Python. _Pro-energy_ ads receive the most views from Texas which is the energy capital of the world7 (Fig. 6a). Fig. 6b shows that _clean-energy_ ads are mostly viewed from California because recently, CA has become one of the loudest voices in the fight against climate change8. Footnote 6: [https://pilotly.com/python/choropleth-maps/](https://pilotly.com/python/choropleth-maps/) Footnote 7: www.eia.gov/ Footnote 8: www.pewtrusts.org \begin{table} \begin{tabular}{|c|c|} \hline Type & Entity \\ \hline **Corporation** & EXON MOBIL CORPGORATION \\ **Corporation** & Shell \\ **Corporation** & BP CORPORATION NORTH AMERICA INC. 
\\ **Corporation** & Twin Metals Minnesota \\ **Corporation** & Wink to Webster Pipeline LLC \\ **Industry Association** & AMERICAN PETROLEUM INSTITUTE \\ **Industry Association** & New York Pro propane Gas Association \\ **Industry Association** & Texas Oil \& Gas Association \\ **Industry Association** & New Mexico Oil and Gas Association \\ **Industry Association** & National Propane Gas Association \\ **Advocary Group** & Coloradans for Responsible Energy Development \\ **Advocary Group** & Grove Louisiana Coalition \\ **Advocary Group** & Voices for Cooperative Power \\ **Advocary Group** & Consumer Energy Alliance \\ **Advocary Group** & Maine Affordable Energy \\ \hline \end{tabular} \end{table} Table 7. List of entities from pro-energy ads. ### Distribution of Messaging by Entity Type Fig. 7 shows the top 5 funding entities based on expenditure in pro-energy and clean-energy ads. We notice that **Exxcon Mobil Corporation**, which is one of the world's largest publicly traded international oil and gas companies 9, spends the most on sponsoring pro-energy ads on Facebook. Clean-energy ads are mostly sponsored by **The Climate Pledge**, which is powered by 378 companies in 34 countries around the globe10. Footnote 9: [https://corporate.exxcommobil.com/](https://corporate.exxcommobil.com/) Footnote 10: [https://www.theclimatepledge.com/](https://www.theclimatepledge.com/) To understand how fossil fuel industries and their support groups influence public opinion, we categorize pro-energy funding entities into three types, i.e., Corporations, Industry Associations, and Advocacy Groups. Finally, we select the top 5 pro-energy funding entities based on their expenditure for each category. Table 7 shows the list of pro-energy entities included in our analysis. The highest spending on '**Economy_pro**' narratives comes from all three entity types (Fig. 8). Corporation entities spend on '**Patriotism**' narratives as their second target. Furthermore, advocacy groups focus on '**Pragmatism**' narratives as their second target. Moreover, industry associations spend almost equally on '**ClimateSolution**' and '**AgainstClimatePolicy**' narratives. Analyzing the messaging themes for different funding entities indicates different groups are fulfilling different messaging roles (Answer to **RQ4**). ## 8. Conclusion We propose a minimally supervised model soup approach leveraging messaging themes to identify stances of climate related ads on social media. To the best of our knowledge, our work is the first work that uses a probabilistic machine learning approach to analyze climate campaigns. We hope our approach of stance detection and theme analysis will help policymakers to navigate the complex world of energy. Figure 5. Distribution of impressions over demographic distribution both for pro-energy and clean-energy ads. (a) More males than females watch the pro-energy ads. On the other hand, more females than males view clean-energy ads. (b) The older population (\(65\)+) watches the pro-energy ads. In contrast, the younger population (\(25-34\)) watches clean-energy ads. Figure 6. Distribution of impressions over geographic. Pro-energy ads are mostly viewed from Texas (a), whereas clean-energy ads are mostly viewed from California (b). Figure 8. Pro-energy ad themes by funding entity type. Figure 7. Top 5 funding entities based on expenditure. Orange plot represents pro-energy. Green plot represents clean-energy. ## 9. 
Limitations In this work, we predict the stances of ads using the theme information. We can further explore other potential tasks, such as moral foundation analysis (Zhu et al., 2020; Zhang et al., 2021), which will help model the dependencies between the different levels of analysis. Note that our fine-tuned SBERT-based theme assignment model is an unsupervised learning approach; an alternative approach could be zero-shot and/or few-shot classification models (Kang et al., 2021). We leave this exploration for future work. Moreover, our analysis might carry an unknown bias, as it is based on English-language ads on Facebook focusing on the United States only. Another limitation is transparency: some particular aspects of the advertising campaigns are not available to the public through the Facebook Ads Library API, thus limiting our findings. ## 10. Ethics Statement The data collected in this work was made publicly available by the Facebook Ads API. The data does not contain any personally identifying information and reports engagement patterns at an aggregate level. The authors' personal views are not represented in any qualitative result we report, as it is solely an outcome derived from a machine learning model. ## Acknowledgement We are thankful to the anonymous reviewers for their insightful comments. This work was partially supported by a Purdue Graduate School Summer Research Grant (to TI) and an NSF CAREER award IIS-2048001.
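For reference, below is a minimal sketch of the uniform model soup construction evaluated above: the weights of several BERT classifiers fine-tuned under different hyperparameter settings are simply averaged, which requires no additional training and adds no inference cost. Checkpoint paths, the binary stance label count, and function names are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch: form a "uniform model soup" by averaging the parameters of
# BERT stance classifiers fine-tuned with different hyperparameters.
import torch
from transformers import AutoModelForSequenceClassification

checkpoint_dirs = [f"checkpoints/bert_hyper{i}" for i in range(1, 11)]  # hypothetical paths

def uniform_soup(checkpoint_dirs, num_labels=2):
    """Average the state dicts of equally weighted fine-tuned models."""
    soup_state = None
    for path in checkpoint_dirs:
        model = AutoModelForSequenceClassification.from_pretrained(path, num_labels=num_labels)
        state = {k: v.float() for k, v in model.state_dict().items()}
        if soup_state is None:
            soup_state = {k: v.clone() for k, v in state.items()}
        else:
            for k in soup_state:
                soup_state[k] += state[k]
    soup_state = {k: v / len(checkpoint_dirs) for k, v in soup_state.items()}

    # Load the averaged weights into a single model; inference cost is unchanged.
    soup_model = AutoModelForSequenceClassification.from_pretrained(
        checkpoint_dirs[0], num_labels=num_labels
    )
    soup_model.load_state_dict(soup_state)
    return soup_model
```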
2307.08720
ivrit.ai: A Comprehensive Dataset of Hebrew Speech for AI Research and Development
We introduce "ivrit.ai", a comprehensive Hebrew speech dataset, addressing the distinct lack of extensive, high-quality resources for advancing Automated Speech Recognition (ASR) technology in Hebrew. With over 3,300 speech hours and a over a thousand diverse speakers, ivrit.ai offers a substantial compilation of Hebrew speech across various contexts. It is delivered in three forms to cater to varying research needs: raw unprocessed audio; data post-Voice Activity Detection, and partially transcribed data. The dataset stands out for its legal accessibility, permitting use at no cost, thereby serving as a crucial resource for researchers, developers, and commercial entities. ivrit.ai opens up numerous applications, offering vast potential to enhance AI capabilities in Hebrew. Future efforts aim to expand ivrit.ai further, thereby advancing Hebrew's standing in AI research and technology.
Yanir Marmor, Kinneret Misgav, Yair Lifshitz
2023-07-17T04:19:30Z
http://arxiv.org/abs/2307.08720v1
# ivrit.ai: A Comprehensive Dataset of Hebrew Speech for AI Research and Development ###### Abstract We introduce _ivrit.ai_, a comprehensive Hebrew speech dataset, addressing the distinct lack of extensive, high-quality resources for advancing Automated Speech Recognition (ASR) technology in Hebrew. With over 3,300 speech hours and a over a thousand diverse speakers, _ivrit.ai_ offers a substantial compilation of Hebrew speech across various contexts. It is delivered in three forms to cater to varying research needs: raw unprocessed audio; data post-Voice Activity Detection, and partially transcribed data. The dataset stands out for its legal accessibility, permitting use at no cost, thereby serving as a crucial resource for researchers, developers, and commercial entities. _ivrit.ai_ opens up numerous applications, offering vast potential to enhance AI capabilities in Hebrew. Future efforts aim to expand _ivrit.ai_ further, thereby advancing Hebrew's standing in AI research and technology. ## 1 Introduction Automated Speech Recognition (ASR; also known as speech-to-text) technology holds vast potential for enhancing various processes involving human speech. Nevertheless, its effectiveness is not uniformly distributed across languages. While some languages significantly benefit from ASR tools, others, such as Hebrew, find these technologies to be underwhelming. This often results in the continued use of human scribes when transcriptions are not available. The initial phase of creating precise and efficient ASR tools requires an extensive corpus of high-quality speech data. To our knowledge, such a dataset for Hebrew has not been publicly available until now. We introduce _ivrit.ai_ ("Ivrit" is how Hebrew speakers pronounce the name of their language), a comprehensive Hebrew speech dataset designed for AI research and development. _ivrit.ai_ is designed to empower advanced AI technologies to function smoothly in Hebrew, enabling AI to read, write, listen to, and articulate the language with fluency. We view this as a strategic initiative to provide Hebrew speakers and their communities with access to superior AI technology in a practical manner under a favorable license, thus enabling interested commercial parties to use this data at no cost. _ivrit.ai_ contains approximately 3,300 hours of Hebrew speech, collected from a diverse range of online platforms including podcasts and other audio content. Given the wide variety of speech types and topics within _ivrit.ai_, it serves as a valuable resource for advancing AI in Hebrew. ### Automated Speech Recognition Oral communication is a crucial component of human interaction. It communicates words and meaning through intonation, pitch (i.e., how high or low the voice sounds), pace, and subtle non-verbal cues, together forming the rich tapestry of human conversation. Humans carry out all these complex processes intuitively, instantaneously, and responsively. Although spoken language is a naturally comfortable mode of communication for humans, ASR technology aims to harness this ease and transform it into a format beneficial for computers. It accomplishes this by converting speech signals into written text, thus transcribing oral communication into a computer-readable input. Whereas oral communication is natural for humans, it presents significant challenges for computers due to its complexities. These complexities include physical aspects such as diverse accents, dialects, pitch, multiple speakers, and overlapping speech. 
Specific environmental factors can also add difficulties, such as channel distortions, background noises, and limitations in sampling and compression techniques. Additional complexities are associated with cognitive process aspects. Since spoken language carries inherent meaning, it's crucial for machines to understand words in their re spective contexts to yield accurate transcriptions (e.g., distinguishing between _knight_ and _night_). Further complexities arise during the transcription phase, which involves converting spoken words into written text, such as the need for appropriate punctuation. Previous studies have addressed these challenges by using algorithmic approaches that leverage extensive datasets. For instance, Christensen et al. (2001) introduced statistical prosody models for punctuation annotation, while Wang and Chen (2018) surveyed models for speaker-noise separation. However, these solutions frequently encounter issues with generalization across various settings, including differences in speakers, languages, and environmental factors. Furthermore, these methodologies often face limitations in their accuracy and performance, which can hinder their effectiveness in diverse and complex real-world applications. ### ASR in the age of Large Language Models Large language models (LLMs; e.g., ChatGPT by OpenAI Roumeliotis and Tselikas (2023), Bard by Google Manyika (2023)) are revolutionizing the tools we use and also hold promise for the ASR task. Based on transformer-based architecture Zhao et al. (2023) and vast datasets, LLMs are currently recognized as state-of-the-art (SOTA) for typical natural language processing (NLP) tasks (e.g., Hendy et al. (2023), Alarcon et al. (2021), Luo et al. (2022)). The remarkable success of LLMs in text-based tasks has sparked interest in the field of speech processing to develop solutions for the ASR deficits mentioned earlier, based on similar principles: transformer-based architecture Latif et al. (2023) and large corpora. Currently, ASR models based on these principles (e.g., Whisper Radford et al. (2023), SpeechT5 Ao et al. (2021)) are considered SOTA for most ASR tasks. To fully realize the significant benefits of ASR and its wide range of applications, large, high-quality datasets are essential. These datasets should include a wide vocabulary, diverse speakers, and a variety of topics. Since languages differ in multiple aspects (e.g., phonetics, syntax, and semantic structure), one may utilize datasets tailored to each specific language the ASR should support. At present, only a few languages have access to datasets of sufficient size and quality. Many languages suffer from a lack of resources, which prevents their speakers from maximizing the potential of existing ASR technology. In these languages, a lack of resources poses a major barrier to the effective adoption and utilization of ASR technology. In light of these challenges, it's crucial to support languages in the digital world for fostering linguistic diversity and inclusivity. Making more speech and transcribed speech data available for use could significantly benefit less-prevalent languages. Specifically, in this project, we aim to provide such support for Hebrew speakers. ### Processing Hebrew Speech The processing of Hebrew speech is challenged by the lack of available data. Some innovative monolingual datasets do exist in Hebrew (Sharoni et al. (2023)) and there are some multilingual datasets that include Hebrew Black (2019). 
However, their scope and quality do not meet the requirements needed for training and optimizing robust ASR models. As a result, Hebrew speakers cannot fully benefit from advancements in ASR technology. Therefore, it is crucial to collect and develop comprehensive datasets in Hebrew. ### The Present Dataset The _ivrit.ai_ dataset that we present here has the potential to contribute substantially to various speech and text tasks in Hebrew. This dataset includes over 3,300 hours of Hebrew speech, collected from multiple online sources. The dataset includes a wide range of speakers, varying by gender, origin, and education level, as well as a variety of speech styles (e.g., official lectures, informal conversations), and topics (e.g., sports podcasts, Talmud lessons). Approximately 2.8 million utterances and about 30 million words are included in the dataset. The audio data and transcripts are available for research and development in AI modeling, encompassing both non-commercial and commercial uses. ## 2 Related works Significant advances in ASR and NLP have been made in recent years, as a result of numerous datasets and research studies. However, the Hebrew language has received very little attention in this domain. ### General Speech Datasets and Research Monolingual speech datasets have been developed for specific domains and communication situations, including the ATIS corpus for air travel information requests Hemphill et al. (1990), as well as other corpora containing recordings of meetings Garofolo et al. (2004), telephone conversations Canavan et al. (1997), and broadcast media Garofolo et al. (2004). The original TIMIT collection Garofolo (1993) and many of the broadcast news corpora provide relatively clean audio scenarios in which a single speaker reads from a prepared text, such as TREC Garofolo et al. (2000) and CLEF Federico and Jones (2004). This characteristic restricts the usefulness of the data in more dynamic environments and in more spontaneous situations. There are also collections of more naturally occurring conversational material, such as the CALLHOME corpus Canavan et al. (1997), the Santa Barbara Corpus of Spoken American English Du Bois et al. (2000), and the TED talks corpus Hasebe (2015), as well as podcast corpora, for instance, Spotify Clifton et al. (2020) and MSP Lotfian and Busso (2017) Martinez-Lucas et al. (2020). These collections capture unscripted and spontaneously organized discourse in a conversational setting, including turns, interviews, stretches of monologue, and argumentation. Due to this aspect, this data is more useful for dealing with natural language patterns and real-world dialogue dynamics. None of these datasets include Hebrew speech. ### Multilingual Speech Datasets and Research Current multilingual speech datasets with transcriptions, such as those drawn from conversational telephone speech (IARPA Babel Program Cui et al. (2013)), political speech Wang et al. (2021), and audiobooks Panayotov et al. (2015) Pratap et al. (2020), cover a variety of domains and span approximately 100 languages. However, the representation of Hebrew in these datasets is relatively low. This limitation also extends to multilingual datasets without transcriptions, such as VoxLingua107 Valk and Alumae (2021) and VoxPopuli Wang et al. (2021), which span multiple languages and contain large amounts of unlabeled data. Even in datasets like the one utilizing read versions of the New Testament (MSU Black (2019) and recently MMS Pratap et al. 
(2023)), which cover many languages and provide high-quality alignments, the Hebrew content is not extensive. Unfortunately, the authors don't detail the Hebrew content, but generally suggest around 25 hours per language. The data from these datasets is used to train self-supervised models, build speech recognition systems, and develop language identification models Pratap et al. (2023) Wang et al. (2021). Despite the availability of numerous monolingual and multilingual speech datasets, each with their unique characteristics, limitations, and areas of focus, the representation of Hebrew remains limited. The field continues to evolve with ongoing efforts to develop more comprehensive and diverse datasets that can support a wider range of ASR and NLP research. Thus, there is a clear need for more extensive Hebrew content in these resources. ### Prior Hebrew Speech Datasets An array of datasets spanning academic to non-academic ventures provide resources for Hebrew ASR research. These datasets can be monolingual, containing Hebrew exclusively, or multilingual, encompassing Hebrew along with other languages. Table 1 furnishes a detailed overview of these available Hebrew speech datasets, both mono- and multilingual, outlining their distinct characteristics and inherent limitations. ### Natural Language Processing and Speech Recognition Research in Hebrew Substantial progress has been made in the field of Hebrew NLP, with models like Hebert Chriqui and Yahav (2022), AlephBERT Seker et al. (2021), and AlephBERTGimmel Guetta et al. (2022) setting benchmarks in various textual tasks. These advancements illustrate the potential for an intricate understanding of written Hebrew texts. The current landscape of Hebrew Automatic Speech Recognition (ASR) is largely supported by commercial ASR services like Google Cloud, Microsoft (e.g., via Word), Samsung, IBM Watson, and WIT.ai. These services offer engines with accessible APIs that support Hebrew Silber-Varod et al. (2021). Additionally, openly accessible models like OpenAI's Whisper Radford et al. (2023) and Meta AI's multilingual model Pratap et al. (2023) are available as open sources. This availability allows developers to create their own models for specific tasks. Expanding on these resources, we have contributed to Hebrew ASR development by introducing the _ivrit.ai_ dataset. This open-access dataset, free for all interested parties, equips researchers and developers with the means to further improve their Hebrew ASR models. ### Unlabeled Speech Datasets and Research In the field of ASR, significant progress has been achieved through the application and evolution of unsupervised speech pre-training techniques Wu et al. (2020), semi-supervised learning (self-training) Liu et al. (2022), and the combination of these techniques Xu et al. (2021). However, the research community focuses primarily on ASR tasks utilizing English as the main input. Through their proficient use of a copious amount of unlabeled English speech data, these methods have improved English-centric ASR applications. A recently published unlabeled speech corpus, covering 23 languages, demonstrated the potential of these techniques for more languages Wang et al. (2021). However, Hebrew was not included in this corpus. These results underscore the urgent need for large datasets of Hebrew speech, even if they are not labeled. Significant progress can be expected in Hebrew ASR if these datasets are developed and utilized effectively. 
## 3 Dataset Creation The data and code are openly available at Hugging Face ([https://huggingface.co/ivrit-ai](https://huggingface.co/ivrit-ai)) and the GitHub repository ([https://github.com/yairl/ivrit.ai](https://github.com/yairl/ivrit.ai)). Throughout this section, we will describe the process of creating _ivrit.ai_. This resource, with its variety of speakers and audio qualities, offers a comprehensive representation of the Hebrew language in multiple contexts. Detailed information on how it was collected and preprocessed will be provided. Figure 1 shows a schematic diagram of the dataset creation pipeline. ### Data Acquisition We gathered audio clips from a variety of sources, encompassing both individual and institutional content creators. ivrit.ai's license is specifically designed to enable commercial use of this corpus for training \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline Corpus & Hours & Speakers & Trans. & Type & Topics & License \\ \hline SASPEECH Sharoni et al. (2023) & 30 & 1 & 4M/26A & Mixed & Economy, Politics & Non-commercial \\ HUJI Corpus Marmorgstein and Matalon (2022) & 3.8 & 60 & M & Conversations & General Lifestyle & CC BY 4.0 \\ CoSIH Izre’el et al. (2001) & 12.3 & \(\pm 140\) & M & Conversations & General Lifestyle & Non-commercial \\ MaTACOp Azogui et al. (2016) & 5.3 & 16 & M & Conversations & Map Task framework Anderson et al. (1991) & Non-commercial \\ MLS Pratap et al. (2020) & 126 & 13 & M & Reading & Classic Books & CC BY 4.0 \\ CMU Black (2019) & \(\pm 25\) & - & M & Reading & New Testament & - \\ MMS Pratap et al. (2023) & \(\pm 25\) & - & M & Reading & New Testament & - \\ Whisper Radford et al. (2023) & 688 & - & - & - & - & Not available \\ ivrit.ai & 3,300 & \(+1000\) & A & Mixed & Wide range (economy, politics, science, bible, philosophy, technology, history, etc.) & Augmented CC BY 4.0 (see Availability section) \\ \hline \end{tabular} \end{table} Table 1: Comparison of various speech datasets. The columns show the name of the corpus, total hours of audio available, the number of speakers included, whether the dataset is transcribed or not (M for manual transcription, and A for automatic transcription, the type of speech included (reading, conversations, mixed), the topics covered in the dataset, and the terms of the license for using the dataset. Dash (-) indicates that the data is not available for the corresponding field. Figure 1: Illustration of the Data Pipeline: (1) Downloading the data in accordance with data creator agreements, (2) Processing Voice Activity Detection (VAD) to segment audio based on silences, and (3) Transcribing the segmented audio, which currently relies solely on machine-based process but is planned to incorporate human transcription. AI models, such as speech-to-text or LLM models, while preserving the intellectual property right of the content owner. Every item in the corpus has been provided by a content owner who signed permission to use this item under ivrit.ai's license, thereby permitting the use of their work in this manner. The advantages of such an agreement are twofold. First, it ensures fairness towards the content creators by explicitly stating in the agreement that their work can be used. Second, it provides certainty for research and development entities by confirming that the data has been collected with permission and is distributed within a suitable legal framework. 
### Data Processing The _ivrit.ai_ data we've collected are being released in three distinct datasets, catering to various research needs: * Raw Data: This is the original, unprocessed collection of audio clips. * Data Post-VAD: For this dataset, we have run Voice Activity Detection (VAD) on the raw audio (Team, 2021). This operation segregates short units, ranging from a few seconds to a minute, pinpointing parts where speakers were actively involved. Figure 2 provides insights into the length distribution of the audio pieces post-VAD. * Partially Transcribed Data: This dataset provides a portion of the audio data along with their corresponding transcriptions. ### Transcribed Speech The _ivrit.ai_ dataset has been transcribed using the Whisper ASR tool ((Radford et al., 2023), using the _whisper-small_ model). The transcription process is applied to numerous short audio segments, resulting in 2.8 million transcribed utterances and about 30 million words. ## 4 Dataset Description As the data was collected from multiple contributors and represents a wide range of recordings (narrative podcast, conversation, lesson), we are not able to provide any precise information regarding the speakers. However, we can provide some general information. The _ivrit.ai_ dataset comprises over 3,300 hours of speech from a thousand diverse speakers. The dataset encompasses a wide range of audio file lengths, and Figure 3 provides insights into the distribution of episode lengths within the dataset. Speakers' ages ranged between the 20s and 70s. While some speakers are native Hebrew speakers, for others, Hebrew is the second language (with English, Russian, Arabic, and other languages being their native languages). Figure 3: Histogram depicting the distribution of episode durations in _ivrit.ai_ corpus. The x-axis represents the episode duration in minutes, while the y-axis indicates the frequency of each duration range Figure 2: Distribution of post-VAD audio clips in _ivrit.ai_ corpus. The x-axis represents the segment length in seconds, while the y-axis indicates the frequency of each duration range Languages other than Hebrew, predominantly English, can appear within the corpus at three levels of magnitude. First, as single words borrowed for specific uses, technical or otherwise; second, as slightly longer phrases that have gained popularity in Hebrew usage (for instance, the phrase "having said that" has been adopted into Hebrew conversations as is); and third, there are entire episodes that are conducted solely in English. ## 5 Availability The dataset is publicly accessible on the _ivrit.ai_ website and is distributed under an _ivrit.ai_ license, an augmented CC-BY 4.0 license tailored to allow AI model training for commercial use. Detailed licensing terms can be found in Appendix A. ## 6 Discussion We present here the _ivrit.ai_ dataset, a comprehensive collection of over 3,300 hours of high-quality Hebrew speech, curated to advance AI research in the Hebrew language. This novel dataset consists of a wide range of speech types and topics, ready for use in various applications such as emergency response systems, accessibility tools for the disabled, medical transcription services, and digital voice assistants in the service industry, among others. The _ivrit.ai_ dataset stands out among other Hebrew datasets in its size, diversity, and coverage of different speech styles and domains. 
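For concreteness, a minimal sketch of the processing pipeline described above (VAD segmentation followed by Whisper transcription) is given below. It assumes the VAD cited as (Team, 2021) is Silero VAD and uses the openai-whisper package with the _small_ checkpoint mentioned in Section 3.3; the audio path and other specifics are placeholders rather than the project's actual tooling.

```python
# Sketch: segment an episode with Silero VAD, then transcribe each speech
# segment to Hebrew text with Whisper (small model).
import torch
import whisper

# Silero VAD model plus its helper utilities, loaded via torch.hub.
vad_model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, _, read_audio, *_ = utils

wav = read_audio("episode.mp3", sampling_rate=16000)          # mono, 16 kHz
segments = get_speech_timestamps(wav, vad_model, sampling_rate=16000)

asr = whisper.load_model("small")
for seg in segments:
    chunk = wav[seg["start"]:seg["end"]].numpy()               # sample indices
    result = asr.transcribe(chunk, language="he")
    print(result["text"])
```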
Furthermore, the _ivrit.ai_ dataset offers more legal accessibility than many other datasets, as it is available for industrial use, making it a valuable resource for researchers and developers. Researchers can leverage this dataset to train and evaluate their models, while also utilizing it as a benchmark for performance comparison. Among the limitations of the project, the dataset may be biased in aspects such as gender or age imbalances, which may affect the performance of AI models trained on the dataset. Additionally, the dataset's diversity, considered above as a strength, introduces variability in recording means, speaker characteristics, and background noises. Moreover, deficits in data collection and transcription could impact the dataset's quality or usability. ## 7 Conclusion and Future Work ASR technology holds vast potential for enhancing various human processes. Although generating high-quality, efficient ASR tools is well known, model quality depends on the dataset size. Despite the benefits that can be obtained from ASR tools for some languages, others, such as Hebrew, are underwhelmed by the technology. We introduced the _ivrit.ai_ dataset, a comprehensive collection of over 3,300 hours of Hebrew speech, designed to advance AI research in Hebrew. With a wide range of speech types and topics, the dataset offers many possibilities. In our view, the availability of such a diverse and extensive dataset is a significant step forward in the field of Hebrew ASR and NLP research. This dataset has the potential to improve multiple ASR-based systems' accuracy and performance. Dataset acquisition tends to be effort-intensive, and fraught with legal difficulties due to copyright requirements that often conflict with standard licenses. _ivrit.ai_ aims to create the world's largest freely-available audio dataset in Hebrew, fully transcribed, and fully available for the specific purpose of training ASR and AI models. Looking forward, we plan to further expand the _ivrit.ai_ dataset, increase the corpus by another order of magnitude and promote applied developments based on the dataset, particularly in specific domains. Community involvement and collaboration will be crucial to these efforts. By making the dataset widely accessible, the aim is to place Hebrew at the forefront of AI research and technology. ## 8 Acknowledgments We would like to express our deepest gratitude to all the content creators who generously allowed us to use their data for this project. Their contributions have been invaluable in advancing AI research in Hebrew. The full list of data contributors is updated and available on the _ivrit.ai_ website. We also extend our heartfelt thanks to Adv. Eli Greenbaum from Yigal Arnon & Co., who generously provided his legal expertise pro bono to draft the license for this open data project. His contribution has been instrumental in ensuring the accessibility and wide distribution of the _ivrit.ai_ dataset. Your collective support and contributions have been instrumental in the success of this project, and we look forward to seeing the advancements in AI research that the _ivrit.ai_ dataset will facilitate.
2304.11721
A Lightweight Constrained Generation Alternative for Query-focused Summarization
Query-focused summarization (QFS) aims to provide a summary of a document that satisfies information need of a given query and is useful in various IR applications, such as abstractive snippet generation. Current QFS approaches typically involve injecting additional information, e.g. query-answer relevance or fine-grained token-level interaction between a query and document, into a finetuned large language model. However, these approaches often require extra parameters \& training, and generalize poorly to new dataset distributions. To mitigate this, we propose leveraging a recently developed constrained generation model Neurological Decoding (NLD) as an alternative to current QFS regimes which rely on additional sub-architectures and training. We first construct lexical constraints by identifying important tokens from the document using a lightweight gradient attribution model, then subsequently force the generated summary to satisfy these constraints by directly manipulating the final vocabulary likelihood. This lightweight approach requires no additional parameters or finetuning as it utilizes both an off-the-shelf neural retrieval model to construct the constraints and a standard generative language model to produce the QFS. We demonstrate the efficacy of this approach on two public QFS collections achieving near parity with the state-of-the-art model with substantially reduced complexity.
Zhichao Xu, Daniel Cohen
2023-04-23T18:43:48Z
http://arxiv.org/abs/2304.11721v1
# A Lightweight Constrained Generation Alternative for Query-focused Summarization ###### Abstract. Query-focused summarization (QFS) aims to provide a summary of a document that satisfies information need of a given query and is useful in various IR applications, such as abstractive snippet generation. Current QFS approaches typically involve injecting additional information, e.g. query-answer relevance or fine-grained token-level interaction between a query and document, into a fine-tuned large language model. However, these approaches often require extra parameters & training, and generalize poorly to new dataset distributions. To mitigate this, we propose leveraging a recently developed constrained generation model Neurological Decoding (NLD) as an alternative to current QFS regimes which rely on additional sub-architectures and training. We first construct lexical constraints by identifying important tokens from the document using a lightweight gradient attribution model, then subsequently force the generated summary to satisfy these constraints by directly manipulating the final vocabulary likelihood. This lightweight approach requires no additional parameters or finetuning as it utilizes both an off-the-shelf neural retrieval model to construct the constraints and a standard generative language model to produce the QFS. We demonstrate the efficacy of this approach on two public QFS collections achieving near parity with the state-of-the-art model with substantially reduced complexity. Query-focused Summarization, Constrained Generation + Footnote †: 2023 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-4048-6/23/07. [https://doi.org/10.1145/3539618.3591936](https://doi.org/10.1145/3539618.3591936)
## 1. Introduction In modern search systems, users are often presented with short _snippets_ of a candidate document on their search results page.
This snippet serves as a critical element in helping users determine whether a document satisfies their information needs without requiring them to invest additional time. The effectiveness of a snippet largely depends on its ability to accurately and concisely capture the relevant information from the corresponding document in just a few lines of text (Cheng et al., 2017; Chen et al., 2017). This task of query-focused summarization (QFS) snippet generation, commonly referred to as query-biased summarization (Zhichao Xu and Daniel Cohen, 2013) or abstractive snippet generation (Zhichao Xu and Daniel Cohen, 2013), aims to construct a summary that succinctly addresses the information need of a query by extracting essential information from a document. Traditionally, QFS has used extractive methods that rely on the most relevant spans of text from a candidate document based on the prevalence of query terms (Cheng et al., 2017; Chen et al., 2017). Although efficient, this extractive approach is constrained by the format of the original document, with the effectiveness To achieve this, we first identify the most critical tokens from the ranking model using the gradient signal of each token (Zhu et al., 2018) as these salient terms capture the most important aspects of the document's relevance to the query. We then convert these tokens into predicate logic constraints and use them as input to a version of constrained generation, Neurological Decoding (Kang et al., 2018). By constraining the LM to simultaneously satisfy these constraints and maintain fluency, we generate an abstractive summary that is optimized for relevance to the query. This approach allows us to effectively generate snippets without requiring additional complex modules or training methods, making it a lightweight yet effective alternative to the current state-of-the-art method. Our experiments on two benchmark snippet generation datasets (Kang et al., 2018; Liu et al., 2018) demonstrate that this application of relevance-constrained QFS achieves comparable results to the current state-of-the-art method, suggesting a promising alternative perspective to the snippet generation task. ## 2. Related Work Query-focused SummarizationTo generate a query-focused summary, several studies used an additional query-attention mechanism. QR-BERTSUM-TL (Kang et al., 2018) incorporates query relevance scores into a pre-trained summarization model. Su et al. (Su et al., 2019) propose merging the representation of an answer span predicted by a separate QA model into the Seq2Seq model's training and inference process to enforce the summary's coherence w.r.t. the query. QSG Transformer (Zhu et al., 2018) suggests using a separate graph neural network model to learn per-token representations and fuse them to the Seq2Seq model to effectively generate a QFS. These mechanisms can be viewed as enforcing soft semantic constraints during the generation process, and requires additional modules and parameters to function effectively. We opt for a different approach, i.e. explicitly enforcing lexical constraints during the generation process, without the additional machinery that is necessary to handle the soft semantic constrains. Constrained Generation(or Conditional Generation) is a family of natural language generation (NLG) methods that aim to generate natural language including/excluding a set of specific words, i.e. lexical constraints. 
The NLG domain recipe leverages pre-trained large language models (LLM) finetuned on specific datasets (Li et al., 2018). However, as pointed out by Lu et al. (Lu et al., 2018), such models only fine-tuned in an end-to-end manner do not learn to follow the underlying constraints reliably even when supervised with large amounts of training examples. Therefore, a line of works (Li et al., 2018; Liu et al., 2018; Lu et al., 2018; Lu et al., 2018) in constrained generation proposes to explicitly modify the likelihood of next word prediction in the generation stage, such that the pre-defined lexical constraints can be better satisfied. ## 3. Relevance-Constrained QFS Problem FormulationGiven a query-document pair \((q,d)\), our task is to generate an abstract summarization \(s\), which addresses the information need of the query. We propose addressing this problem by leveraging a relevance-constrained generation. In this section, we first introduce how we construct the set of constraints used by the language model to generate the abstract summary. We then present the constrained generation process itself. Identifying ConstraintsIn order to identify the most effective constraints for QFS, we first assume that each candidate document is relevant to the query. We then use a pointwise cross-entropy loss, \(\mathcal{L}\), to identify how each token contributes to the relevance of the document. To achieve this, we use a saliency based mapping approach to quantify this impact as gradient-based attribution methods have been widely adopted in existing NLP literature (Li et al., 2018; Liu et al., 2018; Liu et al., 2018). Formally, denote an input sequence \((w_{1},w_{2},\cdots,w_{n})\), where \(w_{i}\) is the \(i\)-th token; and \(\mathbf{x}=(\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{n})\) is a sequence of corresponding static token embeddings. Let \(f(\cdot)\) be a function that takes \(\mathbf{x}\) as input and outputs a prediction logit, e.g., a transformer-style model with classification head. The gradients w.r.t. each input token \(w_{i}\) can be regarded as each token's contribution, or _silency_, to the final prediction \(f(\mathbf{x})\). We denote this per token gradient vector as \(\mathbf{a}=(a_{1},a_{2},\cdots,a_{n})\), which is the normalized saliency across all tokens, \[a_{i}=\frac{g(\nabla_{\mathbf{x}_{i}}\mathcal{L},\,\mathbf{x}_{i})}{\sum_{j= 1}^{n}g(\nabla_{\mathbf{x}_{j}}\mathcal{L},\,\mathbf{x}_{j})} \tag{1}\] where \(\mathcal{L}\) denotes the loss between \(f(\mathbf{x})\) and label \(y=1\), and \(g(\cdot,\cdot)\) is the saliency function. While there exists various methods to estimate the saliency via \(g(\cdot,\cdot)\)(Li et al., 2018; Liu et al., 2018; Liu et al., 2018; Liu et al., 2018; Liu et al., 2018), we adopt InteGrad (Liu et al., 2018), as it is robust to input perturbations (Liu et al., 2018). Specifically, InteGrad sums the gradients along the path from a baseline input \(\mathbf{x}^{\prime}_{i}=\mathbf{0}\) to the actual input \(\mathbf{x}_{i}\): \[g(\nabla_{\mathbf{x}_{i}}\mathcal{L},\mathbf{x}_{i})=(\mathbf{x}_{i}-\mathbf{ x}^{\prime}_{j})\times\sum_{k=1}^{m}\frac{\partial f(\mathbf{x}^{\prime}_{i}+ \frac{k}{m}\times(\mathbf{x}_{i}-\mathbf{x}^{\prime}_{j}))}{\partial\mathbf{x} _{i}} \tag{2}\] where \(m\) is the number of steps to interpolate the input \(\mathbf{x}_{i}\) and \(\times\) denotes dot product; thus \(g(\nabla_{\mathbf{x}_{i}}\mathcal{L},\mathbf{x}_{i})\) is a scalar indicating saliency of token \(w_{i}\) before normalization (Eq. 1). 
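To illustrate Eqs. (1)-(2), the sketch below computes Integrated Gradients attributions over a cross-encoder's token embeddings and keeps the highest-scoring tokens. The Captum library and the MS MARCO MiniLM cross-encoder are stand-ins for the paper's DistilBERT reranker, and the stopword filtering is reduced to a crude alphabetic check, so treat this as an assumption-laden sketch rather than the authors' implementation.

```python
# Sketch of the gradient-attribution step: Integrated Gradients over the
# reranker's input embeddings, keeping the top-k salient word pieces.
import torch
from captum.attr import LayerIntegratedGradients
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "cross-encoder/ms-marco-MiniLM-L-6-v2"   # assumed stand-in reranker
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

def relevance_logit(input_ids, attention_mask):
    return model(input_ids=input_ids, attention_mask=attention_mask).logits[:, 0]

def top_k_salient_tokens(query, doc, k=3, steps=10):
    enc = tok(query, doc, return_tensors="pt", truncation=True)
    baseline = torch.full_like(enc["input_ids"], tok.pad_token_id)   # "zero" input x'
    lig = LayerIntegratedGradients(relevance_logit, model.get_input_embeddings())
    attr = lig.attribute(enc["input_ids"], baselines=baseline,
                         additional_forward_args=(enc["attention_mask"],),
                         n_steps=steps)                              # m = 10 steps, as in Eq. (2)
    scores = attr.sum(dim=-1).squeeze(0)                             # one scalar per token
    scores = scores / scores.abs().sum()                             # normalize, roughly Eq. (1)
    tokens = tok.convert_ids_to_tokens(enc["input_ids"].squeeze(0))
    ranked = sorted(zip(tokens, scores.tolist()), key=lambda p: -p[1])
    # The paper additionally restricts to document tokens and removes stopwords;
    # here we only drop special tokens, punctuation, and subword continuations.
    return [t for t, _ in ranked if t.isalpha()][:k]
```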
In our implementation, we follow the original setup in (Liu et al., 2018) and set \(m\) to 10 steps. We note that any differentiable retrieval function can be used in place of \(f(\cdot)\) within this framework. In this paper, we use a standard DistilBERT document reranker trained on MS MARCO using a cross-entropy loss (Li et al., 2018; Liu et al., 2018; Liu et al., 2018; Liu et al., 2018). In our preliminary experiments, we observed that the saliency scores are often noisy, attributing gradients to stopwords and/or punctuations. Therefore, we filter out the stopwords and punctuations in a post hoc manner and only keep the top-3 important tokens from document \(d\) to construct the actual decoding constraints \(\mathcal{C}\). Constructing ConstraintsHaving identified the most salient tokens, we construct the lexical constraints in a format appropriate for constrained generation, Conjunctive Normal Form, \[\mathcal{C}=\underbrace{(D_{1}\lor D_{2}\vee\cdots\lor D_{i})}_{C_{1}}\wedge \cdots\wedge\underbrace{(D_{k}\lor D_{k+1}\vee\cdots\lor D_{n})}_{C_{m}}\] where each single \(D_{i}\) denotes one single positive or negative constraint, which we refer to as a _literal_; and the logical disjunction of literals is referred to as a _clause_, e.g. \(C_{1}\) to \(C_{m}\). In our implementation, we construct 3 clauses with each clause initially consisting of a single literal. We then expand each clause by all possible forms of the original token via WordForms1. An example of this logic corresponding to Row 1, Table 2 is represented as \[\mathcal{C} =\underbrace{(\text{private}\vee\ldots\vee\text{privatization})}_{C_{1}} \wedge\underbrace{(\text{health}\vee\ldots\vee\text{healthy})}_{C_{2}}\] \[\wedge\underbrace{(\text{standard}\vee\ldots\vee\text{standards})}_{C_{3}}\] _Constrained Generation:_ At inference time, we run a simplified version of the Neurological Decoding (NLD) algorithm using the set of constraints C acquired from Section 3. As we do not use negative constraints in QFS, i.e. we do not avoid certain tokens, we consider only two states within the original NLD algorithm: _reversible unsatisfaction_ where an unsatisfied logical clause with a positive literal can be satisfied at a future point and _irreversible satisfaction_ where a positive literal will remain satisfied. This predicate logic is then applied within a conventional beam search during generation. At timestep \(t\), the simplified algorithm performs three individual steps when filling in beam candidates: _Pruning, Grouping_, and _Selecting_. Pruning filters out candidates that are of low likelihood or satisfy fewer clauses; Grouping implicitly constructs the power set of all irreversible satisfied clauses, leading to at most \(2^{|C|}\) groups; and Selecting populates the beam with candidates within each group that are most likely to satisfy remaining reversible unsatisfied clause \(C_{j}\) by modifying the likelihood. Specifically, within each group, the likelihood is modified by the NLD score function: \[L=P_{\theta}(y_{t}|y_{<t})+\lambda\max_{\mathbb{I}(C_{j})=0}\frac{|\hat{D}_{t} |}{|D_{i}|} \tag{3}\] where \(P_{\theta}\) is the likelihood of the LM generating token \(y_{t}\), \(\mathbb{I}(C_{j})\) indicates whether clause \(C_{j}\) has been satisfied or not, \(\frac{|\hat{D}_{t}|}{|D_{i}|}\) is the overlap between the ongoing generation and the partially satisfied literal \(D_{i}\), e.g. \(\hat{D_{i}}=\)"apple" and \(D_{i}=\)"apple tree" yields 0.5, and \(\lambda=0.1\) acts as the hyperparameter. 
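To make the constraint format concrete, the sketch below expands each salient token into a disjunctive clause of its word forms (yielding \(\mathcal{C}=C_{1}\wedge C_{2}\wedge C_{3}\)) and computes a rough, whitespace-token approximation of the bonus in Eq. (3). It assumes the WordForms tool in footnote 1 is the `word_forms` Python package; all names are illustrative, and the subword-level bookkeeping of the actual decoding algorithm is omitted.

```python
# Sketch: CNF clause construction from salient tokens, plus a simplified
# version of the Eq. (3) bonus for partially satisfied positive literals.
from word_forms.word_forms import get_word_forms

def build_cnf(salient_tokens):
    """One clause per salient token: the disjunction (set) of its word forms."""
    clauses = []
    for token in salient_tokens:
        forms = set().union(*get_word_forms(token).values()) or {token}
        clauses.append(forms)
    return clauses

def nld_bonus(generated_words, clauses, lam=0.1):
    """lam * max over unsatisfied clauses of the best partial literal overlap."""
    seen = set(generated_words)
    best = 0.0
    for clause in clauses:
        if clause & seen:
            continue                      # clause already irreversibly satisfied
        for literal in clause:
            parts = literal.split()       # multi-word literals, e.g. "apple tree"
            best = max(best, sum(p in seen for p in parts) / len(parts))
    return lam * best

clauses = build_cnf(["private", "health", "standard"])
print([sorted(c)[:4] for c in clauses])   # peek at a few word forms per clause
```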
Intuitively, this score modification favors candidates moving toward fully satisfying a positive literal within an unsatisfied clause with \(\lambda\) controlling the strength of this signal. After this explicit likelihood modification, we visit each group and select the highest scoring candidate in rotation until the beam is filled. After this process is complete, we select the beam candidate with highest score and proceed to generating the next token at \(t+1\). Although the group construction suggests a high-complexity runtime, implicit construction results in this algorithm having the same runtime complexity as standard beam search (Kang et al., 2018). We use BART (Kang et al., 2018) and T5 (Kang et al., 2018) for fair comparison with existing methods as the generating LM for abstractive QFS. As there exist no additional parameters or modules for this method, details of these backbone LMs are discussed in Section 4. ## 4. Experimental Setup _Datasets_: Following previous works (Kang et al., 2018; Kang et al., 2018), we adopt Debatepedia (Kang et al., 2018) and PubMedQA (Kang et al., 2018) to benchmark the effectiveness of the proposed relevance-constrained generation method. Debatepedia dataset is collected by Nema et al. (Kang et al., 2018) from 663 debates of 53 diverse categories in an encyclopedia of debates and consists of 12K/0.7K/1.0K query-document summarization triplets \((q,d,s)\). PubMedQA is a long-form abstractive question-answering dataset from the biomedical domain with the contexts available. We use the standard train test split from the original datasets. _Compared Methods:_ To evaluate the performance of the proposed relevance-constrained generation method, we introduce the following baseline methods in order of increasing complexity: * **End-to-End approaches**: Transformer (Vaswani et al., 2017), BART (Kang et al., 2018) and T5 (Kang et al., 2018) are finetuned for Seq2Seq summarization. These LMs additionally act as the backbone LM for the proposed relevance-constrained QFS approach, i.e. Constrained-BART and Constrained-T5 such that the results are directly comparable. In this configuration, there are no constraints during the generation process. * **Improved query-document cross attention**: SD2 (Kang et al., 2018) adds additional cross attention between query and document encoder, then uses the combined representation for generation. CSA Transformer (Vaswani et al., 2017) adds conditional self-attention layers originally designed for conditional dependency modeling to the Seq2Seq model. * **Incorporated query-document relevance**: QR-BERTSUM-TL (Kang et al., 2018) injects query relevance scores into pretrained Seq2Seq summarization model; MSG (Chen et al., 2019) utilizes query relevance and interrelation between sentences of the document for fine-grained representation. Similarly, BART-QFS (Kang et al., 2018) also uses a pre-trained QA model to determine answer relevance in the document and injects this information into the Seq2Seq LM model. * **Additional module utilization**: QSG-BART (Kang et al., 2018) utilizes an additional graph neural network module to model token-level interaction between query and document, and injects this information into Seq2Seq model. It reaches state-of-the-art performance on the QFS task, but requires additional parameters and training. 
_Evaluation Metrics:_ We evaluate the effectiveness of Constrained QFS with ROUGE-1, ROUGE-2, and ROUGE-L (Kang et al., 2018) for fair comparison to existing works (Kang et al., 2018; Kang et al., 2018). _Implementation Details:_ We adopt an off-the-shelf Cross Encoder model2 as our saliency model. We identify the top-3 important tokens with Eq.2 and construct constraints as \(\mathcal{C}=C_{1}\wedge C_{2}\wedge C_{3}\). \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c}{Debatepedia} & \multicolumn{3}{c}{PubMedQA} \\ \cline{2-7} & R-1 & R-2 & R-1 & R-2 & R-1 \\ \hline Transformer & 41.7 & 33.6 & 41.3 & 30.4 & 8.4 & 22.3 \\ SD2 & 41.3 & 18.8 & 40.4 & 32.3 & 10.5 & 26.0 \\ CSA Transformer & 46.4 & 37.5 & 45.9 & - & - & - \\ QR-BERTSUM-TL & 48.0 & 45.2 & 57.1 & - & - & - \\ MSG & - & - & - & 37.2 & 14.8 & **30.2** \\ BART-QFS & 59.0 & 44.6 & 57.4 & - & - & - \\ _QSG BART_ & **64.9** & **52.3** & **63.3** & **38.4** & **17.0** & 29.8 \\ \hline T5 & 22.5 & 7.1 & 19.7 & 38.0 & 15.3 & 28.2 \\ Constrained-T5 & 32.2\({}^{\dagger}\) & 12.5\({}^{\dagger}\) & 28.2\({}^{\dagger}\) & 36.4 & 16.0\({}^{\dagger}\) & 28.7\({}^{\dagger}\) \\ - Rel. Improv. (\%) & +43.1 & +76.1 & +43.2 & -4.3 & +4.6 & +1.8 \\ BART & 58.1 & 43.6 & 56.8 & 38.1 & 15.7 & 27.2 \\ Constrained-BART & **62.9\({}^{\dagger}\)** & **50.1\({}^{\dagger}\)** & **61.5\({}^{\dagger}\)** & **39.2\({}^{\dagger}\)** & **17.1\({}^{\dagger}\)** & **30.1\({}^{\dagger}\)** \\ - Rel. Improv. (\%) & +8.3 & +14.9 & +8.3 & +2.9 & +8.9 & +10.6 \\ \hline \hline \end{tabular} \end{table} Table 1. Results on test set, including ROUGE-1, ROUGE-2 and ROUGE-L, baseline results (the first section) are from (Kang et al., 2018); _Italic_ indicates the best performing system in literature. \({}^{\dagger}\) denotes the constrained method significantly better than its unconstrained counterparts with paired t-test at 0.05 level We experiment with two pre-trained Seq2Seq models as the base generator, T5 (Zhu et al., 2017) and BART (Zhu et al., 2017). Different from previous works BART-QFS and QSG BART (Zhu et al., 2017; Zhang et al., 2017), we do not warm start BART or T5 by pre-finetuning on existing abstractive summarization datasets; instead we only finetune them on our target datasets Debatepedia and PubMedQA. For T5, we format the input as Summarize: Document: \(d\)Question: \(q\): and finetune the model weights on each dataset's training set with golden references. At inference time, we use the same input format and finetuned model weights for relevance-constrained generation/generation. For BART, we format the input as [CLS] \(d\)[SEP] \(q\)[EOS], where [CLS], [SEP], [EOS] are special tokens indicating start, separate and end of sequence, then we finetune and generate text in a similar fashion to T5. For both models, we finetune with AdamW optimizer (Kingmae and Ba, 2014), learning rate \(2e-5\) and early stop after no improvements on the dev set for three consecutive epochs. We make our code publicly available at [https://github.com/zhichaoxu-shufe/Constrained-QFS](https://github.com/zhichaoxu-shufe/Constrained-QFS). ## 5. Results and Analysis We address two RQs in this section: * **RQ1**: How competitive is the proposed constrained generation method in terms of performance compared to baselines? * **RQ2**: How does constrained generation affect QFS performance? To answer **RQ1**, shown in Table 1, we observe that the relevance-constrained methods achieve competitive performance on two datasets. 
On the Debatepedia dataset, Constrained-BART achieves near parity with the current state-of-the-art system and substantially outperforms all other baselines. This result is particularly interesting given the reduced complexity of Constrained-BART. On the PubMedQA dataset, Constrained-BART achieves slightly better performance than QSG BART. A possible explanation for this improved performance might be the length of the documents in PubMedQA, where the relevance-constrained process results in a more consistent snippet. We therefore conclude that the proposed relevance-constrained generation paradigm can achieve competitive performance without additional parameters or finetuning. To answer **RQ2**, we specifically draw a comparison between the proposed methods and their unconstrained baselines, which were finetuned end-to-end and generated QFS without constraints. In the second section of Table 1, we observe that the proposed constrained generation methods consistently outperform their unconstrained counterparts across different datasets and backbone LMs. For instance, on the Debatepedia dataset, Constrained-BART outperforms BART by 14.9% in R-2. Therefore, we conclude that by adding carefully constructed constraints to the generation stage, the performance of the QFS task can be significantly improved without modifying the backbone LMs. _Qualitative Analysis_: We show two examples in Table 2. In the first example, the BART generation hallucinates "public owners", which is not faithful to the document; Constrained-BART, however, is able to successfully summarize the document since \(\mathcal{C}\) contains "privatization". In the second example, despite the underspecified query, the saliency model still extracts critical tokens, which aid in the generation of a meaningful summary. _Ablation Study:_ In Table 3 we study the effect of different sources of constraints. Query-only denotes that the top-3 important tokens are from the query, and analogously for Document-only and Query+Document. We observe that on the Debatepedia dataset, Document-only constraints significantly outperform the other two approaches, while on PubMedQA this improvement is minor. After manual examination, we find that the golden references in the Debatepedia dataset overlap more with the documents than with the queries, while PubMedQA does not adhere to this trend. ## 6. Conclusion and Future Work In this work, our lightweight relevance-constrained generation approach achieves competitive performance compared to the state-of-the-art method, and it can easily generalize to new domains provided an effective retrieval model exists to guide the constraint construction. Our future work may involve investigating the effectiveness and summarization faithfulness/factuality of this approach in real-world IR systems. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Constraints} & \multicolumn{3}{c}{Debatepedia} & \multicolumn{3}{c}{PubMedQA} \\ \cline{2-7} & R-1 & R-2 & R-L & R-1 & R-2 & R-L \\ \hline Query-only & 61.4 & 48.4 & 59.9 & 39.0 & 16.9 & 29.9 \\ Document-only & **62.9\({}^{\dagger}\)** & **50.1\({}^{\dagger}\)** & **61.5\({}^{\dagger}\)** & **39.2** & **17.1** & **30.1** \\ Query+Document & 61.5 & 48.4 & 60.1 & 38.9 & 17.0 & 29.7 \\ \hline \hline \end{tabular} \end{table} Table 3. Effect of the source of constraints on QFS performance of Constrained-BART. \({}^{\dagger}\) denotes significantly better than the other two methods under a paired t-test at the 0.05 level. 
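As a complement to the implementation details in Section 4, the snippet below sketches the input formats used for the two backbone LMs. It is an illustrative sketch only: the BART special-token strings are placeholders that depend on the tokenizer, and none of this is taken from the authors' code.

```python
def format_t5(document: str, query: str) -> str:
    # T5 input: "Summarize: Document: <d> Question: <q>"
    return f"Summarize: Document: {document} Question: {query}"

def format_bart(document: str, query: str,
                cls: str = "<s>", sep: str = "</s>", eos: str = "</s>") -> str:
    # BART input: [CLS] d [SEP] q [EOS]; the special-token strings here are
    # placeholders, not necessarily those used in the original experiments.
    return f"{cls} {document} {sep} {query} {eos}"

doc = "private companies are profit-maximizing entities ..."
query = "is water privatization a good idea?"
print(format_t5(doc, query))
print(format_bart(doc, query))
```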
\begin{table} \begin{tabular}{l|l|l|l|l} \hline \hline Query & Document & BART Generation & Constrained-BART Generation & Golden Reference \\ \hline privatization: is water [...] companies versus a global commons & private companies are profit-maximizing entities [...] environmental and health **standards** as obstructive to their profit interests. this is a problem particularly in the context of water which is fundamentally important to the environmental health and life. & environmental and health standards are often violated by public owners of water companies & environmental and health standards are often violated by water privatization & environmental and health standards are often violated by private ownership of water \\ \hline \hline \end{tabular} \end{table} Table 2. Sample qualitative study on the Debatepedia dataset; **bold** tokens are marked salient and included in the constraint set \(\mathcal{C}\). ## 7. Acknowledgement Zhichao Xu is supported partially by NSF IIS-2205418 and NSF DMS-2134223. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.
2305.11408
AlignAtt: Using Attention-based Audio-Translation Alignments as a Guide for Simultaneous Speech Translation
Attention is the core mechanism of today's most widely used architectures for natural language processing and has been analyzed from many perspectives, including its effectiveness for machine translation-related tasks. Among these studies, attention has proven to be a useful source of information for gaining insights about word alignment, also when the input text is replaced with audio segments, as in the case of the speech translation (ST) task. In this paper, we propose AlignAtt, a novel policy for simultaneous ST (SimulST) that exploits the attention information to generate source-target alignments that guide the model during inference. Through experiments on the 8 language pairs of MuST-C v1.0, we show that AlignAtt outperforms previous state-of-the-art SimulST policies applied to offline-trained models, with BLEU gains of 2 points and latency reductions ranging from 0.5s to 0.8s across the 8 languages.
Sara Papi, Marco Turchi, Matteo Negri
2023-05-19T03:31:42Z
http://arxiv.org/abs/2305.11408v2
# AlignAtt: Using Attention-based Audio-Translation Alignments as a Guide for Simultaneous Speech Translation ###### Abstract Attention is the core mechanism of today's most widely used architectures for natural language processing and has been analyzed from many perspectives, including its effectiveness for machine translation-related tasks. Among these studies, attention has proven to be a useful source of information for gaining insights about word alignment, also when the input text is replaced with audio segments, as in the case of the speech translation (ST) task. In this paper, we propose AlignAtt, a novel policy for simultaneous ST (SimulST) that exploits the attention information to generate source-target alignments that guide the model during inference. Through experiments on the 8 language pairs of MuST-C v1.0, we show that AlignAtt outperforms previous state-of-the-art SimulST policies applied to offline-trained models, with BLEU gains of 2 points and latency reductions ranging from 0.5\(s\) to 0.8\(s\) across the 8 languages. Sara Papi, Marco Turchi, Matteo Negri

The policy checks whether each token \(y_{i}\) attends to the last \(f\) frames or not. If this condition is verified, the emission is stopped, under the assumption that, if a token is aligned with the most recently received audio frames, the information they provide can be insufficient to generate that token (i.e. the system has to wait for additional audio input). Specifically, starting from the first token, we iterate over the prediction \(\mathbf{y}\) and continue the emission until: \[Align_{i}\notin\{n-f+1,...,n\}\] which means that we stop the emission as soon as we find a token that mostly attends to one of the last \(f\) frames. Thus, \(f\) is the parameter that directly controls the latency of the model: smaller \(f\) values mean fewer frames to be considered inaccessible by the model, consequently implying a lower chance that our stopping condition is verified and, in turn, lower latency. The process is formalized in Algorithm 1.

```
Require: \(Align\), \(f\), \(\mathbf{y}\)
  \(i \gets 1\)
  \(prediction \gets [\ ]\)
  \(stop \gets False\)
  while \(stop \neq True\) do
    if \(Align_{i} \in \{n-f+1,...,n\}\) then
      \(stop \gets True\)    ▷ inaccessible frame
    else
      \(prediction \gets prediction + y_{i}\)
      \(i \gets i + 1\)
    end if
  end while
```
**Algorithm 1** AlignAtt

Since in SimulST the source speech input \(\mathbf{x}\) is incrementally received and its length \(n\) is increased at every time step \(t\), applying the AlignAtt policy means applying Algorithm 1 at each time step to emit (or not) the partial hypothesis until the input \(\mathbf{x}(t)\) has been entirely received. ## 3 Experimental Settings ### Data We train one model for each of the 8 languages of MuST-C v1.0 [15], namely English (en) to Dutch (nl), French (fr), German (de), Italian (it), Portuguese (pt), Romanian (ro), Russian (ru), and Spanish (es). We filter out segments longer than 30s from the training set to optimize GPU RAM consumption. We also apply sequence-level knowledge distillation [18] to increase the size of our training set and improve performance. To this aim, we employ NLLB 3.3B [19] as the MT model to translate the English transcripts of the training set into each of the 8 languages, and we use the automatic translations together with the gold ones during training. As a result, the final number of target sentences is twice the original one while the speech input remains unaltered. 
The performance of the NLLB 3.3B model on the MuST-C v1.0 test set is shown in Table 1. ### Architecture and Training Setup The model is made of 12 Conformer [20] encoder layers and 6 Transformer decoder layers, having 8 attention heads each. The embedding size is set to 512 and the feed-forward layers are composed of 2,048 neurons, with \(\sim\)115M parameters in total. The input is represented by 80 log Mel-filterbank audio features extracted every 10 \(ms\) with a sample window of 25 \(ms\), and pre-processed by two 1D convolutional layers with stride 2 to reduce the input length by a factor of 4 [21]. Dropout is set to 0.1 for attention, feed-forward, and convolutional layers. The kernel size is 31 for both point- and depth-wise convolutions in the Conformer encoder. The SentencePiece-based [22] vocabulary size is 8,000 for the translation and 5,000 for the transcript. The Adam optimizer with label-smoothed cross-entropy loss (smoothing factor 0.1) is used during training, together with a CTC loss [23] to compress the audio input representation and speed up inference [24]. The learning rate is set to \(5\cdot 10^{-3}\) with Noam scheduler and 25,000 warm-up steps. Utterance-level Cepstral Mean and Variance Normalization (CMVN) and SpecAugment [25] are also applied during training. Trainings are performed on 2 NVIDIA A40 GPUs with 40GB RAM. We set 40k as the maximum number of tokens per mini-batch, update frequency 4, and 100,000 maximum updates (\(\sim\)28 hours). Early stopping is applied during training if the validation loss does not improve for 10 epochs. We use the bug-free implementation of fairseq-ST [26]. \begin{table} \begin{tabular}{l l l l l l l l l l} \hline Model & de & es & fr & it & nl & pt & ro & ru & Avg \\ \hline NLLB & 33.1 & 38.5 & 46.5 & 34.4 & 37.7 & 40.4 & 32.8 & 23.5 & 35.9 \\ \hline \end{tabular} \end{table} Table 1: _BLEU results on all the language pairs of MuST-C v1.0 tst-COMMON of the NLLB 3.3B model._ Figure 1: _Example of the AlignAtt policy with \(f=2\) at consecutive time steps \(t_{1}\) (a) and \(t_{2}\) (b)._
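As an illustration of Algorithm 1, the following minimal NumPy sketch reproduces the AlignAtt decision rule on a matrix of cross-attention scores. It is a sketch under the assumption that the attention matrix for the current partial hypothesis is available as a 2D array, and it does not reflect the released implementation.

```python
import numpy as np

def alignatt_emit(attn: np.ndarray, f: int) -> list:
    """attn: [num_tokens, num_frames] cross-attention scores for the current
    partial hypothesis; returns the indices of tokens that may be emitted now.

    A token is emitted only if the frame it attends to most is NOT among the
    last f frames; otherwise emission stops (Algorithm 1)."""
    num_tokens, num_frames = attn.shape
    align = attn.argmax(axis=1)            # Align_i: frame receiving the highest attention from y_i
    emitted = []
    for i in range(num_tokens):
        if align[i] >= num_frames - f:     # aligned with an "inaccessible" recent frame
            break
        emitted.append(i)
    return emitted

# toy example: 3 tokens of a partial hypothesis, 10 received frames, f = 2
rng = np.random.default_rng(0)
attn = rng.random((3, 10))
print(alignatt_emit(attn, f=2))
```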
### Terms of Comparison We conduct experimental comparisons with the other SimulST policies that can be applied to offline systems, i.e. policies that do not require training nor adaptation to be run, namely: * **Local Agreement (LA)** [6]: the policy used by [35] to win the SimulST task at the IWSLT 2022 evaluation campaign [1]. With this policy, a partial hypothesis is generated each time a new speech segment is added as input, and it is emitted, entirely or partially, if the previously generated hypothesis is equal to the current one. We adapted the docker released by the authors to Fairseq-ST [21]. Different latency regimes are obtained by varying the speech segment length \(T_{s}\). * **Wait-k** [36]: the most popular policy originally published for simultaneous machine translation and then adapted to SimulST [2, 4]. It consists in waiting for a predefined number of words (\(k\)) before starting to alternate between writing a word and waiting for new input. We employ adaptive word detection guided by the CTC prediction to detect the number of words in the speech as in [4, 5]. * **EDAtt** [37]: the only existing policy that exploits the attention mechanism to guide the inference. Contrary to our policy, which computes audio-text alignments starting from the attention scores, in EDAtt the attention scores of the last \(\lambda\) frames are summed and a threshold \(\alpha\) is used to trigger the emission. While \(\alpha\) handles the latency, \(\lambda\) is a hyper-parameter that has to be empirically determined on the validation set. This represents the main flaw of this policy since, in theory, \(\lambda\) has to be estimated for each language. Here, we set \(\lambda=2\) following the authors' findings. ### Inference and Evaluation For inference, the input features are computed on the fly and global CMVN is applied as in [3]. We use the SimulEval tool [38] to compare AlignAtt with the above policies. For the LA policy, we set \(T_{s}=[10,15,20,25,30]\)1; for the wait-k, we vary \(k\) in \([2,3,4,5,6,7]\)2; for EDAtt, we set \(\alpha=[0.6,0.4,0.2,0.1,0.05,0.03]\)3; for AlignAtt, we vary \(f\) in \([2,4,6,8,10,12,14]\). Moreover, to be comparable with EDAtt, for our policy we extract the attention weights from the 4th decoder layer and average across all the attention heads. All inferences are performed on a single NVIDIA TESLA K80 GPU with 12GB of RAM as in the IWSLT Simultaneous evaluation campaigns [39, 1]. We use sacreBLEU (\(\uparrow\)) [40]4 to evaluate translation quality and Length Adaptive Average Lagging [41] - or LAAL (\(\downarrow\)) - to measure latency.5 As suggested by [3], we report the computational-aware version of LAAL6 that accounts for the real elapsed time instead of the ideal one, consequently providing a more realistic latency measure. Footnote 1: Smaller values of \(T_{s}\) do not improve computational aware latency. Footnote 2: We do not report results obtained with \(k=1\) since the translation quality highly degrades. Footnote 3: These are the same values indicated by the authors of the policy. ## 4 Results In this section, we present the results of our offline systems trained for each language pair of MuST-C v1.0 to show their competitiveness compared to the systems published in the literature (Section 4.1) and the results of the AlignAtt policy compared to the other policies presented in Section 3.3 (Section 4.2). ### Offline Results To provide an upper bound to the simultaneous performance and show the competitiveness of our models, we present in Table 2 the offline results of the systems trained on all the language pairs of MuST-C v1.0 compared to systems published in the literature that report results for all languages. As we can see, our offline systems outperform the others on all but 2 language pairs, namely on en\(\rightarrow\){es, fr, it, nl, pt, ro}, achieving the new state of the art in terms of translation quality. BLEU gains are more evident for en\(\rightarrow\)fr and en\(\rightarrow\)it, for which we obtain improvements of about 1 BLEU point, while they amount to about 0.5 BLEU points for the other languages. Concerning the other 2 languages (de, ru), our en\(\rightarrow\)ru model achieves a similar result (18.4 vs 18.5 BLEU) to that obtained by the best model for that language (XSTNet [29]), with only a 0.1 BLEU drop. Moreover, our system reaches a slightly worse but competitive result for en\(\rightarrow\)de (28.0 vs 28.7 BLEU) compared to STEMM [33], which instead makes use of a relevant amount of external speech data, and it also outperforms all the other systems for this language direction. On average, our approach stands out as the best one even if it does not involve the use of external speech data: it obtains an average of 29.4 BLEU across languages, which corresponds to 0.5 to 4.6 BLEU improvements compared to the published ST models. 
\begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Ext. Data} & \multirow{2}{*}{de} & \multirow{2}{*}{es} & \multirow{2}{*}{fr} & \multirow{2}{*}{it} & \multirow{2}{*}{nl} & \multirow{2}{*}{pt} & \multirow{2}{*}{ro} & \multirow{2}{*}{ru} & \multirow{2}{*}{Avg} \\ \cline{2-3} & Speech & Text & & & & & & & & & \\ \hline Fairseq-ST [21] & - & - & 22.7 & 27.2 & 32.9 & 22.7 & 27.3 & 28.1 & 21.9 & 15.3 & 24.8 \\ ESPnet-ST [27] & - & - & 22.9 & 28.0 & 32.8 & 23.8 & 27.4 & 28.0 & 21.9 & 15.8 & 25.1 \\ Chimera [28] & ✓ & ✓ & 27.1 & 30.6 & 35.6 & 25.0 & 29.2 & 30.2 & 24.0 & 17.4 & 27.4 \\ W-Transf. [29] & ✓ & - & 23.6 & 28.4 & 34.6 & 24.0 & 29.0 & 29.6 & 22.4 & 14.4 & 25.8 \\ XSTNet [29] & ✓ & ✓ & 27.8 & 30.8 & 38.0 & 26.4 & 31.2 & 32.4 & 25.7 & **18.5** & 28.9 \\ LNA-E.D [30] & ✓ & ✓ & 24.3 & 28.4 & 34.6 & 24.4 & 28.3 & 30.5 & 23.3 & 15.9 & 26.2 \\ LightweightAdaptor [31] & - & - & 24.6 & 28.7 & 34.8 & 25.0 & 28.8 & 31.0 & 23.7 & 16.4 & 26.6 \\ E2E-ST-TDA [32] & ✓ & ✓ & 25.4 & 29.6 & 36.1 & 25.1 & 29.6 & 31.1 & 23.9 & 16.4 & 27.2 \\ STEMM [33] & ✓ & ✓ & **28.7** & 31.0 & 37.4 & 25.8 & 30.5 & 31.7 & 24.5 & 17.8 & 28.4 \\ ConST [34] & ✓ & - & 25.7 & 30.4 & 36.8 & 26.3 & 30.6 & 32.0 & 24.8 & 17.3 & 28.0 \\ \hline ours & - & ✓ & 28.0 & **31.5** & **39.0** & **27.3** & **31.8** & **32.9** & **26.3** & 18.4 & **29.4** \\ \hline \hline \end{tabular} \end{table} Table 2: BLEU results on MuST-C v1.0 tst-COMMON. “Ext. Data” means that external data has been used for training: “Speech” means that either unlabelled or labelled additional speech data is used to train or initialize the model, “Text” means that either machine-translated or monolingual texts are used to train or initialize the model. “Avg” means the average over the 8 languages. ### Simultaneous Results Having demonstrated the competitiveness of our offline models, we now apply the SimulST policies introduced in Section 3.3 to the same offline ST model for each language pair of MuST-C v1.0. Figure 2 shows the results in terms of latency-quality trade-off (i.e. LAAL (\(\downarrow\)) - BLEU (\(\uparrow\)) curves). As we can see, our AlignAtt policy is the only policy, together with EDAtt, capable of reaching a latency lower than or equal to \(2s\) for all the 8 languages.7 Specifically, LA curves start at around \(2.5s\) or more for all the language pairs, even if they are able to achieve high translation quality towards \(3.5s\), with a 1.2 average drop in terms of BLEU across languages compared to the offline inference. Similarly, the wait-k curves start at around 2/\(2.5s\) but are not able to reach high translation quality even at high latency (LAAL approaching \(3.5s\)), therefore scoring the worst results. Compared to these two policies, AlignAtt shows a LAAL reduction of up to \(0.8s\) compared to LA and \(0.5s\) compared to wait-k. Despite achieving latency as low as AlignAtt, the EDAtt policy achieves worse translation quality at almost every latency regime compared to our policy, with drops of up to 2 BLEU points across languages. These performance drops are particularly evident for en\(\rightarrow\)de and en\(\rightarrow\)ru, where the latter represents the most difficult language pair also in offline ST (it is the only language with less than 20 BLEU in Table 2). 
The evident differences in the AlignAtt and EDAtt policy behaviors, especially in terms of translation quality, prove that, despite both exploiting attention scores as a source of information, the decisions taken by the two policies are intrinsically different. Moreover, AlignAtt is the closest policy to achieving the offline results of Table 2, with less than 1.0 BLEU average drop versus 1.8 of EDAtt. Footnote 7: The maximum acceptable latency limit is set between \(2s\) and \(3s\) by most works on simultaneous interpretation [42, 43]. We can conclude that, on all the 8 languages of MuST-C v1.0, the AlignAtt policy achieves a lower latency compared to both wait-k and LA, and an improved translation quality compared to EDAtt, therefore representing the new state-of-the-art SimulST policy applicable to offline ST models. ## 5 Conclusions We presented AlignAtt, a novel policy for SimulST that leverages the audio-translation alignments obtained from the cross-attention scores to guide an offline-trained ST model during simultaneous inference. Results on all 8 languages of MuST-C v1.0 showed the effectiveness of our policy compared to the existing ones, with gains of 2 BLEU points and a latency reduction of 0.5-0.8\(s\), achieving the new state of the art. Code, offline ST models, and simultaneous outputs are released open source to help the reproducibility of our work. Figure 2: LAAL-BLEU curves for all the 8 language pairs of MuST-C tst-COMMON. AlignAtt is compared to the SimulST policies presented in Section 3.3. Latency (LAAL) is computationally aware and expressed in seconds (\(s\)).
2306.10246
Conceptual Study and Performance Analysis of Tandem Dual-Antenna Spaceborne SAR Interferometry
Multi-baseline synthetic aperture radar interferometry (MB-InSAR), capable of mapping a 3D surface model with high precision, is able to overcome the ill-posed problem in single-baseline InSAR by use of baseline diversity. Single-pass MB acquisition, with the advantages of high coherence and simple phase components, is more practical for 3D reconstruction than conventional repeat-pass MB acquisition. Using an asymptotic 3D phase unwrapping (PU), it is possible to get a reliable 3D reconstruction from very sparse acquisitions, but the interferograms should follow the optimal baseline design. However, current spaceborne SAR systems do not satisfy this principle, which induces more difficulties in practical application. In this article, a new concept of a Tandem Dual-Antenna SAR Interferometry (TDA-InSAR) system for single-pass reliable 3D surface mapping using the asymptotic 3D PU is proposed. Its optimal MB acquisition is analyzed to achieve both good relative height precision and flexible baseline design. Two indicators, i.e., expected relative height precision and successful phase unwrapping rate, are selected to optimize the system parameters and evaluate the performance of various baseline configurations. Additionally, simulation-based demonstrations are conducted to evaluate the performance in typical scenarios and investigate the impact of various error sources. The results indicate that the proposed TDA-InSAR is able to get the specified MB acquisition for the asymptotic 3D PU, which offers a feasible solution for single-pass 3D SAR imaging.
Fengming Hu, Feng Xu, Xiaolan Qiu, Chibiao Ding, Yaqiu Jin
2023-06-17T03:18:30Z
http://arxiv.org/abs/2306.10246v1
# Conceptual Study and Performance Analysis of Tandem Dual-Antenna Spaceborne SAR Interferometry ###### Abstract * A new TDA-InSAR is proposed which is tailored to get the specified optimal MB interferograms for the asymptotic 3D PU algorithm, achieving fast 3D reconstruction. * Performances of different baseline configurations and the impact of different error sources are systematically investigated. * Simulation-based performance evaluation is conducted, indicating that one example configuration of the proposed system can achieve a 3D reconstruction with a relative height precision of 0.3 m in built-up or man-made objects and of 1.7 m in vegetation canopies.

local statistical information for the optimization. The phase difference based algorithms, such as the maximum likelihood method Fornaro et al. (2005) and the Two-Stage Programming Approach (TSPA) Yu and Yang (2016), improve the noise robustness from the perspective of global optimization. Optimal baseline design plays an important part in the performance of all MB PU methods, especially for very sparse acquisitions. In Yu et al. (2019), a nonlinear mixed-integer programming (NIP) criterion provides a credible lower bound of the baseline, and the followed closed-form robust CRT in Zhihui et al. (2020) gives a meaningful upper bound by considering the ambiguity height. 
Both indicate that increasing the number of acquisitions cannot improve the height precision significantly if the longest baseline is limited. Using an information-theoretical accuracy assessment, research in Ferraiuolo et al. (2009) shows that the final height precision only depends on the longest baseline length of the MB interferograms. For a very sparse acquisition, such as three or four single-pass SAR images, an asymptotic 3D PU is developed to achieve a robust 3D reconstruction Hu et al. (2022). Following a 2D (space) + 1D (baseline) PU framework Thompson et al. (1999), it provides the optimal bounds for baseline design. The spaceborne single-pass SAR system of the Shuttle Radar Topography Mission (SRTM) successfully obtained the global digital elevation model between \(-56^{\circ}\) and \(60^{\circ}\) latitude, showing that the single-pass dual-antenna interferogram has good coherence Rabus et al. (2003). Since a longer perpendicular baseline leads to better height precision, SRTM uses two antennas with a 60-m cross-track separation and provides a height precision on the order of 10 m. Although the multi-antenna interferograms have the advantages of accurate baseline measurement and a unique phase component corresponding to the ground elevation Ding et al. (2019), the short baseline length of the single-pass dual-antenna interferogram limits its practical application. The TanDEM-X mission consists of two satellites, which obtain the single-pass dual-satellite interferogram using a bi-static mode. This bi-static SAR with a longer baseline configuration provides a height precision of 2 m but requires strict synchronization between the satellites Krieger et al. (2007). Additionally, the dual-satellite interferograms show good coherence in vegetation canopy areas, which is widely used in tropical-forest biomass estimation Torano Caicoya et al. (2016). The following LuTan-1 mission adopts a similar design to TanDEM-X but uses L-band to achieve better performance in biomass inversion Jin et al. (2020). During the global coverage of TanDEM-X, the height ambiguity was set to 45 m in the first global acquisition to avoid phase unwrapping errors and then to 35 m in the second global acquisition to improve the height precision Zink et al. (2014). An inappropriate height ambiguity will lead to phase unwrapping errors, especially in urban areas. To improve the reliability of the PU, an adjustment of the baseline is necessary, but this reduces the timeliness of the data acquisition. Additionally, multi-satellite SAR interferograms suffer from orbit error due to inaccuracies of the orbit parameters. Since the orbit inaccuracies are correlated in time, the orbit error often results in a spatially correlated phase trend in the interferometric phase, which will bias the estimated parameters. Since the illumination time for a LEO SAR is only a few seconds, this phase trend can be estimated jointly with other parameters of interest using the plane function Zhang et al. (2014) or the nonlinear model Liu et al. (2016); Hermann and Hanssen (2012). Note that the orbit error can be compensated only if the multi-satellite interferogram is successfully unwrapped. The future high-resolution wide-swath (HRWS) mission will cooperate with three MirrorSAR satellites to acquire MB interferograms in a single flight. This mission achieves better height precision and timeliness than the previous TanDEM-X mission by using baseline diversity Mittermayer et al. (2022). 
However, the orbit uncertainty of the small satellites will affect the optimal baseline design and induce additional orbit error. Such single-pass multi-satellite MB interferograms do not satisfy the requirements of the asymptotic 3D PU. Thus, a conceptual study of a new SAR system based on the optimal baseline design is necessary, which is the foundation of the work presented in this paper. \begin{table} \begin{tabular}{l l l l l} \hline \hline Acquisition mode & Repeat-pass & Single-pass multi-antenna & Single-pass multi-satellite & TDA-InSAR \\ \hline Satellite formation & Single-satellite & Single-satellite & Multi-satellite & Tandem-satellite \\ Antenna configuration & Single-antenna & Multi-antenna & Single-antenna & Dual-antenna \\ Maximal baseline length & Long & Short & Long & Long \\ Optimal baseline design & No & Yes & No & Yes \\ Acquisition period & Long & Short & Short & Short \\ Coherence & Low & High & High & High \\ Working mode & mono-static & bi-static & bi-static & bi-static/mono-static \\ Error sources & atmospheric delay, orbit error, deformation, system noise & system noise & orbit error, system noise & orbit error, system noise, synchronization error \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of MB interferograms obtained by different acquisition modes. To overcome the limitations of these conventional InSAR modes, a new spaceborne InSAR configuration named Tandem Dual-Antenna SAR Interferometry (TDA-InSAR) is proposed, aiming to acquire the specified MB interferograms for the asymptotic 3D PU Hu et al. (2022). The comparison of MB interferograms obtained by different acquisition modes is shown in Table 1. It shows that the proposed TDA-InSAR combines the advantages of both multi-antenna and multi-satellite interferograms and thus provides the best baseline design. The paper is organized as follows. We introduce the concept of the TDA-InSAR and briefly review the main approach of the asymptotic 3D PU in Section II. Investigation of the optimal baseline design using different baseline configurations is presented in Section III, followed by the simulation-based performance evaluation and analysis of various error sources in Section IV. The conclusions are presented in Section V. ## 2 Concept of TDA-InSAR ### Basic Concept A 2D radar image often suffers from severe geometric distortion, increasing the difficulty of target recognition. The key issue is estimating the height of all scatterers, which can be well solved by using the MB interferograms. The phase components of the repeat-pass MB interferograms include flat earth effect, height, atmospheric delay, orbit error, deformation and system noise, which can be written as follows \[\varphi=\text{wrap}\{\varphi_{\text{flat}}+\varphi_{\text{h}}+\varphi_{\text{atmos}}+\varphi_{\text{orb}}+\varphi_{\text{def}}+\varphi_{\text{noise}}+2\pi n\}, \tag{1}\] where \(n\in\mathbb{Z}\) denotes the integer phase ambiguity. In order to get a height estimation with high precision, it is necessary to model the different phase components appropriately, because the atmospheric signal, orbital phase trend and deformation show a strong correlation in space, leading to a bias in the estimated height. The spatio-temporal characteristics of the error sources are used to separate the different phase contributions. First, the deformation is assumed to be correlated in time and a prior deformation model is used to unwrap the MB interferograms. 
Then, the spatial trend due to the orbital inaccuracies, together with a trend in the atmospheric signal, can be estimated using the unwrapped phase. Supposing the acquisitions are taken a few hours apart, the atmospheric delay is uncorrelated in time but the unmodelled deformation is assumed to be temporally correlated. A temporal low-pass filter can be applied to isolate the atmospheric delay from the unmodelled deformation. In this way, it is possible to remove most of the atmospheric delay and orbital phase trend in the original phase. Note that a number of acquisitions is required to improve the reliability. The conventional single-beam SAR lacks timeliness and is thus not suitable for applications with near-real-time requirements, such as target recognition. Considering the single-pass MB interferograms, the atmospheric delay part in Eq. (1) can be neglected. Previous research shows that the asymptotic 3D PU enables a reliable 3D reconstruction with very sparse acquisitions, but the single-pass MB interferograms should meet the following four conditions to achieve good performance. 1) There is a real or pseudo short baseline interferogram (SBI), satisfying the PC assumption. 2) The phase components of the SBI only include height and system noise. 3) The ratio of the baseline length between the SBI and the long baseline interferogram (LBI) satisfies the SU criteria. If not, a medium baseline interferogram (MBI) should be used to improve the probability of successful unwrapping. 4) The successfully unwrapped LBI can achieve the expected height precision. To meet the requirements of the optimal baseline design and decrease the impact of the orbit error, the proposed TDA-InSAR consists of two dual-antenna satellites, which acquire both dual-antenna and dual-satellite interferograms in a single flight. The single-pass dual-antenna interferogram has the advantages of short baseline length, high coherence and independence of orbit error, and can be a good SBI. On the contrary, the single-pass dual-satellite interferogram enables a longer baseline and can be a good LBI or MBI. Note that the dual-satellite interferogram can be obtained in either bi-static or mono-static mode. The bi-static interferogram will contain the orbit error, while the mono-static interferogram will include both orbit error and atmospheric delay. Fig. 1 is the simplified sketch of the TDA-InSAR. The main parameters of this system are the antenna baseline \(L_{1}\) and the satellite baseline \(L_{2}\). The antenna baseline should be as short as possible to decrease the difficulties in the hardware design. Thus, the optimal satellite baseline is expected to lie in a flexible interval to suppress the uncertainty of the orbit error. Figure 1: TDA-InSAR imaging geometry. This system acquires three independent interferograms in a single flight. Such a system can work in either bi-static or mono-static mode. In the case of the bi-static mode, the bi-static measurement will include additional phase errors in azimuth slow time Krieger et al. (2007) because of the relative frequency deviation and phase noise between different radar instruments. Such phase error can be eliminated using either an echo-domain He et al. (2012) or an image-domain algorithm Krieger et al. (2007), which is not investigated in this work. In the following section, we review the asymptotic 3D PU algorithm and introduce the optimal baseline design for the TDA-InSAR.
A modified 3D reconstruction approach with respect to the orbit error compensation is developed using the unwrapped phase. ### Asymptotic 3D PU for TDA-InSAR In a single flight, the proposed TDA-InSAR acquires three independent interferograms including both dual-antenna and dual-satellite interferograms. The dual-antenna interferogram is used as a SBI in the asymptotic 3D PU, providing the initial height estimation. The dual-satellite interferogram with the longest baseline is a LBI. Furthermore, other dual-satellite interferograms are used as the MBI, improving the reliability of the ambiguity estimation. In the following description, the subscripts S, M and L denote the parameters of SBI, MBI and LBI, respectively. Using the conventional InSAR process, the flat earth effect can be well determined using the parameters of the SAR system. After removing this phase, 3D PU can be applied to the flattened phase. The spatial PU is the integration of the phase gradient in the spatial domain. Its reliability depends on the selection of the phase difference. However, whether an interferogram satisfies the PC assumption can not be evaluated using the single baseline interferogram. Alternatively, using the smoothing criteria with a pair of interferogram Hu et al. (2022b), the reliability of the spatial PU can be improved. Since the dual-antenna interferogram always satisfies the PC assumption, the spatial PU can be conducted directly and the initial height estimation can be obtained as follows Hanssen (2001a) \[\hat{h}_{\mathrm{S}}=\frac{\lambda}{4\pi}\frac{R\sin\theta}{B_{ \perp,\mathrm{S}}}\phi_{\mathrm{S}}. \tag{2}\] where \(\phi_{\mathrm{S}}\) denotes the unwrapped phase of SBI. \(B_{\perp}\) denotes the perpendicular baseline. \(R\) is the slant range. \(\theta\) is the incident angle and \(\lambda\) is the radar wavelength. Using the initial height, the pseudo unwrapped phase of MBI and the corresponding phase ambiguity can be estimated as follows \[\phi_{\mathrm{pseudo,M}}=\frac{4\pi}{\lambda}\frac{B_{\perp, \mathrm{M}}}{R\sin\theta}\hat{h}_{\mathrm{S}}=\frac{B_{\perp,\mathrm{M}}}{B_{ \perp,\mathrm{S}}}\phi_{\mathrm{S}}, \tag{3}\] \[\hat{h}_{\mathrm{M,0}}=\mathrm{round}\left\{\frac{\phi_{\mathrm{ pseudo,M}}-\mathrm{wrap}\{\phi_{\mathrm{M}}-\phi_{\mathrm{M,ref}}\}}{2\pi}\right\}. \tag{4}\] where \(\varphi_{\mathrm{M,ref}}\) the wrapped phase of MBI on the reference point. Whether a MBI can be successfully unwrapped depends on the uncertainty of the estimated phase ambiguity, denoted as the successful unwrapping (SU) criteria, which is defined as follows \[\mid\hat{h}_{\mathrm{M}}-n_{\mathrm{M,true}}\mid<\frac{1}{2}, \tag{5}\] If \(\hat{h}_{\mathrm{M,0}}\) satisfies the SU criteria, the phase residual of the MBI can be calculated as \[\varphi_{\mathrm{res,M}}=\mathrm{wrap}\left\{\varphi_{\mathrm{M}}-2\pi\hat{h }_{M,0}\right\}. \tag{6}\] Using the same reference point during the spatial PU of the MBI, the phase residual of the MBI can be unwrapped. The final unwrapped phase of the MBI can be written as follows \[\phi_{\mathrm{M}}=\phi_{\mathrm{res,M}}+2\pi\cdot\mathrm{round} \left\{\frac{\frac{B_{\perp,\mathrm{M}}}{B_{\perp,\mathrm{S}}}\phi_{\mathrm{S} }-\mathrm{wrap}\{\phi_{\mathrm{M}}-\phi_{\mathrm{M,ref}}\}}{2\pi}\right\}. \tag{7}\] The same approach will be conducted on LBI. 
If the phase ambiguity for the LBI satisfies the SU criteria, the final unwrapped phase of the LBI can be calculated as follows \[\phi_{\mathrm{L}}=\phi_{\mathrm{res,L}}+2\pi\cdot\mathrm{round}\left\{\frac{\frac{B_{\perp,\mathrm{L}}}{B_{\perp,\mathrm{M}}}\phi_{\mathrm{M}}-\mathrm{wrap}\{\varphi_{\mathrm{L}}-\varphi_{\mathrm{L,ref}}\}}{2\pi}\right\}. \tag{8}\] With the unwrapped phase \(\phi_{\mathrm{L}}\), the final height can be estimated as follows Hanssen (2001a) \[\hat{h}_{\mathrm{L}}=\frac{\lambda}{4\pi}\frac{R\sin\theta}{B_{\perp,\mathrm{L}}}\phi_{\mathrm{L}}. \tag{9}\] ### Optimal Baseline Design Baseline length has a significant impact on the result of 3D PU with very sparse acquisitions. According to Eq. (7), the performance of the asymptotic 3D PU depends on two key indicators, i.e., the baseline length and the phase variance of the MB interferograms. Given the specified coherence, it is possible to get the optimal baseline length using the SU criteria. However, the SU criteria described in Eq. (5) cannot be directly used in the practical process due to the unknown value of the phase ambiguity. Nevertheless, it shows that the phase ambiguity can be estimated unambiguously if the phase bias induced by the noise is smaller than \(\pi\). With the approximation that the interferometric phase without an ambiguity follows a Gaussian distribution, the expected phase variance should be \((\pi/u_{\alpha})^{2}\) with a level of significance \(\alpha\). Supposing that the baseline lengths of the four interferograms obtained by the TDA-InSAR are \(B_{\perp,1}\), \(B_{\perp,2}\), \(B_{\perp,3}\), \(B_{\perp,4}\) in ascending order, the SU criteria can be rewritten as follows \[\max_{i=2:4}\left\{\frac{B_{\perp,i}^{2}}{B_{\perp,i-1}^{2}}\sigma_{\phi_{i-1}}^{2}+\sigma_{\phi_{i}}^{2}\right\}<\left(\frac{\pi}{u_{\alpha}}\right)^{2}. \tag{10}\] The SU criteria in Eq. (10) can be used to optimize the parameters of the TDA-InSAR. In addition to the baseline length, the input parameters of this system include the coherence (\(\gamma\)), the expected relative height precision (\(\sigma_{h}^{0}\)) and the maximal height difference (\(\Delta h_{\text{max}}\)). Then the performance of the MB interferograms can be evaluated using three decision variables, i.e., the success rate of phase unwrapping (SR), the relative height precision (\(\sigma_{h}\)) and the height ambiguity (\(h_{\text{amb}}\)). The success rate of phase unwrapping is the foundation of the 3D reconstruction and should be as high as possible. If all interferograms can be correctly unwrapped, the final relative height precision only depends on the longest baseline length, so it is necessary to guarantee the longest baseline length. Additionally, a larger height ambiguity will increase the flexibility of the system, which enables a larger height difference within the experimental area. With the same success rate of phase unwrapping, we expect as high a height ambiguity as possible. From Eq. (10), the baseline length of the SBI has a lower bound for a given LBI. As long as the PC assumption is satisfied, the baseline length of the SBI should be as long as possible to achieve good phase unwrapping performance. However, a longer SBI baseline would decrease the height ambiguity, reducing the flexibility of the system. Thus, a trade-off between the height ambiguity and the success rate of phase unwrapping should be made.
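Before the design problem is stated formally below, the following small sketch illustrates the ambiguity transfer of Eqs. (3)-(8) and the Gaussian success-rate check behind Eq. (10). It is simplified to a single pixel with scalar geometry (the spatial PU of the residual phase is omitted); the numeric inputs are placeholders, not values from the paper.

```python
import math

def transfer_unwrap(phi_short_unw, phi_long_wrapped, b_short, b_long):
    """Single-pixel version of Eqs. (3)-(8): unwrap a longer-baseline phase
    from an already unwrapped shorter-baseline phase (reference phase removed)."""
    pseudo = (b_long / b_short) * phi_short_unw                   # Eq. (3)
    n_hat = round((pseudo - phi_long_wrapped) / (2 * math.pi))    # Eq. (4)
    return phi_long_wrapped + 2 * math.pi * n_hat                 # Eqs. (7)/(8), residual spatial PU omitted

def success_rate(baselines, sigma_phi):
    """Gaussian approximation of the SU criteria (Eq. (10)): probability that the
    noise-induced error of the worst baseline ratio stays below pi."""
    worst = max(
        math.sqrt((baselines[i] / baselines[i - 1]) ** 2 * sigma_phi[i - 1] ** 2
                  + sigma_phi[i] ** 2)
        for i in range(1, len(baselines))
    )
    # standard normal CDF evaluated at pi / sigma_worst
    return 0.5 * (1.0 + math.erf(math.pi / worst / math.sqrt(2.0)))

# toy numbers: 15 m SBI, 150 m MBI, 300 m LBI, phase std of 0.1 rad each
print(success_rate([15.0, 150.0, 300.0], [0.1, 0.1, 0.1]))
print(transfer_unwrap(phi_short_unw=6.0, phi_long_wrapped=1.2, b_short=15.0, b_long=150.0))
```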
Along this direction, the optimal baseline design for the TDA-InSAR is defined as follows \[\text{arg}\,\max\{\,SR,h_{\text{amb}}\}\text{ and }\text{arg}\,\min\{\sigma_{h}\} \tag{11}\] \[\text{s.t. }SR=u^{-1}\left\{\max_{i=2:4}\left(\pi/\sqrt{\left(\frac{B_{i}}{B_{i-1}}\right)^{2}\sigma_{\phi_{i-1}}^{2}+\sigma_{\phi_{i}}^{2}}\right)\right\}>1-\alpha\] \[h_{\text{amb}}=\frac{\lambda R\sin\theta}{2B_{1}};\;\sigma_{h}=\frac{\lambda R\sin\theta}{4\pi B_{4}}\sigma_{\phi};\;\sigma_{\phi}^{2}=\frac{1-\gamma^{2}}{2\gamma^{2}}\] \[B_{1}=\min\big{\{}B_{\perp,1},\,B_{\perp,2}-iB_{\perp,1}|_{i\in Z},\,\ldots,\,B_{\perp,4}-iB_{\perp,3}|_{i\in Z}\big{\}}\] \[B_{2}=\max\big{\{}B_{\perp,1},\min\big{\{}B_{\perp,2},B_{\perp,2}-iB_{\perp,1}|_{i\in Z},\,\ldots\,\big{\}}\big{\}}\] \[B_{3}=\max\big{\{}B_{\perp,2},\min\big{\{}B_{\perp,3},B_{\perp,2}-iB_{\perp,1}|_{i\in Z},\,\ldots\,\big{\}}\big{\}}\] \[B_{4}=B_{\perp,4};\;B_{1}<\frac{\lambda R\sin\theta}{2\Delta h_{\text{max}}},\;\sigma_{h}<\sigma_{h}^{0}\] where \(B_{1}\sim B_{4}\) denote the effective baseline lengths. In this optimization, the relationship between the coherence and the phase variance is approximated for point scatterers with a coherence larger than 0.9 Hanssen (2001b). Note that the model is also simplified by assuming that the coherences corresponding to LBI, MBI and SBI are the same, which is suitable for point scatterers. For distributed scatterers, other factors, such as spatial and volume decorrelation, should be considered during the optimization, which is not investigated in this paper. ### Height Inversion with the Error Compensation The dual-satellite interferogram enables a long baseline but suffers from significant orbit error, which will bias the final height estimation. Fortunately, with the help of the accurate initialization obtained from the dual-antenna interferogram, the dual-satellite interferogram can be successfully unwrapped, since the phase change induced by the orbit error does not destroy the PC assumption. Therefore, the orbit error can be well compensated during the height inversion. Here the nonlinear baseline model is used to model the orbit error. Generally, the orbit errors in both master and slave images can be expressed as a baseline error. Fig. 2 shows the geometry of the baseline error. The additional phase change induced by the orbit error can be defined as \[\phi_{\text{orb}}=\frac{4\pi}{\lambda}|\vec{r}_{S}^{\,\prime}-\vec{r}_{S}|, \tag{12}\] where \(\vec{r}_{S}^{\,\prime}\) and \(\vec{r}_{S}\) denote the actual and error-free slave distance vectors along the line of sight. Note that \(\vec{r}_{S}^{\,\prime}\) also includes the orbit error in the master distance vector \(\vec{r}_{M}\). In the practical process, the baseline vector \(\vec{B}\) in a TCN (track, cross, normal) representation is often used, denoted as (\(B_{t}\), \(B_{c}\), \(B_{n}\)). Then the orbit error as a function of the baseline vector can be written as \[|\vec{r}_{S}^{\,\prime}-\vec{r}_{S}|=\frac{\partial\vec{r}_{S}}{\partial\vec{B}}=\frac{\partial\vec{r}_{S}}{\partial B_{t}}\vec{e}_{t}+\frac{\partial\vec{r}_{S}}{\partial B_{c}}\vec{e}_{c}+\frac{\partial\vec{r}_{S}}{\partial B_{n}}\vec{e}_{n}, \tag{13}\] where the partial derivative with respect to the baseline component can be expressed as follows \[\frac{\partial\vec{r}_{S}}{\partial B_{t}}=\frac{\left|B_{t}\right|-\vec{r}_{M}\cdot\vec{e}_{t}}{\sqrt{\left|\vec{r}_{M}\right|^{2}+\left|\vec{B}\right|^{2}-2\cdot\vec{r}_{M}\cdot\vec{B}}}. \tag{14}\] Figure 2: Geometry of the InSAR baseline error.
The relationship between the orbit error phase and the baseline error can be written as \[E\{\phi_{\text{orb}}\}=\frac{4\pi}{\lambda}\left(\frac{\partial\vec{r}_{S}}{\partial B_{t}}\delta B_{t}+\frac{\partial\vec{r}_{S}}{\partial B_{c}}\delta B_{c}+\frac{\partial\vec{r}_{S}}{\partial B_{n}}\delta B_{n}\right). \tag{15}\] Since the orbit error may change temporally during the radar image focusing, a first-order term \(\delta\dot{\vec{B}}\) should be added to model the fringe residuals. Additionally, for the small squint angles of spaceborne SAR, the interferometric phase is not sensitive to the orbit errors along the track direction, so the orbit error components along the track direction are neglected. The orbit error phase can then be expressed as follows \[E\{\phi_{\text{orb}}\}=\frac{4\pi}{\lambda}\left(\frac{\partial\vec{r}_{S}}{\partial B_{c}}(\delta B_{c}+\delta\dot{B}_{c}t)+\frac{\partial\vec{r}_{S}}{\partial B_{n}}(\delta B_{n}+\delta\dot{B}_{n}t)\right). \tag{16}\] If the TDA-InSAR works in a bi-static mode, the orbit errors of both MBI and LBI have the same spatial variation but different values. Additionally, the atmospheric delay can be totally neglected due to the strict synchronization. With the unwrapped phases \(\phi_{\text{M}}\) and \(\phi_{\text{L}}\), the parameters related to the orbit error can be estimated jointly with the height. Supposing that there are \(m\) coherent scatterers, the mathematical model can be defined as follows \[\begin{bmatrix}\boldsymbol{\phi}_{\mathrm{M}}\\ \boldsymbol{\phi}_{\mathrm{L}}\end{bmatrix}=\frac{4\pi}{\lambda}\left(\begin{bmatrix}\mathbf{A}_{\mathrm{M}}\\ \mathbf{A}_{\mathrm{L}}\end{bmatrix}\mathbf{h}+\begin{bmatrix}\mathbf{B}_{\mathrm{orb}}^{\mathrm{M}}\\ \mathbf{B}_{\mathrm{orb}}^{\mathrm{L}}\end{bmatrix}\begin{bmatrix}\delta B_{c}\\ \delta\dot{B}_{c}\\ \delta B_{n}\\ \delta\dot{B}_{n}\end{bmatrix}\right),\quad\mathbf{A}_{\mathrm{X}}=\mathrm{diag}\left(\frac{B_{\perp,i}^{\mathrm{X}}}{R_{i}\sin\theta_{i}}\right)_{i=1,\ldots,m},\ \mathrm{X}\in\{\mathrm{M},\mathrm{L}\}, \tag{17}\] where \(\boldsymbol{\phi}_{\mathrm{M}}=[\phi_{\mathrm{M},1},\ldots,\phi_{\mathrm{M},m}]^{\mathrm{T}}\), \(\boldsymbol{\phi}_{\mathrm{L}}=[\phi_{\mathrm{L},1},\ldots,\phi_{\mathrm{L},m}]^{\mathrm{T}}\), \(\mathbf{h}=[h_{1},\ldots,h_{m}]^{\mathrm{T}}\), and the orbit design matrix per interferogram is \[\mathbf{B}_{\mathrm{orb}}=\begin{bmatrix}\frac{\partial\vec{r}_{S}}{\partial B_{c,1}}&\frac{\partial\vec{r}_{S}}{\partial B_{c,1}}t_{1}&\frac{\partial\vec{r}_{S}}{\partial B_{n,1}}&\frac{\partial\vec{r}_{S}}{\partial B_{n,1}}t_{1}\\ \vdots&\vdots&\vdots&\vdots\\ \frac{\partial\vec{r}_{S}}{\partial B_{c,m}}&\frac{\partial\vec{r}_{S}}{\partial B_{c,m}}t_{m}&\frac{\partial\vec{r}_{S}}{\partial B_{n,m}}&\frac{\partial\vec{r}_{S}}{\partial B_{n,m}}t_{m}\end{bmatrix}. \tag{18}\] The expression in Eq. (17) represents a linear system of \(2m\) equations in \(m+4\) unknowns, so a unique solution can be obtained for this determined problem. If the TDA-InSAR works in a mono-static mode, the orbit error of the mono-static dual-satellite interferogram will be twice that of the bi-static one. If the MBI is a bi-static interferogram and the LBI is a mono-static interferogram, the coefficient \(\mathbf{B}_{\text{orb}}^{\text{M}}\) equals \(\mathbf{B}_{\text{orb}}^{\text{L}}/2\). Research in Hu et al. (2022) shows that the atmospheric delay, especially the tropospheric delay, decorrelates to half of its value within 3 minutes.
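As a concrete illustration of the joint estimation in Eq. (17) for the bi-static case, the sketch below stacks the unwrapped MBI and LBI phases and solves for the \(m\) heights plus the four baseline-error parameters by least squares. All inputs are synthetic placeholders and the geometry factors are simplified; this is not the processing chain used in the paper.

```python
import numpy as np

def joint_height_orbit_ls(phi_m, phi_l, bperp_m, bperp_l, r, theta,
                          b_orb_m, b_orb_l, wavelength):
    """Solve Eq. (17): stack 2m unwrapped phases and estimate m heights plus
    [dBc, dBc_dot, dBn, dBn_dot] with ordinary least squares."""
    m = phi_m.size
    k = 4 * np.pi / wavelength
    a_m = np.diag(k * bperp_m / (r * np.sin(theta)))   # phase-to-height design, MBI
    a_l = np.diag(k * bperp_l / (r * np.sin(theta)))   # phase-to-height design, LBI
    design = np.block([[a_m, k * b_orb_m],
                       [a_l, k * b_orb_l]])            # shape (2m, m + 4)
    obs = np.concatenate([phi_m, phi_l])
    sol, *_ = np.linalg.lstsq(design, obs, rcond=None)
    return sol[:m], sol[m:]                            # heights, baseline-error terms

# toy dimensions only: m = 5 pixels, X-band geometry roughly matching Table 2
m = 5
rng = np.random.default_rng(1)
heights_est, dB_est = joint_height_orbit_ls(
    phi_m=rng.normal(size=m), phi_l=rng.normal(size=m),
    bperp_m=np.full(m, 150.0), bperp_l=np.full(m, 300.0),
    r=np.full(m, 608015.0), theta=np.full(m, np.deg2rad(30.0)),
    b_orb_m=rng.normal(size=(m, 4)), b_orb_l=rng.normal(size=(m, 4)),
    wavelength=0.031)
print(heights_est.shape, dB_est.shape)
```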
Given this fast temporal decorrelation, if the TDA-InSAR works in a mono-static mode, both orbit error and atmospheric delay should be considered during the height estimation. The atmospheric delay should be calculated per pixel due to its high spatial correlation. The mathematical model with respect to the atmospheric delay correction extends Eq. (17) by also stacking the SBI phases and adding a per-pixel atmospheric phase term for the dual-satellite interferograms: \[\begin{bmatrix}\boldsymbol{\phi}_{\mathrm{S}}\\ \boldsymbol{\phi}_{\mathrm{M}}\\ \boldsymbol{\phi}_{\mathrm{L}}\end{bmatrix}=\frac{4\pi}{\lambda}\left(\begin{bmatrix}\mathbf{A}_{\mathrm{S}}\\ \mathbf{A}_{\mathrm{M}}\\ \mathbf{A}_{\mathrm{L}}\end{bmatrix}\mathbf{h}+\begin{bmatrix}\mathbf{0}\\ \mathbf{B}_{\mathrm{orb}}^{\mathrm{M}}\\ \mathbf{B}_{\mathrm{orb}}^{\mathrm{L}}\end{bmatrix}\begin{bmatrix}\delta B_{c}\\ \delta\dot{B}_{c}\\ \delta B_{n}\\ \delta\dot{B}_{n}\end{bmatrix}\right)+\begin{bmatrix}\mathbf{0}\\ \boldsymbol{\varphi}_{\mathrm{atmos,M}}\\ \boldsymbol{\varphi}_{\mathrm{atmos,L}}\end{bmatrix}, \tag{19}\] where \(\boldsymbol{\varphi}_{\mathrm{atmos,M}}\) and \(\boldsymbol{\varphi}_{\mathrm{atmos,L}}\) denote the per-pixel atmospheric phases of the dual-satellite interferograms, while the dual-antenna SBI is free of orbit and atmospheric contributions.

## 3 Optimal Baseline Design

For the bi-static acquisitions, phase synchronization between the two satellites has to be implemented to guarantee the coherence between the transmitter and receiver. Additional antennas covering the full solid angle are used for a mutual exchange of the phase synchronization signal between the two satellites. Details can be found in Jin et al. (2020); Liang et al. (2020). In the practical application, the baseline configurations 1 and 3 need to transmit the synchronization signal in a single direction, while the baseline configuration 2 requires a bidirectional synchronization. Assuming that the antenna baselines of the two satellites are the same, the equivalent baselines of the different baseline configurations can be obtained by setting the acquisition T1-R1 as the master image. The equivalent baselines of the baseline configuration 1 are as follows \[B_{\perp,1}=L_{1}/2;\,B_{\perp,2}=L_{2}/2;\,B_{\perp,3}=L_{2}+L_{1}, \tag{20}\] those of the baseline configuration 2 are \[B_{\perp,1}=L_{2}/2;\,B_{\perp,2}=L_{2}/2+L_{1};\,B_{\perp,3}=L_{2}+L_{1}, \tag{21}\] those of the baseline configuration 3 are \[B_{\perp,1}=L_{1}+L_{2}/2;\,B_{\perp,2}=L_{2}+L_{1}/2;\,B_{\perp,3}=L_{2}+L_{1}, \tag{22}\] and those of the baseline configuration 4 are \[B_{\perp,1}=L_{1}/2;\,B_{\perp,2}=L_{2}+L_{1}/2;\,B_{\perp,3}=L_{2}+L_{1}. \tag{23}\] Given a specified coherence, the SU criteria gives an upper bound of the baseline ratio. In order to unwrap an interferogram with as long a baseline as possible, another interferogram with a medium baseline is required. Since the baseline length \(B_{\perp,2}\) is almost the same as \(B_{\perp,3}\) in Eq. (23), the performance of the mono-static mode is theoretically poorer than that of the bi-static mode due to the lack of a medium baseline interferogram. Considering the baseline configurations 2 and 3, the BLC approach should be applied to get a pseudo short baseline interferogram. But the combination integer of the baseline configuration 3 is larger than that of the baseline configuration 2, leading to a noisy initial height estimation. Theoretically, the baseline configuration 2 is better than the baseline configuration 3. Note that MB interferograms obtained by all baseline configurations have a large height ambiguity due to the ultra-short baseline interferogram. Further simulation-based performance analysis of the different baseline configurations will be conducted using the optimal baseline design in Eq. (11). ### Performance analysis The simulated parameters are listed in Table 2. Setting the coherence to 0.99, the relationship between the long baseline length and the relative height precision is shown in Fig. 4.
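To give a feel for the curve in Fig. 4, the short script below evaluates the relative height precision formula from Eq. (11) with the Table 2 parameters and a coherence of 0.99; the numbers are indicative only and do not reproduce the full simulation.

```python
import numpy as np

wavelength = 0.031            # 9.6 GHz X-band (Table 2), in meters
r = 608015.0                  # near range [m]
theta = np.deg2rad(30.0)      # incident angle
gamma = 0.99                  # coherence

sigma_phi = np.sqrt((1 - gamma**2) / (2 * gamma**2))       # phase std, Eq. (11)
b_long = np.linspace(50.0, 500.0, 10)                      # candidate long baselines [m]
sigma_h = wavelength * r * np.sin(theta) / (4 * np.pi * b_long) * sigma_phi

for b, s in zip(b_long, sigma_h):
    print(f"B_long = {b:6.1f} m  ->  relative height precision ~ {s:.2f} m")
```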
\begin{table} \begin{tabular}{l l} \hline \hline Parameters & Values \\ \hline Frequency (GHz) & 9.6 \\ Resolution (range \(\times\) azimuth, \(m\)) & \(0.93\times 2.00\) \\ Incident angle (degree) & 30 \\ Near range (\(m\)) & 608015 \\ Maximal antenna baseline (\(m\)) & 20 \\ Maximal satellite baseline (\(m\)) & 500 \\ Expected height precision (\(m\)) & 0.5 \\ \hline \hline \end{tabular} \end{table} Table 2: Main parameters for the simulation

Figure 4: Relative height precision as a function of long baseline length.

Figure 3: TDA-InSAR with different baseline configurations. (a), (b) and (c) are bi-static modes; (d) is the mono-static mode.

If all interferograms are unwrapped successfully, the relative height precision will not increase significantly after the baseline length reaches 400 m. Then a comparative study is conducted to evaluate the performance of 3D reconstruction using different baseline configurations. In the following simulation, the antenna baseline ranges from 0.5 to 20 m with a step of 0.1 m, the satellite baseline ranges from 10 to 400 m with a step of 2 m, and the expected SR is set to 0.98. For each set of MB interferograms, the simulation is repeated 500 times and the corresponding relative height precision is obtained. Fig. 5 shows the relative height precision and the success rate of phase unwrapping using different baseline configurations. Fig. 5 (d) shows that the longest baseline is 100 m for the MB interferograms obtained by the mono-static mode. On the contrary, the bi-static measurements between different satellites provide a transition interferogram with a medium baseline, which enables the successful phase unwrapping of the LBI with a baseline length of 200 m, as shown in Figs. 5 (a), (b) and (c). Additionally, the MB interferograms obtained by baseline configuration 2 allow the most flexible baseline design, while those obtained by baseline configuration 4 impose a very strict one. For example, assuming that the antenna baseline is 10 m, the possible satellite baseline of baseline configuration 2 ranges from 50 to 200 m with an expected relative height precision of 1 m, while baseline configuration 4 only succeeds with a satellite baseline of 60 m. Due to the instability of the platform, the baseline of the dual-satellite interferogram may be inaccurate, so a larger range of feasible baselines increases the success rate of phase unwrapping in practical applications. Furthermore, given the same coherence, the minimal antenna baseline of baseline configuration 2 is 3.8 m while that of baseline configuration 4 is 10 m. A shorter antenna would decrease the complexity of the system design. Since the success rate of phase unwrapping varies with the coherence, the maximal satellite baseline, the minimal antenna baseline and the relative height precision are obtained separately as functions of coherence, as shown in Fig. 6. According to Figs. 6 (a) and (c), the maximal satellite baseline of baseline configuration 2 is longer than those of the other baseline configurations, leading to a better relative height precision. Moreover, a shorter antenna can be used with baseline configuration 2 while achieving the same performance.

Figure 5: Performance of 3D reconstruction using different baseline configurations. First row: relative height precision; second row: success rate of phase unwrapping.

Figure 6: Performance of 3D reconstruction using different baseline configurations.
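The configuration comparison above is driven by the equivalent baselines of Eqs. (20)–(23). A minimal helper that tabulates them for given lengths \(L_{1}\) and \(L_{2}\) is sketched below; the function name and interface are illustrative only.

```python
def equivalent_baselines(L1: float, L2: float) -> dict:
    """Equivalent perpendicular baselines (B_perp,1, B_perp,2, B_perp,3) of the
    four TDA-InSAR baseline configurations, tabulated from Eqs. (20)-(23)."""
    return {
        1: (L1 / 2,      L2 / 2,      L2 + L1),
        2: (L2 / 2,      L2 / 2 + L1, L2 + L1),
        3: (L1 + L2 / 2, L2 + L1 / 2, L2 + L1),
        4: (L1 / 2,      L2 + L1 / 2, L2 + L1),
    }

# Example: L1 = 10 m, L2 = 100 m.
for config, baselines in equivalent_baselines(10.0, 100.0).items():
    print(f"configuration {config}: B_perp = {baselines}")
```

Tabulating the three equivalent baselines per configuration makes the lack of a medium baseline in configuration 4 (where \(B_{\perp,2}\approx B_{\perp,3}\) for \(L_{1}\ll L_{2}\)) immediately visible.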
Furthermore, if this system only contains a dual-antenna and a single-antenna satellite, i.e., two interferograms in a single flight, it is also possible to achieve a 3D reconstruction using the simplified system. If baseline configuration 2 is used, the three channel images are obtained by T1-R1, T2-R2, T2-R4. A performance comparison between the original TDA-InSAR and the simplified one is conducted, as shown in Fig. 7. Fig. 7 shows that the performance of the simplified TDA-InSAR is only half that of the original one. For example, if the coherence is 0.98, the maximal satellite baseline of the TDA-InSAR is 300 m while that of the simplified one is 150 m. Similar conclusions can be found in the analysis of both the minimal antenna baseline and the relative height precision. Thus, the additional antenna significantly reduces the expected coherence, leading to fewer difficulties in the practical implementation.

### Orbit configuration and Formation

An essential design of the TDA-InSAR is the simultaneous acquisition of the two satellites, giving a long-baseline dual-satellite interferogram. This operational mission requires a coordinated formation of two satellites flying in similar orbits. In practical applications, the helix orbit adopted by TanDEM Krieger et al. (2007); Alberto et al. (2003) has been proven to be a reliable formation, which can be used in the TDA-InSAR design. A simple sketch of the helix formation is shown in Fig. 8. The horizontal displacement depends on the ascending node, while the vertical displacement is related to the difference in eccentricity. The relative displacement between the two satellites varies with the satellite altitude. Fig. 9 shows that the perpendicular baseline varies with the latitude, leading to different height performance. Additionally, this simulation also indicates that the perpendicular baseline of the ascending data in northern latitudes is longer than that in southern latitudes, while the performance of the descending data is better in southern latitudes. According to the performance analysis in Fig. 5 (c), the optimal baseline of configuration 2 ranges from 50 m to 300 m with an antenna baseline of 15 m. The orbit formation is able to give an optimal baseline ranging from 150 m to 300 m, which satisfies the requirement of the optimal baseline design.

Figure 7: Performance comparison between simplified and original TDA-InSAR.

Figure 8: Sketch of the Helix satellite formation.

## 4 Simulation-Based Performance Evaluation

In this section, a simulation-based analysis is conducted to evaluate the performance of the proposed TDA-InSAR. Two typical scenarios, i.e., built-up objects and vegetation canopies, are tested, followed by an analysis of the main error sources.

### Performance of 3D Reconstruction

#### 4.1.1 Built-up objects

Based on a real elevation model and simulated ground targets, as shown in Fig. 10, the MB interferograms are obtained using a multi-dimension coherent scattering model Xue et al. (2020). The real elevation model covers a range of 400 m \(\times\) 500 m with a maximal height difference of 100 m, and 12 ground targets are added. Using a radar wavelength of 31 mm, the corresponding interferograms with the perpendicular baselines of 15, 150 and 300 m are obtained, respectively, as shown in Fig. 11. It shows that the phase change in the shortest baseline interferogram is very slow, which is good for spatial PU but poor for height inversion.
The long baseline interferogram cannot be unwrapped using the spatial PU due to the rapid phase change induced by the ground targets. According to the performance analysis in Fig. 5, the optimal baseline design of the tandem dual-antenna SAR varies with the baseline configuration. Assuming a coherence of 0.99 and an antenna baseline of 15 m, the optimal satellite baselines for the different TDA-InSAR baseline configurations are 200, 300, 150 and 100 m, respectively. Then the 3D reconstructions using asymptotic 3D PU are obtained, as shown in Fig. 12. The corresponding precision of the estimated relative height is shown in Table 3. According to Figs. 12 (c), (f), (i) and (l), the final height precision only depends on the baseline length of the LBI. Thus the 3D reconstruction obtained by baseline configuration 2 shows the best performance.

#### 4.1.2 Vegetation canopies

Virtual 3D scenes with deterministic fractal trees on a DEM are generated using the following steps. First, the trees are parsed into canonical components or scatterers (i.e., dielectric cylinders and disks). Then, the generalized Rayleigh-Gans (GRG) approximation and the infinite-cylinder approximation are used for the coherent scattering calculations Karam et al. (1988). Four-path multiple scattering mechanisms between the vegetation and the ground, i.e., direct scatterer scattering, ground-scatterer scattering, scatterer-ground scattering, and scatterer-ground-scatterer scattering, are considered Xue et al. (2020). Additionally, the scattering matrix of the rough surface under the vegetation can be obtained by the small perturbation method Williams (2006); Jin and Xu (2013). Assuming Foldy's approximation Tsang et al. (1985), the extinction induced by the electromagnetic waves penetrating the scatterers in the vegetation is also considered. In this study, a total of 200 trees with an average height of 30 m are simulated and randomly distributed over the whole area, as shown in Fig. 13. Note that a radar with a long wavelength is usually used in practical tree height measurements to obtain good penetration. In this simulation, the radar wavelength is set to 0.24 m and the optimal baselines are revised accordingly. Using the same procedure as in Section 4.1.1, the estimated heights of the radar scatterers are obtained. Since the estimated height in the InSAR approach is defined with respect to a specified reference point, the final absolute height contains a constant height offset, leading to significant coordinate offsets in both the horizontal and vertical directions. Such a height offset can be estimated using a searching strategy based on an initial reference DEM. Details can be found in Hu et al. (2019). The estimated height of the radar scatterers after the absolute height correction is shown in Fig. 14 (a). To show the inverted forest height clearly, the initial DEM is used to remove the ground elevation change, as shown in Fig. 14 (b).

\begin{table} \begin{tabular}{l c c c c} \hline \hline Baseline configuration & 1 & 2 & 3 & 4 \\ \hline SBI & 8.61 & 6.45 & 14.04 & 10.68 \\ MBI & 0.92 & 0.48 & 1.11 & 2.55 \\ LBI & 0.41 & 0.31 & 1.08 & 2.27 \\ \hline \hline \end{tabular} \end{table} Table 3: Relative height precision (m) for different TDA-InSAR baseline configurations

Figure 10: Scene data for the simulation.

Figure 9: Variation of the perpendicular baseline. (a) Ascending track; (b) Descending track.
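The constant height offset mentioned above can, for instance, be estimated with a simple one-dimensional search against the reference DEM. The sketch below is purely illustrative and is not the searching strategy of Hu et al. (2019); the function name, the candidate range and the robust misfit are assumptions made here for clarity.

```python
import numpy as np

def estimate_height_offset(scatterer_xy, relative_heights, ref_dem_sampler,
                           candidates=np.arange(-200.0, 200.0, 0.5)):
    """Grid search for the constant offset that best aligns InSAR heights
    (defined relative to an arbitrary reference point) with a reference DEM.

    scatterer_xy     : (N, 2) horizontal coordinates of the radar scatterers
    relative_heights : (N,) estimated relative heights of the scatterers
    ref_dem_sampler  : callable returning the reference DEM height at (x, y)
    """
    dem_heights = np.array([ref_dem_sampler(x, y) for x, y in scatterer_xy])
    # Robust misfit (median absolute difference) evaluated for each candidate offset.
    costs = [np.median(np.abs(relative_heights + d - dem_heights)) for d in candidates]
    return candidates[int(np.argmin(costs))]
```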
Regarding the quantified analysis, three indicators, i.e., coverage, mean error (ME) and root mean square error (RMSE), are used to evaluate the reconstructed 3D forest height. Since the image resolution is coarser than the simulated trees, the complex value of every pixel is the summation of all elementary scatterers within a resolution cell, which increases the difficulty of associating the radar scatterers with the corresponding trees. In this study, the nearest neighbor search approach is used to snap the point cloud of the simulated trees to the most likely radar scatterers. Based on the searching result, the ME, RMSE and coverage of the reconstructed 3D forest height are 0.14 m, 1.78 m and 59.7%, respectively. A similar 3D forest reconstruction obtained by conventional 3D PU in Hu et al. (2022c) shows a height RMSE of 1.25 m using 29 repeat-pass interferograms. To further evaluate statistically the similarity between the forest heights obtained by conventional and asymptotic 3D PU, the two-sample F test is formulated as follows DeGroot and Schervish (2012)

\[F_{0}=\frac{\sigma_{1}^{2}}{\sigma_{2}^{2}} \tag{24}\]

where \(\sigma^{2}\) denotes the sample variance. Note that the impacts of both the atmospheric delay and the orbit error are neglected in this part; they will be investigated in the following section. The main factor that affects the precision of the estimated tree height is therefore system noise. The statistic \(F_{0}\) can then be calculated under the assumption that the height error follows a Gaussian distribution. Setting the significance level to \(\alpha=0.02\), the critical value \(F_{\alpha}\) is 4.5. In this case, \(F_{0}\) equals 1.99, which is smaller than the critical value \(F_{\alpha}\), showing that there is no significant difference between the relative heights estimated by conventional and asymptotic 3D PU. Thus the MB interferograms obtained by the TDA-InSAR can achieve fast 3D forest reconstruction.

### Impact of the main error sources

#### 4.2.1 Orbit error

Both mono-static and bi-static dual-satellite interferograms will suffer from the orbit error. In this demonstration, the parameters of baseline configuration 2 were used to generate the simulated MB interferograms and the orbit error was simulated using the nonlinear model in Eq. (16). The offsets \(\delta B_{c}\) and \(\delta B_{n}\) are set to 0.3 and 0.1 m, while the rates \(\delta\dot{B}_{c}\) and \(\delta\dot{B}_{n}\) are set to 0.02 and 0.02 m/PRF, respectively. The corresponding simulated orbit error phase is shown in Fig. 15 (a). It is obvious that the orbit error is a spatially correlated signal, which will be mixed with the estimated height during the spatial PU. Fortunately, the dual-antenna interferogram is independent of the orbit error, which provides a good initialization for the spatial PU of the tandem satellite interferogram. The result of asymptotic 3D PU with the orbit error is shown in Fig. 15 (b), showing that the orbit error significantly biases the final height estimation. The standard deviation of the height difference between the estimated height and the true value is 26.53 m. Using the proposed height inversion with orbit error compensation, the final height estimation and the estimated orbit error are shown in Figs. 16 (a) and (b). The standard deviation of the height difference between the refined height and the true value is 0.33 m, which is consistent with the relative height precision obtained in the simulation without orbit error (see Table 3).
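As an illustration of the two-sample F test of Eq. (24) used in Section 4.1.2, a minimal numerical sketch is given below. The helper name and interface are illustrative, and since the sample sizes of the two height-error sets are not stated in the text, the critical value \(F_{\alpha}=4.5\) cannot be reproduced exactly without them.

```python
import numpy as np
from scipy.stats import f

def two_sample_f_test(height_errors_1, height_errors_2, alpha=0.02):
    """F_0 = sigma_1^2 / sigma_2^2 (Eq. 24), compared against the upper
    critical value of the F distribution at significance level alpha."""
    e1, e2 = np.asarray(height_errors_1), np.asarray(height_errors_2)
    f0 = np.var(e1, ddof=1) / np.var(e2, ddof=1)          # ratio of sample variances
    f_crit = f.ppf(1.0 - alpha, len(e1) - 1, len(e2) - 1)  # upper critical value
    return f0, f_crit, f0 < f_crit  # True means no significant variance difference
```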
#### 4.2.2 Atmospheric delay in bi-static interferogram

Eliminating the atmospheric effect is the main challenge for spaceborne SAR interferometry due to its spatio-temporal variation. For baseline configurations 1, 2 and 3 of the TDA-InSAR system, the dual-satellite interferogram is acquired in a bi-static mode, so the radar waves propagate through the same tropospheric refractivity but along slightly different paths. The impact of the atmospheric effect caused by this path difference should therefore be investigated.

Figure 11: Simulated MB interferograms with the baselines of (a) 15 m, (b) 150 m and (c) 300 m.

In fact, a large part of the total atmospheric delay is almost constant, i.e., the ionospheric and hydrostatic tropospheric delay van Leijen (2014). The main contributor to the spatial variation is turbulence, which is effectively due to the water vapor distribution. Using a numerical atmospheric model, we investigate the impact of turbulence on the height inversion. The model used in this research is DALES, the Dutch Atmospheric Large Eddy Simulation model Heus et al. (2010), which can provide a reliable simulation of the 3D tropospheric refractivity distribution. In the practical process, the simulation was conducted based on radiosonde and ground observations in Oklahoma and Kansas, USA Brown et al. (2002). Additionally, the simulation is characterized by shallow cumulus convection with a cloud cover between 20 and 30 percent, which is typical of fair-weather clouds over continental mid-latitudes Hahn and Warren (2007). We thus obtain a 3D refractivity distribution with the relevant parameters shown in Table 4. With the 3D refractivity distribution, the tropospheric delay along the line of sight can be computed using ray-tracing Urquhart et al. (2011). Since the refractivity change due to the bending error for typical SAR incidence angles is negligible Parkinson et al. (1996), the total tropospheric delay for one acquisition is obtained by integrating the refractivity from the elevation of the target to the total height of the simulated troposphere. The evaluation of the atmospheric effect due to the path difference was demonstrated using the parameters of baseline configuration 2. Then the atmospheric interferograms, i.e., synthetic interferograms only sensitive to the atmospheric delay variability, with the perpendicular baselines of 15, 150 and 300 m are obtained, as shown in Fig. 17. Since the perpendicular baselines of the TDA-InSAR system are very small compared with the radar altitude, the integration paths of the different acquisitions are almost the same, leading to a very limited phase change. Additionally, Fig. 17 also shows that such atmospheric delay increases with the perpendicular baseline, which may be confused with the estimated height. In this case, the histogram of the height bias caused by the atmospheric effect is shown in Fig. 18, indicating that the atmospheric delay leads to a significant height offset. Fortunately, such an offset can be neglected since InSAR only provides the height estimation relative to a specified reference point. Moreover, the spatial variation of the height bias is only at the centimeter level, which is much smaller than the expected height precision. Thus, the atmospheric effect can be neglected if the TDA-InSAR works in a bi-static mode.
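The slant-delay integration described above can be sketched as follows. This is a simplified straight-line integration of a vertical refractivity profile — bending is neglected, as justified by Parkinson et al. (1996) — rather than the full 3D ray-tracer of Urquhart et al. (2011); the exponential profile and the mapping by \(1/\cos\theta\) are illustrative assumptions.

```python
import numpy as np

def slant_tropospheric_delay(refractivity_ppm, heights_m, target_height_m, incidence_rad):
    """One-way tropospheric delay (m) along a straight slant path: the refractivity
    profile N (in ppm) is integrated from the target elevation to the model top
    (trapezoidal rule) and mapped to the line of sight with 1/cos(incidence)."""
    mask = heights_m >= target_height_m
    hm, Nm = heights_m[mask], refractivity_ppm[mask]
    zenith_delay = 1e-6 * np.sum(0.5 * (Nm[1:] + Nm[:-1]) * np.diff(hm))
    return zenith_delay / np.cos(incidence_rad)

# Illustrative profile on the vertical grid of Table 4 (40 m steps up to 4500 m).
h = np.arange(0.0, 4500.0 + 40.0, 40.0)
N = 320.0 * np.exp(-h / 8000.0)   # placeholder exponential refractivity profile (ppm)
print(slant_tropospheric_delay(N, h, 0.0, np.deg2rad(30.0)))
```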
Figure 14: 3D forest reconstruction. (a) Estimated heights of the radar scatterers; (b) forest height inversion without the ground elevation change.

Figure 13: Simulated 3D forest.

Figure 15: Impact of the orbit error. (a) Simulated orbit error; (b) height estimation with the orbit error.

#### 4.2.3 Atmospheric delay in mono-static interferogram

On the contrary, baseline configuration 4 of the TDA-InSAR system does not require strict synchronization. If the two satellites work separately in a mono-static mode, the change of the tropospheric refractivity will increase the variation of the atmospheric effects. Based on the DALES model, we simulate two 3D tropospheric refractivity distributions with a time interval of 15 minutes and compute the corresponding synthetic tropospheric interferogram, as shown in Fig. 19. It shows that the phase change caused by the atmospheric effect exceeds one cycle, which is equal to the contribution of a 75 m height and would significantly bias the final height estimation. If the atmospheric delay is not considered during the processing, the final height estimation essentially reproduces the atmospheric delay, as shown in Fig. 20 (a). On the contrary, if additional parameters are used to model the atmospheric delay during the processing, the final height estimation is better, as shown in Fig. 20 (b). However, comparing Fig. 12 (l) with Fig. 20 (b), the height precision of baseline configuration 4 is poorer than that of baseline configuration 2 since the atmospheric effect biases the ambiguity estimation. Therefore, strict synchronization is necessary to avoid the atmospheric effect and thus provide a good height precision.

\begin{table} \begin{tabular}{l l} \hline Parameters & Values \\ \hline Scale (km \(\times\) km) & 49.3\(\times\) 49.3 \\ Horizontal resolution (m\(\times\)m) & 40\(\times\) 40 \\ Maximal height (m) & 4500 \\ Vertical resolution (m) & 40 \\ \hline \end{tabular} \end{table} Table 4: Parameters of the refractivity distribution in the simulation

Figure 16: Results of the height inversion with the orbit error compensation. (a) Height estimation with the orbit error compensation; (b) estimated orbit error.

Figure 17: Simulated synthetic tropospheric interferograms using baseline configuration 2 with the perpendicular baselines of (a) 15 m, (b) 150 m and (c) 300 m.

Figure 18: Histogram of the height error induced by the atmospheric delay.

## 5 Conclusions

In this paper, we propose a new concept of TDA-InSAR to acquire the specified MB interferograms for asymptotic 3D PU, which enables a reliable 3D reconstruction using very sparse acquisitions. Two indicators, the relative height precision and the success rate of phase unwrapping, are used to develop the optimal baseline design, and the performance of the different baseline configurations is evaluated accordingly. Assuming that the antenna length is 15 m, the optimal satellite baseline of the bi-static mode can be selected in a flexible range from 50 m to 300 m, while that of the mono-static mode is fixed at 100 m. The flexible baseline selection guarantees the optimal baseline design when the orbit control is inaccurate. Although the bi-static mode requires a strict signal synchronization between the tandem satellites, increasing the complexity of the hardware design, the use of a longer baseline interferogram leads to a better relative height precision.
Additionally, simulation-based evaluation of one example configuration shows that the proposed system enables a 3D reconstruction with a relative height precision of 0.3 m for built-up or man-made objects and of 1.7 m for vegetation canopies. This single-pass SAR system combines the advantages of both multi-antenna and multi-satellite SAR and thus shows a good coherence in both built-up objects and vegetation canopies. Considering the impact of the atmospheric delay, the LES model is used to obtain a realistic atmospheric simulation, and the analysis of the atmospheric effects shows that the slight path difference due to the baseline diversity does not affect the height inversion, whereas the temporal refractivity change leads to a significant height bias. This indicates that the atmospheric effect can be neglected if the proposed TDA-InSAR works in a bi-static mode. Using the asymptotic 3D PU, the dual-satellite interferogram can be successfully unwrapped with the help of the dual-antenna interferogram and the orbit phase trend can be well compensated. Thus, the proposed TDA-InSAR system enables fast 3D reconstruction in a single flight, showing its scientific importance in many applications, such as terrain mapping, target recognition or forest height inversion, especially for users demanding a single-pass acquisition.

Figure 19: Simulated synthetic tropospheric interferogram with a temporal baseline of 15 minutes.

Figure 20: Height estimation without (a) and with (b) atmospheric delay correction using the single dual-antenna satellite SAR.

## Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

## Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61991422 and 62201158.
2304.08270
Controlled regularity at future null infinity from past asymptotic initial data: massless fields
We study the relationship between asymptotic characteristic initial data at past null infinity and the regularity of solutions at future null infinity for the massless linear spin-s field equations on Minkowski space. By quantitatively controlling the solutions on a causal rectangle reaching the conformal boundary, we relate the (generically singular) behaviour of the solutions near past null infinity, future null infinity, and spatial infinity. Our analysis uses Friedrich's cylinder at spatial infinity together with a careful Gr\"onwall-type estimate that does not degenerate at the intersection of null infinity and the cylinder (the so-called critical sets).
Grigalius Taujanskas, Juan A. Valiente Kroon
2023-04-17T13:26:51Z
http://arxiv.org/abs/2304.08270v2
# Controlled regularity at future null infinity from past asymptotic initial data: massless fields

###### Abstract

We study the relation between asymptotic characteristic initial data at past null infinity for the massless linear spin-\(s\) field equations and the regularity of the solutions at future null infinity. We quantitatively control the solutions to the spin-\(s\) equations on a causal rectangle reaching spatial infinity and containing portions of past and future null infinity. As a consequence, we show that even linear fields generically acquire polyhomogeneous expansions near future null infinity, with the regularity of the terms controlled precisely in terms of the regularity of the past characteristic initial data. Our analysis makes use of Friedrich's representation of spatial infinity together with a careful Grönwall-type estimate that does not degenerate at the critical sets where null infinity meets spatial infinity, and Luk's strategy for the construction of optimal existence domains for the characteristic initial value problem.

###### Contents

* 1 Introduction
* 2 Geometric setup
* 2.1 Point representation of spatial infinity: the Penrose gauge
* 2.2 Cylinder representation of spatial infinity: the F-gauge
* 2.3 Null frame near \(\mathcal{I}^{0}\)
* 2.4 Spin-\(s\) equations in the F-gauge
* 3 Estimates near \(\mathscr{I}^{+}\)
* 3.1 Geometric setup
* 3.2 Construction of estimates
* 3.3 Asymptotic expansions near \(\mathcal{I}\)
* 3.4 Existence of solutions
* 4 Asymptotic characteristic initial value problem for the spin-\(s\) equations
* 4.1 Freely specifiable data on \(\mathscr{I}^{-}\)
* 4.2 Symmetric hyperbolicity and the local existence theorem
* 5 Estimates near \(\mathscr{I}^{-}\)
* 5.1 Estimating the radiation field
* 5.2 The bootstrap bound
* 5.3 Asymptotic expansions near \(\mathscr{I}^{-}\)
* 5.4 Last slice argument
* 6 Controlling the solutions from \(\mathscr{I}^{-}\) to \(\mathscr{I}^{+}\)
* 6.1 From \(\mathscr{I}^{-}\) to \(\mathcal{S}_{-1+\varepsilon}\)
* 6.2 From \(\mathcal{S}_{-1+\varepsilon}\) to \(\mathscr{I}^{+}\)
* 6.3 Prescribing the regularity at \(\mathscr{I}^{-}\)
* 6.4 Main result
* 7 Physical leading order terms
* 8 Concluding remarks
* A Geometry of \(\mathrm{SU}(2)\)
* A.1 Basic properties
* A.2 The vector fields \(\mathbf{X}_{\pm}\) and \(\mathbf{X}\)
* A.2.1 Coordinate expressions
* A.2.2 A technical lemma
* A.3 The functions \(T_{m}{}^{j}{}_{k}\)
* B Spin-\(s\) equations
* B.1 Hyperbolic reduction
* B.2 Transport equations along null geodesics
* B.3 Wave equations
* B.4 A more general gauge
* C F-expansions
* C.1 Interior equations on \(\mathcal{I}\)
* C.2 Expansions in terms of \(T_{m}{}^{j}{}_{k}\)
* C.2.1 Properties of the Jacobi differential equation
* C.2.2 Solutions for \(|s-k|\leqslant q<p\)
* C.2.3 Solutions for \(p=q\)
* D Expansions near \(\mathscr{I}^{-}\)
* D.1 Leading order terms
* D.2 Higher order terms

## 1 Introduction

Over the last half-century Penrose's notion of _asymptotic simplicity_ [20] has been instrumental in the development of the current understanding of the asymptotic behaviour of massless fields, including gravity. The main idea rests on the observation that techniques from conformal geometry allow one to make use of local computations at the conformal boundary to analyse the far-field regime of fields--see e.g. [10]. In particular, questions regarding the asymptotic behaviour of physical fields translate into questions about the regularity of suitably rescaled fields at the conformal boundary--see e.g. [14, 15].
An important application of Penrose's ideas is to _scattering problems_--the asymptotic analysis of fields propagating on a given region of spacetime--see e.g. [16, 17, 18] and references therein. Scattering problems are naturally formulated in terms of _characteristic initial value problems_ in which information about incoming waves is encoded on a null hypersurface. The use of Penrose's compactification allows a rigorous formulation of this type of problem by prescribing data on the null hypersurface given by past null infinity, so that one is discussing the interaction of waves propagating from _infinity_. A central question in scattering problems is the extent to which asymptotic data prescribed at past null infinity determines the behaviour, and in particular the regularity, of the fields at future null infinity. Generically (e.g. unless the data is compactly supported), making this connection requires some discussion of the set linking the two connected components of null infinity, i.e. spatial infinity. In Penrose's original discussion [11] of the asymptotic properties of isolated systems, spatial infinity is represented by a point, \(i^{0}\). However, it has long been recognised that this representation leads to substantial difficulties when attempting to discuss evolution problems in a neighbourhood of spatial infinity [12]. At the core of the problem lies the fact that spatial infinity is essentially a caustic--it is the intersection of the integral curves of the generators of both past and future null infinities. Moreover, in Penrose's representation any discussion of the regularity of fields in a neighbourhood of spatial infinity requires the use of somewhat awkward direction-dependent limits--see e.g. [1, 10]. Alternative representations of spatial infinity have been put forward in the literature, each aimed at addressing a specific aspect of the problem of spatial infinity--see e.g. [1, 2, 3, 4, 13, 14]. A discussion of the connections between some of these representations can be found in [15]. A common aspect of all of these alternative representations of spatial infinity is that the point \(i^{0}\) is blown-up to an extended set in both spatial and temporal directions. In particular, the spatial sections of the blow-up of spatial infinity usually become sets with the topology of the 2-sphere \({\mathbb{S}}^{2}\). The technical details of how the blow-up of \(i^{0}\) is achieved depends on the particular approach. The key point, however, is that blowing up \(i^{0}\) avoids the need to talk about direction-dependent limits, and moreover provides a framework to study the way that structures defined at past and future null infinities, \({\mathscr{J}}^{-}\) and \({\mathscr{I}}^{+}\) respectively, are related. In this article we make use of a representation of spatial infinity first considered in [12]. This so-called _cylinder at spatial infinity_ was introduced by Friedrich in 1998 as a tool for the construction of asymptotically simple spacetimes from a spacelike Cauchy problem for the _conformal Einstein field equations_[12]. The utility of this conformal gauge lies in the fact that it permits the formulation of a _regular initial value problem in a neighbourhood of spatial infinity_ for which both the equations and the initial data are regular. In general, the construction of this conformal gauge is based on the use of certain conformal invariants (conformal geodesics) in a way that the location of the conformal boundary is known _a priori_ from the initial data. 
Crucially, the structural properties of this framework allow one to relate the regularity properties of solutions at null infinity (i.e. the future asymptotic behaviour of the physical fields) to properties of the initial data. The geometric underpinning of this representation of spatial infinity ensures that conclusions about the regularity of the various fields thus obtained are geometrically meaningful, as opposed to being artefacts of the choice of gauge. The programme started in [12] and continued in [13, 14] has led to the identification of two conceptually distinct classes of _obstructions_ to the smoothness of null infinity. The first class is tied to the fact that spatial infinity is a caustic, as mentioned above. The second class is associated with the singularity in the conformal structure at spatial infinity arising from the presence of mass in the spacetime. Due to the nonlinearities in the Einstein field equations, in full nonlinear general relativity these two classes of obstructions and their consequences are unavoidably intertwined and hard to disentangle. However, a general picture has emerged from this analysis, namely that the regularity conditions at null infinity assumed in Penrose's original definition of asymptotic simplicity are non-generic and, in fact, unnecessarily restrictive. Even in the case of initial data which is _analytic_ around spatial infinity, the best one can generically hope for is polyhomogeneous asymptotics. This picture has been reinforced by the recent proof in [15] of the nonlinear stability of the Minkowski spacetime, in which the global existence of polyhomogeneous solutions was shown. The analysis in [12] is based on an initial value problem from a spacelike Cauchy hypersurface. However, from the point of view of formulating a scattering theory for the Einstein field equations, characteristic initial value problems provide a more natural setting. Characteristic initial value problems in the conformal setting have been discussed in [12, 13, 14], formulating an asymptotic characteristic initial value problem with data prescribed on past null infinity (see also [1, 2]). The connection between the characteristic initial value problem formulation and Friedrich's representation of spatial infinity (a spacelike initial value problem formulation) has been explained by Paetz in [16]. The analysis of Paetz in particular identifies the regularity conditions on the characteristic data which ensure the smoothness of solutions to the conformal Einstein field equations at the sets where null infinity intersects spatial infinity. In what follows we refer to these sets as the _critical sets_, and denote them by \(\mathcal{I}^{\pm}\). ### Analysis of linear fields Being constructed from conformal invariants, Friedrich's representation of spatial infinity can also be used to analyse the behaviour of linear fields on a fixed background. In particular, massless spin-\(s\) fields provide a useful model to study the implications of the first class of obstructions to the smoothness of null infinity--the caustic structure of spatial infinity--see e.g. [22, 23, 24, 25, 26]. Indeed, a satisfactory understanding of the behaviour of linear fields near spatial infinity is essential for a complete resolution of the problem of spatial infinity for the Einstein field equations. In view of this, in the present article we study the massless spin-\(s\) field equations, for any \(s\in\frac{1}{2}\mathbb{N}\), on the Minkowski spacetime. 
Friedrich's representation of spatial infinity--the so-called _F-gauge_--blows up the point \(i^{0}\) to a cylinder, the cylinder at spatial infinity \(\mathcal{I}\) (see Figure 4 in the main text). The cylinder \(\mathcal{I}\) can be regarded as a limit set of outgoing and incoming null cones in a neighbourhood of spatial infinity. As a consequence, \(\mathcal{I}\) is a total characteristic of the massless spin-\(s\) equations--that is, the spin-\(s\) equations become intrinsic transport equations on \(\mathcal{I}\). As such, \(\mathcal{I}\) allows one to transport information between the critical sets \(\mathcal{I}^{\pm}=\mathcal{I}\cap\mathscr{I}^{\pm}\), and \(\mathcal{I}^{0}\), an intersection of \(\mathcal{I}\) with a Cauchy surface \(\mathcal{S}_{\star}\). This representation of spatial infinity arises naturally from an implementation of a _conformal Gaussian gauge_ by means of which coordinates are propagated from the initial surface along conformal geodesics. In the case of the Minkowski spacetime, however, it turns out that the F-gauge can also be easily obtained by writing down an ad hoc conformal transformation. Friedrich's representation of spatial infinity is in turn related, by a further conformal rescaling, to Ashtekar's _hyperboloid_ representation of spatial infinity, which is widely used and cited in the physics literature--see [23]. As already mentioned, the cylinder at spatial infinity \(\mathcal{I}\) intersects the two disjoint components of null infinity, \(\mathscr{I}^{\pm}\), at the critical sets \(\mathcal{I}^{\pm}\). While \(\mathcal{I}\) is a total characteristic for the massless spin-\(s\) field equations, each of \(\mathcal{I}^{+}\) and \(\mathcal{I}^{-}\) is only a standard characteristic: only a subset of the equations become transport equations on each of \(\mathcal{I}^{+}\) and \(\mathcal{I}^{-}\). This transition of behaviour between \(\mathcal{I}^{\pm}\) and \(\mathcal{I}\) gives rise to a structural degeneracy in the evolution equations. While away from the critical sets \(\mathcal{I}^{\pm}\) the massless spin-\(s\) equations are a symmetric hyperbolic system, one finds that at the critical sets the matrix multiplying the time derivatives loses rank, and hence fails to be positive definite. As a consequence, the standard hyperbolic energy estimates typically used to control the behaviour of solutions no longer hold. The standard estimates for symmetric hyperbolic systems (see e.g. [17]) rely on very general algebraic properties of the principal part of the equations and discard the detailed structure of the lower order terms. However, for degenerate systems like the massless spin-\(s\) system on \(\mathcal{I}\) described in the previous paragraph, this lower order structure becomes crucial. Indeed, by exploiting the lower order properties of the spin-2 field equations, Friedrich showed in [14] (see also [26]) that it is possible to construct certain estimates which remain valid up to \(\mathcal{I}^{\pm}\)--we extend this strategy to arbitrary massless spin-\(s\) fields in Section 3. This strategy exploits the fact that the total characteristic nature of \(\mathcal{I}\) allows one to construct a particular type of asymptotic expansion of the solution, the regularity of the terms of which can be explicitly determined and controlled in terms of the structure of, say, data on an initial hypersurface \(\mathcal{S}_{\star}\). In particular, one sees that, generically, the terms in these expansions have logarithmic divergences at \(\mathcal{I}^{\pm}\). 
This is ultimately a consequence of the degeneracy of the evolution equations at \(\mathcal{I}^{\pm}\) mentioned above, and is consistent with the fact that the standard energy estimates fail at the critical sets. An explicit computation of these expansions reveals that, at each order, the logarithmic divergences are associated with a specific spherical harmonic. Moreover, at each order in the expansion the logarithmic terms are regularized in a very specific way by multiplication by a polynomial expression which vanishes at \(\mathcal{I}^{\pm}\). The higher the order in the expansion, the higher the order of this smoothing polynomial--consequently, the logarithmic divergences become milder as one looks higher in the expansion. By taking a suitable number of derivatives, it is therefore possible to eliminate _specific_ logarithmic terms in this expansion. This heuristic idea is made precise by the estimates, for spin-2 fields, in [14], which control the difference between a true solution to the massless spin-2 equations and the asymptotic expansions obtained from integrating the transport equations at the cylinder \(\mathcal{I}\). In the first part of this article we extend Friedrich's strategy [13] to establish that, generically, if the Cauchy data does not vanish at \(\mathcal{I}\), the solutions to the massless spin-\(s\) equations admit polyhomogeneous expansions for any \(s\in\frac{1}{2}\mathbb{N}\). The details of this approach are discussed in Section 3.3 and Appendix C. Although it may not be immediately evident, a similar strategy is pursued in the analysis of the linearisation of the Einstein field equations in a neighbourhood of spatial infinity in [13]. #### Asymptotic characteristic initial value problem near spatial infinity The second--and main--objective of this article is to adapt the methods of [13] to the study of the asymptotic characteristic initial value problem near spatial infinity. That is, we prescribe initial data for the spin-\(s\) field on a portion of past null infinity which extends all the way to spatial infinity, and on an outgoing null hypersurface \(\underline{\mathcal{B}}_{\varepsilon}\) emanating from \(\mathscr{I}^{-}\)--see Figure 1. The objective is therefore to understand the solution on a causal diamond which contains a portion of future null infinity \(\mathscr{I}^{+}\). Our setup is in the spirit of Luk's construction of an _optimal_ existence domain for the characteristic initial value problem, whereby existence of a solution is guaranteed in a neighbourhood of the initial null hypersurface as long as the characteristic initial data is appropriately controlled--see [12]. Luk's approach relies on the well-known observation that the Einsten field equations, when written in a double null gauge (which is naturally adapted to characteristic problems), form a symmetric hyperbolic system with a hierarchical structure. It is precisely this hierarchical structure, together with suitable bootstrap assumptions, that is key to obtaining the optimal existence result. A similar (albeit simpler) hierarchical structure is present for linear fields. However, Luk's strategy is not directly applicable to the domain \(\mathscr{D}\) in Figure 1 due to the degeneracy of the evolution equations at spatial infinity, as discussed above. Figure 1: The Penrose diagram of the Minkowski spacetime showing the existence domain \(\mathscr{D}\) for solutions to the spin-\(s\) equations considered in this article. 
An asymptotic characterisitic initial value problem is formulated with data on past null infinity \(\mathscr{I}^{-}\) and on an incoming lightcone \(\underline{\mathcal{B}}_{\varepsilon}\). The existence domain \(\mathscr{D}\) is a causal domain reaching spatial infinity and containing a portion of future null infinity \(\mathscr{I}^{+}\). The main technical challenge in this analysis is the degeneracy of the spin-\(s\) equations at \(i^{0}\). The strategy put forward in this article is an amalgamation of the techniques in [11] for the construction of estimates which are resilient at spatial infinity, and the strategy in [12] for the construction of optimal existence domains for the characteristic initial value problem. In order to have suitable control of the characteristic initial data on past null infinity, a certain amount of regularity must be assumed on the freely specifiable data at the past critical set \(\mathcal{I}^{-}\). As we regard the present analysis as a proof of concept, our assumptions on the regularity of the data at \(\mathscr{I}^{-}\) are not sharp. If required, more general situations may be studied within the present framework. The **main result** of this article is as follows. Given asymptotic characteristic initial data on \(\mathscr{I}^{-}\) for the massless spin-\(s\) equations which possesses a regular asymptotic expansion towards \(\mathcal{I}^{-}\), there exists a unique solution to the spin-\(s\) equations in a neighbourhood of spatial infinity which contains pieces of both past and future null infinity, as shown in Figure 1, and moreover, the regularity of the solution at \(\mathscr{I}^{+}\) is controlled in terms of properties of the data at \(\mathscr{I}^{-}\). Remarkably, however, the regularity of the solution at future null infinity is determined not solely by the regularity of the data on \(\mathscr{I}^{-}\), but also by its multipolar structure at the critical set \(\mathcal{I}^{-}\). In particular, simply requiring additional regularity of the characteristic data does not necessarily result in improved behaviour at future null infinity. This is consistent with previous results obtained in the analysis of the corresponding Cauchy problem--see e.g. [13]. A detailed statement of our main result is provided in Theorem 6.1. #### Strategy of the proof As already mentioned, in our analysis we use Friedrich's representation of spatial infinity, the F-gauge. In this gauge the causal diamond depicted in Figure 1 corresponds to the grey area in Figure 2. We divide this domain in two subdomains, the _lower domain_\(\underline{\mathcal{N}}_{\epsilon}\) and the _upper domain_\(\mathcal{N}_{1}\), which are separated by a spacelike hypersurface \(\mathcal{S}_{-1+\varepsilon}\) terminating at the cylinder at spatial infinity \(\mathcal{I}\). **On the upper domain \(\mathcal{N}_{1}\)** we look for solutions which can be written as a formal asymptotic expansion around \(\mathcal{I}\) (which can be explicitly computed in terms of the Cauchy data) plus a remainder term. As the terms in this formal expansion are then known explicitly, its regularity can be controlled by fine-tuning the multipolar structure of the initial data. The computation of the formal expansion makes essential use of the fact that the cylinder at spatial infinity is a total characteristic of the massless spin-\(s\) equations. 
For the remainder, by generalizing the estimates of [10] for spin-2 fields to general spins \(s\), we are able to ensure control all the way up to \(\mathscr{I}^{+}\) in terms of the regularity of the Cauchy data. It is interesting to observe that there is a slightly subtle relation between the control of the remainder obtained via the estimates and the order of the formal expansion, in that the remainder becomes more regular as the order of the expansion increases. **On the lower domain \(\underline{\mathcal{N}}_{\varepsilon}\)** we again consider solutions in the form of an asymptotic expansion plus a remainder. Here, however, the geometric/structural properties of the setting require an expansion with respect to past null infinity rather than the cylinder \(\mathcal{I}\). For simplicity, we assume that this expansion takes the form of a regular Taylor expansion with respect to the parameter defining the location of \(\mathscr{I}^{-}\) (that is, we assume the data on \(\mathscr{I}^{-}\) is sufficiently regular). We note, by contrast, that we shall discover that the coefficients in the asymptotic expansion in the upper domain may contain logarithmic terms, which affect the regularity at \(\mathscr{I}^{+}\). To control the remainder, we construct estimates using a combination of Luk's strategy [14] for the characteristic initial value problem and the techniques in [10]. This construction proceeds in two stages: first, by making use of a bootstrap assumption on the radiation field, we estimate the other \(2s-1\) components of the spin-\(s\) field; in the second stage, we prove the bootstrap bound on the radiation field (recall that the radiation field encodes the freely specifiable data for the spin-\(s\) equations on \(\mathscr{I}^{-}\)). As in the case of the upper domain, the regularity of the remainder depends on the order of the asymptotic expansion. Finally, we stitch together the solutions on the lower and upper domains. We do this by ensuring that the lower solution has enough control at the spatial hypersurface \(\mathcal{S}_{-1+\epsilon}\) to be able to apply our estimates in the upper domain. We thus obtain a statement controlling the solution up to future null infinity in terms of the asymptotic characteristic initial data on past null infinity. The existence and uniqueness of solutions in each domain is proved using standard last slice arguments. ### Outline of the article Section 2 provides a succinct discussion of Friedrich's framework of the cylinder at spatial infinity as well as lays out general properties of the spin-\(s\) field equations in the F-gauge. Section 3 provides the construction of our estimates in the upper domain, where we also extend Friedrich's estimates [10] to arbitrary spins \(s\in\frac{1}{2}\mathbb{N}\). In Section 4 we discuss the set-up of the asymptotic characteristic initial value problem. Here we also discuss the difficulties that arise when attempting to analyse the behaviour of solutions near spatial infinity. Section 5 contains the main insight of our paper and provides the construction of our estimates in the lower domain. In Section 6 we combine the estimates in the lower and upper domains to establish the control of solutions at future null infinity in terms of past asymptotic initial data. Section 7 provides an interpretation of the main results in this article in terms of decay rates in the physical spacetime. We conclude in Section 8 with some remarks and prospective directions for future research. 
The article also contains four appendices containing material essential for the analysis but whose inclusion in the main text would hinder the flow of the reading. Appendix A discusses properties of \(\mathrm{SU}(2)\) which are relevant to the construction of the estimates in the main text. Appendix B provides a detailed derivation of the spin-\(s\) field equations in the F-gauge. Appendix C gives a detailed account of the construction of asymptotic expansions in a neighbourhood of the cylinder at spatial infinity--these expansions are central for the construction of the solution in the upper domain. Finally, Appendix D presents the construction of asymptotic expansions in a neighbourhood of past null infinity--these expansions are key to the construction of the solution in the lower domain. ### Conventions and notation Our conventions are consistent with [14]. In particular, our metric signature is \((+,-,-,-)\), and the Riemann curvature tensor associated with the Levi-Civita connection of a metric \(g_{ab}\) is defined by \([\nabla_{a},\nabla_{b}]u^{d}=R^{d}_{\epsilon ab}u^{c}\). For a given spin dyad \(\{{\alpha}^{A},{\iota}^{A}\}\) with \({{o}_{A}}^{A}=1\), we write \({{\epsilon}_{\mathbf{A}}}^{A}\), \({\mathbf{A}}\in\{0,1\}\), to denote \({{e}_{0}}^{A}={\sigma}^{A}\) and \({{\epsilon}_{1}}^{A}={\iota}^{A}\). Spinorial indices are raised and lowered using the antisymmetric \(\epsilon\)-spinor \({{\epsilon}_{AB}}={{o}_{A}}{{t}_{B}}-{{\iota}_{A}}{{o}_{B}}\) (with inverse \({\epsilon}^{AB}={{o}^{A}}{{t}_{B}}-{{\iota}_{A}}{{o}^{B}}\)), e.g. \(\xi_{B}={\xi}^{A}{{\epsilon}_{AB}}\), using the convention that contracted indices should be "adjacent, descending to the right". As usual, the spacetime metric \(g_{ab}\) decomposes as \(g_{ab}={{\epsilon}_{AB}}\bar{{\epsilon}_{A^{\prime}B^{\prime}}}\), where \(\bar{{\epsilon}_{A^{\prime}B^{\prime}}}=\overline{{\epsilon}_{\mathbf{AB}}}\). The spin dyad \({{\epsilon}_{\mathbf{A}}}^{A}\) gives rise to a tetrad of null vectors \({{\mathbf{e}}_{\mathbf{A}\mathbf{A}^{\prime}}}={{\mathbf{e}}_{\mathbf{A}\mathbf{A}^{\prime}}}^{AA^{ \prime}}{\partial}_{AA^{\prime}}={{\epsilon}_{\mathbf{A}}}^{A}\bar{{\epsilon}_{\bm {A}^{\prime}}}^{A^{\prime}}{\partial}_{AA^{\prime}}\). The spin connection coefficients \(\Gamma_{\mathbf{A}\mathbf{A}^{\prime}}{{}^{B}}_{\mathbf{C}}\) are then defined as \(\Gamma_{\mathbf{A}\mathbf{A}^{\prime}}{{}^{B}}_{\mathbf{C}}=-{{\epsilon}_{\mathbf{C}}}^{Q} \nabla_{\mathbf{A}\mathbf{A}^{\prime}}{{\epsilon}^{B}}_{Q}\), where \(\nabla_{\mathbf{A}\mathbf{A}^{\prime}}={{\mathbf{e}}_{\mathbf{A}\mathbf{A}^{\prime}}}^{AA^{\prime}} \nabla_{AA^{\prime}}\). In Newman-Penrose language, the spin connection coefficients \(\Gamma_{\mathbf{A}\mathbf{A}^{\prime}\mathbf{B}\mathbf{C}}=\Gamma_{\mathbf{A}\mathbf{A}^{\prime}\mathbf{C} \mathbf{B}}\) are exactly the spin coefficients \(\gamma_{\mathbf{A}\mathbf{A}^{\prime}\mathbf{B}\mathbf{C}}\) (see [PR86], Summary of Vol. 1). When integrating over SU(2), \(\mu\) denotes the normalized Haar measure on SU(2). ## 2 Geometric setup In this section we discuss the general geometric setup for our analysis of the region of spacetime in a neighbourhood of spatial and null infinity. ### Point representation of spatial infinity: the Penrose gauge Let \((y^{\mu})\) be Cartesian coordinates on the Minkowski spacetime \(({\mathbb{R}}^{4},\tilde{\mathbf{\eta}})\) with the standard metric \(\tilde{\mathbf{\eta}}=\tilde{\eta}_{\mu\nu}{\bf d}y^{\mu}\otimes{\bf d}y^{\nu}\), where \(\tilde{\eta}_{\mu\nu}=\text{diag}(1,-1,-1,-1)\). 
In spherical coordinates we have that \[\tilde{\mathbf{\eta}}={\bf d}t\otimes{\bf d}t-{\bf d}r\otimes{\bf d}r-r^{2}{\mathbf{ \sigma}},\] where \(t=y^{0}\), \(r^{2}=\sum_{i=1}^{3}y_{i}^{2}\), and \({\mathbf{\sigma}}\) denotes the standard metric on \({\mathbb{S}}^{2}\). On the region \[{\cal N}\equiv\{(y^{\mu})\in{\mathbb{R}}^{4}\mid\tilde{\eta}_{\mu\nu}y^{\mu}y^ {\nu}<0\}\] which includes the asymptotic region around spatial infinity, we consider the coordinate inversion \[y^{\mu}\mapsto x^{\mu}=-\frac{y^{\mu}}{y_{\nu}y^{\nu}},\] and formally extend the domain of validity of the coordinates \((x^{\mu})\) to include the set \({\mathscr{I}}^{\prime}=\{x^{\mu}x_{\mu}=0\}=\{y^{\mu}y_{\mu}=-\infty\}\). The set \({\mathscr{I}}^{\prime}\) formally forms the part of the boundary of \({\cal N}\) at infinity. Defining \(\rho^{2}\equiv\sum_{i=1}^{3}x_{i}^{2}\), the metric \(\tilde{\mathbf{\eta}}\) reads \[\tilde{\mathbf{\eta}}=\frac{1}{(x^{\sigma}x_{\sigma})^{2}}\tilde{\eta}_{\mu\nu}{ \bf d}x^{\mu}\otimes{\bf d}x^{\nu}=\frac{1}{(\rho^{2}-(x^{0})^{2})^{2}}\left({ \bf d}\left(x^{0}\right)\otimes{\bf d}\left(x^{0}\right)-{\bf d}\rho\otimes{ \bf d}\rho-\rho^{2}{\mathbf{\sigma}}\right), \tag{1}\] where on \({\cal N}\) the inverted coordinates \(x^{0}\) and \(\rho\) are given in terms of \(t\) and \(r\) by \[\rho=\frac{r}{r^{2}-t^{2}},\qquad x^{0}=\frac{t}{r^{2}-t^{2}}.\] The right-hand side of (1) is a conformally rescaled Minkowski metric in the new coordinates \((x^{\mu})\), where one notices that on \({\cal N}\) the conformal factor \[\Xi=-x^{\mu}x_{\mu}=-\frac{1}{y^{\mu}y_{\mu}}=\rho^{2}-(x^{0})^{2}\] extends smoothly to \({\mathscr{I}}^{\prime}\). Moreover, the metric \(\eta^{\prime}_{\mu\nu}\equiv\Xi^{2}\tilde{\eta}_{\mu\nu}\) (which is just the Minkowski metric again) does too, and now contains at its origin the point \[i^{0}\equiv\{(x^{\mu})\in{\mathbb{R}}^{4}\mid x^{\mu}=0\}\] called _spatial infinity_, formally a point on the \({\mathscr{I}}^{\prime}\) part of the boundary of \({\cal N}\). The sets \[{\mathscr{I}}^{\pm}\equiv{\mathscr{I}}^{\prime}\cap\{(x^{\mu})\in{\mathbb{R}}^ {4}\mid\eta_{\mu\nu}x^{\mu}x^{\nu}=0,\;\pm x^{0}>0\}\] are called the future and past _null infinity_, respectively, of Minkowski space (near \(i^{0}\)), and are the hypersurfaces which null geodesics in \({\cal N}\) approach asymptotically--see Figure 3. ### Cylinder representation of spatial infinity: the F-gauge In the construction above, the endpoint \(i^{0}\) of all _spacelike_ geodesics in Minkowski space is collapsed to a point, which, for the purposes of working with spacetimes with mass, turns out to be somewhat unnatural (compare this, for example, to \(\mathscr{I}^{\pm}\), the end_surfaces_ of null geodesics). Instead, it is useful to blow up \(i^{0}\) to a cylinder as follows. Set1\(x^{0}=\rho\tau\), and define a new conformal factor2 Footnote 1: Note that in terms of the physical coordinates \(t\) and \(r\), the new coordinate \(\tau\) is given by \(\tau=t/r\). Footnote 2: In terms of physical coordinates \(t\) and \(r\), this new conformal factor is simply \(\Theta=1/r\). This conformal scale in fact appears in the literature on the theory of _conformal scattering_[13, 14, 15], albeit without the use of the F-coordinates \(\tau\) and \(\rho\). It is the definition of these coordinates that is at the core of the blow-up of \(i^{0}\) to a cylinder. 
\[\Theta\equiv\frac{1}{\rho}\Xi=\rho\left(1-\tau^{2}\right).\] We then define the unphysical metric \[\boldsymbol{\eta} \equiv\Theta^{2}\tilde{\boldsymbol{\eta}}\] \[=\frac{1}{\rho^{2}}\left(\rho^{2}\mathbf{d}\tau\otimes\mathbf{d} \tau+\tau\rho\left(\mathbf{d}\tau\otimes\mathbf{d}\rho+\mathbf{d}\rho\otimes \mathbf{d}\tau\right)-\left(1-\tau^{2}\right)\mathbf{d}\rho\otimes\mathbf{d} \rho-\rho^{2}\boldsymbol{\sigma}\right).\] We call the coordinates \(\tau\) and \(\rho\) the _F-coordinates_ on Minkowski space. In terms of \(\tau\) and \(\rho\), the region \(\mathcal{N}\) then reads \[\mathcal{N}=\left\{(\tau,\rho)\times\mathbb{S}^{2}\,|\,-1<\tau<1,\;\rho>0 \right\}\simeq(-1,1)_{\tau}\times(0,\infty)_{\rho}\times\mathbb{S}^{2}\] and \[\mathscr{I}^{\pm}=\left\{(\tau,\rho)\times\mathbb{S}^{2}\,|\,\tau=\pm 1,\;\rho>0 \right\}\simeq(0,\infty)_{\rho}\times\mathbb{S}^{2}.\] It is also convenient to introduce the sets \[\mathcal{I}\equiv\left\{(\tau,\rho)\times\mathbb{S}^{2}\,|\,| \tau|<1,\;\rho=0\right\}\simeq(-1,1)_{\tau}\times\mathbb{S}^{2},\] \[\mathcal{I}^{0}\equiv\left\{(\tau,\rho)\times\mathbb{S}^{2}\,| \,\tau=0,\;\rho=0\right\}\simeq\mathbb{S}^{2},\] \[\mathcal{I}^{\pm}\equiv\left\{(\tau,\rho)\times\mathbb{S}^{2}\,| \,\tau=\pm 1,\;\rho=0\right\}\simeq\mathbb{S}^{2}.\] Figure 3: The point representation of spatial infinity \(i^{0}\) with its neighbourhood \(\mathcal{N}\). The left diagram depicts the region \(\mathcal{N}\) in physical space. The right diagram depicts a conformal extension of the region which includes spatial infinity and a portion of null infinity. In this representation the point \(i^{0}=\{x^{\mu}=0\}=\{\rho=0,\,x^{0}=0\}\) has therefore been blown to the cylinder \({\cal I}\). Note that although \({\cal I}\) is at a finite \(\rho\)-coordinate, it is still at infinity with respect to the metric \(\eta\). We call \({\cal I}\) the _cylinder at spatial infinity_ and \({\cal I}^{\pm}\) the _critical sets_. Let us denote the standard initial hypersurface by \[\tilde{\cal S}\equiv\{(y^{\mu})\in\mathbb{R}^{4}\mid t=y^{0}=0\}\] and set \[{\cal S}\equiv\tilde{\cal S}\cup{\cal I},\qquad\overline{\cal I}\equiv{\cal I }\cup{\cal I}^{+}\cup{\cal I}^{-},\qquad\overline{\cal N}\equiv{\cal N}\cup{ \cal I}^{+}\cup{\cal I}^{-}\cup\overline{\cal I},\] and consider \(\rho\), \(\tau\) and the spherical coordinates \(\sigma\) as coordinates on \[\overline{\cal N}\simeq[-1,1]_{\tau}\times[0,\infty)_{\rho}\times\mathbb{S}_{ \sigma}^{2}.\] In these coordinates the expressions for \(\Theta\) and \(\Xi=\rho\Theta\), and the coordinate expression for the inverse metric \(\eta^{\sharp}\) all extend smoothly to all points of \(\overline{\cal N}\). Observe, however, that \(\eta\) itself degenerates at \(\rho=0\), so that \(\eta\) is only smooth on \({\cal N}\cup{\cal I}^{+}\cup{\cal I}^{-}\). We call this conformal completion of \({\cal N}\), together with the F-coordinates \(\tau\) and \(\rho\), the _F-gauge_[10, 11]. ### Null frame near \({\cal I}^{0}\) In order to perform estimates, we will write the field equations in terms of a null frame \(\{\mathbf{e}_{\mathbf{a}}=\mathbf{e}_{\mathbf{a}}^{\mu}\mathbf{\partial}_{\mu}\}\). 
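As a brief aside, the coordinate expression for \(\boldsymbol{\eta}=\Theta^{2}\tilde{\boldsymbol{\eta}}\) given in Section 2.2 can be checked symbolically. The following is a minimal sketch, assuming sympy is available; the angular part \(-r^{2}\boldsymbol{\sigma}\) rescales directly to \(-\boldsymbol{\sigma}\) and is omitted.

```python
import sympy as sp

tau = sp.symbols('tau', real=True)
rho = sp.symbols('rho', positive=True)

# Physical coordinates in terms of the F-coordinates, using Theta = 1/r = rho*(1 - tau^2).
r = 1 / (rho * (1 - tau**2))
t = tau * r
Theta = rho * (1 - tau**2)

# Gradients of t and r with respect to (tau, rho).
dt = sp.Matrix([sp.diff(t, tau), sp.diff(t, rho)])
dr = sp.Matrix([sp.diff(r, tau), sp.diff(r, rho)])

# (tau, rho)-components of Theta^2 (dt (x) dt - dr (x) dr), i.e. the non-angular part of eta.
g = (Theta**2 * (dt * dt.T - dr * dr.T)).applyfunc(sp.simplify)

# Components read off from the displayed F-gauge metric.
expected = sp.Matrix([[1, tau / rho], [tau / rho, -(1 - tau**2) / rho**2]])
print((g - expected).applyfunc(sp.simplify))   # -> zero matrix
```

The printout is the zero matrix, confirming the coordinate form of \(\boldsymbol{\eta}\) displayed above.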
We make use of the Infeld-van der Waerden symbols to _spinorise_ the frame indices, \(\mathbf{e}_{\mathbf{a}}\mapsto\mathbf{e}_{\mathbf{A}\mathbf{A}^{\prime}}^{\mu}\mathbf{\partial}_{\mu}\), and introduce a frame \(\mathbf{e}_{\mathbf{A}\mathbf{A}^{\prime}}\) satisfying \[\mathbf{\eta}(\mathbf{e}_{\mathbf{A}\mathbf{A}^{\prime}},\mathbf{e}_{\mathbf{B}\mathbf{B}^{\prime}})= \epsilon_{\mathbf{A}\mathbf{B}}\epsilon_{\mathbf{A}^{\prime}\mathbf{B}^{\prime}}.\] Specifically, we choose \[\mathbf{e}_{\mathbf{0}\mathbf{0}^{\prime}}=\frac{1}{\sqrt{2}}\left((1-\tau)\partial_{\tau }+\rho\partial_{\rho}\right),\qquad\mathbf{e}_{\mathbf{1}\mathbf{1}^{\prime}}=\frac{1}{ \sqrt{2}}\left((1+\tau)\partial_{\tau}-\rho\partial_{\rho}\right) \tag{2}\] Figure 4: The cylinder at spatial infinity in the F-gauge. On the left, a depiction suppressing one angular dimension. On the right, a cross section. This type of 2-dimensional diagram is a convenient way of depicting the locations of various hypersurfaces in the F-gauge. Observe that this is a _coordinate diagram_ and not a Penrose diagram. In particular, null geodesics do not correspond to lines with slope of \(\pm 45^{\circ}\). and complex vector fields \(\boldsymbol{e_{01^{\prime}}}\) and \(\boldsymbol{e_{10^{\prime}}}\) which are tangent to the spheres \(\mathbb{S}^{2}_{\tau,\rho}\) of constant \(\tau\) and \(\rho\). However, non-vanishing smooth vector fields on \(\mathbb{S}^{2}\) cannot be defined globally, so we lift the dimension of the spheres by considering _all possible choices_ of such vector fields. Specifically, for each \((\tau,\rho)\) we consider the Hopf map \(p:\mathrm{SU}(2)\simeq\mathbb{S}^{3}\to\mathbb{S}^{2}\simeq\mathrm{SU}(2)/ \mathrm{U}(1)\), which defines a principal \(\mathrm{U}(1)\)-bundle over \(\mathbb{S}^{2}\). This map induces a group of rotations \(\boldsymbol{e_{01^{\prime}}}\mapsto e^{i\alpha}\boldsymbol{e_{01^{\prime}}}\), \(\alpha\in\mathbb{R}\), which leaves the tangent bundle of \(\mathbb{S}^{2}_{\tau,\rho}\) invariant. By lifting each 2-sphere along the Hopf map, we obtain a 5-dimensional manifold \([-1,1]_{\tau}\times(0,\infty)_{\rho}\times\mathrm{SU}(2)\); note that this is a 5-dimensional submanifold of the bundle of normalised spin frames on \(\overline{\mathcal{N}}\setminus\overline{\mathcal{I}}\). The geometric structures on \(\overline{\mathcal{N}}\setminus\overline{\mathcal{I}}\) can be naturally lifted to the 5-dimensional manifold introduced in the previous paragraph. Considering \(\tau\), \(\rho\) and \(t^{\boldsymbol{A}}_{\boldsymbol{B}}\in\mathrm{SU}(2)\) as coordinates on this manifold, the lifted vector fields in (2) have the same coordinate expressions as before. Allowing \(\rho\) to take the value 0, one can extend all geometric structures to include this value. Thus, one considers \((\tau,\rho,t^{\boldsymbol{A}}_{\boldsymbol{B}})\) as coordinates on the extended manifold, which we again denote by \[\overline{\mathcal{N}}\equiv[-1,1]_{\tau}\times[0,\infty)_{\rho}\times\mathrm{ SU}(2)_{t^{\boldsymbol{A}}_{\boldsymbol{B}}}.\] In this setting now \[\mathcal{I}\simeq(-1,1)_{\tau}\times\mathrm{SU}(2),\quad\mathcal{I}^{0}\simeq \mathrm{SU}(2),\quad\mathcal{I}^{\pm}\simeq\mathrm{SU}(2),\quad\text{and} \quad\mathscr{I}^{\pm}\simeq(0,\infty)_{\rho}\times\mathrm{SU}(2)\] as subsets of \(\overline{\mathcal{N}}\). To complete this construction, it remains to choose a frame of complex vector fields on \(\mathrm{SU}(2)\). 
We set \[\boldsymbol{e_{01^{\prime}}}=-\frac{1}{\sqrt{2}}\boldsymbol{X}_{+}\quad\text{ and}\quad\boldsymbol{e_{10^{\prime}}}=-\frac{1}{\sqrt{2}}\boldsymbol{X}_{-},\] where \(\boldsymbol{X}_{\pm}\) are vector fields on \(\mathrm{SU}(2)\) defined in Appendix A (briefly, they are complex linear combinations of two of the three left-invariant vector fields on \(\mathrm{SU}(2)\)). The connection form on the bundle of normalised spin frames then defines connection coefficients \(\Gamma_{\boldsymbol{A}\boldsymbol{A}^{\prime}\boldsymbol{B}\boldsymbol{C}}= \Gamma_{\boldsymbol{A}\boldsymbol{A}^{\prime}\boldsymbol{C}\boldsymbol{B}}\) with respect to the frame \(\boldsymbol{e_{\boldsymbol{A}\boldsymbol{A}^{\prime}}}\), the only independent non-zero values of which are \[\Gamma_{\boldsymbol{00^{\prime}01}}=\Gamma_{\boldsymbol{11^{\prime}01}}=- \frac{1}{2\sqrt{2}}.\] In Newman-Penrose language, these correspond to the spin coefficients \(\varepsilon\) and \(\gamma\), which are gauge quantities with respect to tetrad rescalings in the compacted spin coefficient formalism--see SS4.12, [10]. ### Spin-\(s\) equations in the F-gauge Let \[\phi_{A_{1}\ldots A_{2s}}=\phi_{(A_{1}\ldots A_{2s})}\] be a totally symmetric spinor of valence \(2s\), with \(s\in\frac{1}{2}\mathbb{N}\), where we take \(\mathbb{N}=\{1,\,2,\,3,\,\ldots\}\), i.e. we do not consider the spin zero case (the wave equation). The massless spin-\(s\) equations for \(\phi\) then read \[\nabla^{Q}{}_{A^{\prime}}\phi_{QA_{1}\ldots A_{2s-1}}=0. \tag{3}\] In the F-gauge equation (3) is equivalent to the system \[A_{k} \equiv(1+\tau)\partial_{\tau}\phi_{k+1}-\rho\partial_{\rho}\phi_{ k+1}-\boldsymbol{X}_{+}\phi_{k}+(k+1-s)\phi_{k+1}=0, \tag{4a}\] \[B_{k} \equiv(1-\tau)\partial_{\tau}\phi_{k}+\rho\partial_{\rho}\phi_{ k}-\boldsymbol{X}_{-}\phi_{k+1}+(k-s)\phi_{k}=0, \tag{4b}\] for \(k=0,\,\ldots,\,2s-1\), where \(\phi_{k}\) are the \(2s\) independent components of the spinor \(\phi_{A_{1}\cdots A_{2s}}\) with respect to a spin dyad \(\{\alpha^{A},\iota^{A}\}\) corresponding to the frame \(\{\boldsymbol{e_{\boldsymbol{A}\boldsymbol{A}^{\prime}}}\}\). The derivation of equations (4a) and (4b) is given in Appendix B. We note here that the component \(\phi_{2s}=\phi_{A_{1}\ldots A_{2s}}\iota^{A_{1}}\ldots\iota^{A_{2s}}\) is the _outgoing radiation field_ on \(\mathscr{I}^{+}\), while \(\phi_{0}=\phi_{A_{1}\ldots A_{2s}}\rho^{A_{1}}\ldots\sigma^{A_{2s}}\) is the _incoming radiation field_ on \(\mathscr{I}^{-}\). Estimates near \(\mathscr{I}^{+}\) In this section we construct estimates for the massless spin-\(s\) equations which allow us to control the solutions up to and including \(\mathscr{I}^{+}\) in terms of suitable norms on initial data prescribed on a Cauchy hypersurface. In doing so we recap the estimates of Friedrich [10] and extend them to fields of arbitrary spin \(s\). ### Geometric setup We use the F-gauge as discussed in Sections 2.2 and 2.3. 
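As a concrete illustration, and purely for the reader's orientation, it may help to keep in mind the simplest instance of the system estimated in this section: for \(s=\tfrac{1}{2}\) the equations (4a) and (4b) reduce to the single pair (\(k=0\)) \[(1+\tau)\partial_{\tau}\phi_{1}-\rho\partial_{\rho}\phi_{1}-\boldsymbol{X}_{+}\phi_{0}+\tfrac{1}{2}\phi_{1}=0,\qquad(1-\tau)\partial_{\tau}\phi_{0}+\rho\partial_{\rho}\phi_{0}-\boldsymbol{X}_{-}\phi_{1}-\tfrac{1}{2}\phi_{0}=0,\] with \(\phi_{0}\) the incoming and \(\phi_{1}=\phi_{2s}\) the outgoing radiation field; the estimates below treat the general case through the analogous pairs \((\phi_{k},\phi_{k+1})\).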
Given \(t\in(-1,1]\), \(t>\tau_{\star}\in(-1,1)\), and \(\rho_{\star}>0\), one defines the sets \[\mathcal{N}_{t}\equiv\Big\{(\tau,\rho,t^{\boldsymbol{A}}{}_{\boldsymbol{B}})\,|\,\tau_{\star}\leqslant\tau\leqslant t,\;0\leqslant\rho\leqslant\frac{\rho_{\star}}{1+\tau},\;t^{\boldsymbol{A}}{}_{\boldsymbol{B}}\in\mathrm{SU}(2)\Big\}\subset[-1,1]_{\tau}\times[0,\infty)_{\rho}\times\mathrm{SU}(2),\] \[\mathcal{S}_{t}\equiv\Big\{(\tau,\rho,t^{\boldsymbol{A}}{}_{\boldsymbol{B}})\,|\,\tau=t,\;0\leqslant\rho\leqslant\frac{\rho_{\star}}{1+t},\;t^{\boldsymbol{A}}{}_{\boldsymbol{B}}\in\mathrm{SU}(2)\Big\},\] \[\mathcal{B}_{t}\equiv\Big\{(\tau,\rho,t^{\boldsymbol{A}}{}_{\boldsymbol{B}})\,|\,\tau_{\star}\leqslant\tau\leqslant t,\;\rho=\frac{\rho_{\star}}{1+\tau},\;t^{\boldsymbol{A}}{}_{\boldsymbol{B}}\in\mathrm{SU}(2)\Big\},\] \[\mathcal{I}_{t}\equiv\Big\{(\tau,\rho,t^{\boldsymbol{A}}{}_{\boldsymbol{B}})\,|\,\tau_{\star}\leqslant\tau\leqslant t,\;\rho=0,\;t^{\boldsymbol{A}}{}_{\boldsymbol{B}}\in\mathrm{SU}(2)\Big\},\] and \(\mathcal{S}_{\star}\equiv\mathcal{S}_{\tau_{\star}}\). A schematic depiction of these sets is given in Figure 5. Figure 5: Schematic depiction of the domain used in the construction of estimates controlling the solutions to the spin-\(s\) equations near \(\mathscr{I}^{+}\) in terms of Cauchy initial data. ### Construction of estimates Given non-negative integers \(p\), \(p^{\prime}\), \(q\), \(q^{\prime}\) and a multiindex \(\alpha=(\alpha_{1},\alpha_{2},\alpha_{3})\), we consider the differential operators \[D\equiv D^{q,p,\alpha}\equiv\partial_{\tau}^{q}\partial_{\rho}^{p}\boldsymbol{Z}^{\alpha}\quad\text{and}\quad D^{\prime}\equiv D^{q^{\prime},p^{\prime},\alpha}\equiv\partial_{\tau}^{q^{\prime}}\partial_{\rho}^{p^{\prime}}\boldsymbol{Z}^{\alpha},\] where \(\boldsymbol{Z}^{\alpha}\) denotes the vector fields on \(\mathrm{SU}(2)\) introduced in Appendix A.2.2. We then have the following estimates controlling the solutions to the spin-\(s\) equations up to \(\mathscr{I}^{+}\). _Proposition 3.1_.: Let \(\tau_{\star}\in(-1,1)\), \(t>\tau_{\star}\), and consider the field \(\phi_{A_{1}\cdots A_{2s}}\) satisfying equations (33a) and (33b) in the region \(\mathcal{N}_{t}\). Let \((p,m)\in\mathbb{N}\times\mathbb{N}\) be such that \[p>m+s,\] and suppose that \[\sum_{k=0}^{2s}\int_{\mathcal{S}_{\star}}\sum_{q^{\prime}+p^{\prime}+|\alpha|\leqslant m}|D^{\prime}(\partial_{\rho}^{p}\phi_{k})|^{2}\;\mathrm{d}\rho\wedge\mathrm{d}\mu<\infty.\] Then there exists a constant \(C_{p,m,s}>0\) which is independent of \(t\) such that for all \(0\leqslant k\leqslant 2s\) \[\|\partial_{\rho}^{p}\phi_{k}\|_{H^{m}(\mathcal{N}_{t})}^{2}\equiv\int_{\mathcal{N}_{t}}\sum_{q^{\prime}+p^{\prime}+|\alpha|\leqslant m}|D^{\prime}(\partial_{\rho}^{p}\phi_{k})|^{2}\,\mathrm{dv}\leqslant C_{p,m,s}\sum_{k=0}^{2s}\int_{\mathcal{S}_{\star}}\sum_{q^{\prime}+p^{\prime}+|\alpha|\leqslant m}|D^{\prime}(\partial_{\rho}^{p}\phi_{k})|^{2}\;\mathrm{d}\rho\wedge\mathrm{d}\mu.
\tag{5}\] Proof.: Applying \(D\) to the equations (33a) and (33b), multiplying by \(\overline{D\phi_{k}}\) and \(\overline{D\phi_{k+1}}\) respectively, and summing the real parts, one obtains the identity \[2\operatorname{Re}\left(\overline{D\phi_{k+1}}DA_{k}[\phi]+\overline{D\phi_{ k}}DB_{k}[\phi]\right)=0.\] Commuting \(D\) into the operators \(A_{k}[\phi]\) and \(B_{k}[\phi]\), this may be rewritten as \[\begin{split} 0&=\partial_{\tau}\left((1+\tau)|D\phi_{k+1} |^{2}+(1-\tau)|D\phi_{k}|^{2}\right)+\partial_{\rho}\left(-\rho|D\phi_{k+1}|^{2 }+\rho|D\phi_{k}|^{2}\right)\\ &\quad-\mathbf{Z}^{\alpha}\mathbf{X}_{+}(\partial_{\tau}^{q}\partial_{ \rho}^{p}\phi_{k})\mathbf{Z}^{\alpha}(\partial_{\tau}^{q}\partial_{\rho}^{p}\bar{ \phi}_{k+1})-\mathbf{Z}^{\alpha}(\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k})\bm {Z}^{\alpha}\mathbf{X}_{+}(\partial_{\tau}^{q}\partial_{\rho}^{p}\bar{\phi}_{k+1}) \\ &\quad-\mathbf{Z}^{\alpha}\mathbf{X}_{-}(\partial_{\tau}^{q}\partial_{ \rho}^{p}\bar{\phi}_{k})\mathbf{Z}^{\alpha}(\partial_{\tau}^{q}\partial_{\rho}^{p} \phi_{k+1})-\mathbf{Z}^{\alpha}(\partial_{\tau}^{q}\partial_{\rho}^{p}\bar{\phi}_{ k})\mathbf{Z}^{\alpha}\mathbf{X}_{-}(\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k+1}) \\ &\quad+2(k+1-s+q-p)|D\phi_{k+1}|^{2}+2(k-s-q+p)|D\phi_{k}|^{2}. \end{split} \tag{6}\] It is worth noting at this stage that the numerical factors in the last two terms in (6), arising from the explicit appearance of the coordinates \(\tau\) and \(\rho\) in equations (33a) and (33b), are a key aspect of the calculation. In particular, their signs may be manipulated by choosing the values of \(q\) and \(p\) appropriately. We now integrate (6) over \(\mathcal{N}_{t}\) (cf. Figure 5) with respect to the _Euclidean_ volume element \(\mathrm{dv}=\mathrm{d}\tau\wedge\mathrm{d}\rho\wedge\mathrm{d}\mu\), where \(\mathrm{d}\mu\) is the Haar measure on \(\mathrm{SU}(2)\). We have \[\begin{split} 0&=\int_{\mathcal{N}_{t}}\left(\begin{array}{ c}\partial_{\tau}\\ \partial_{\rho}\end{array}\right)\cdot\left(\begin{array}{c}(1+\tau)|D\phi_{k+1 }|^{2}+(1-\tau)|D\phi_{k}|^{2}\\ -\rho|D\phi_{k+1}|^{2}+\rho|D\phi_{k}|^{2}\end{array}\right)\mathrm{dv}\\ &\quad-\int_{\mathcal{N}_{t}}\left(\mathbf{Z}^{\alpha}\mathbf{X}_{+}( \partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k})\mathbf{Z}^{\alpha}(\partial_{\tau} ^{q}\partial_{\rho}^{p}\bar{\phi}_{k+1})+\mathbf{Z}^{\alpha}(\partial_{\tau}^{q} \partial_{\rho}^{p}\phi_{k})\mathbf{Z}^{\alpha}\mathbf{X}_{+}(\partial_{\tau}^{q} \partial_{\rho}^{p}\bar{\phi}_{k+1})\right.\\ &\quad\quad+\mathbf{Z}^{\alpha}\mathbf{X}_{-}(\partial_{\tau}^{q}\partial_{ \rho}^{p}\bar{\phi}_{k})\mathbf{Z}^{\alpha}(\partial_{\tau}^{q}\partial_{\rho}^{p} \phi_{k+1})+\mathbf{Z}^{\alpha}(\partial_{\tau}^{q}\partial_{\rho}^{p}\bar{\phi}_{ k})\mathbf{Z}^{\alpha}\mathbf{X}_{-}(\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k+1}) \right)\mathrm{dv}\\ &\quad+2(k+1-s+q-p)\int_{\mathcal{N}_{t}}|D\phi_{k+1}|^{2}\, \mathrm{dv}+2(k-s-q+p)\int_{\mathcal{N}_{t}}|D\phi_{k}|^{2}\,\mathrm{dv}.\end{split}\] Using Lemma A.1, it is straightforward to check that the second integral in the above equality will vanish when summed over \(|\alpha|\leqslant m^{\prime}\) for a given \(m^{\prime}\); as we will employ such a summation shortly, we simply write \(\operatorname{angular}(\alpha)\) to denote the second integral in the above. 
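As an illustrative aside, it is perhaps worth recording explicitly how the signs of the zeroth-order terms in (6) are controlled: for \(q=0\) the coefficient of \(|D\phi_{k}|^{2}\) is \[2(k-s-q+p)\big|_{q=0}=2(k-s+p)\geqslant 0\quad\text{for all }0\leqslant k\leqslant 2s-1\ \text{ whenever }p\geqslant s,\] and it is this freedom to take \(p\) large which is exploited below through the hypothesis \(p>m+s\).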
Next, using the Euclidean divergence theorem, one obtains \[\begin{split} 0&=\int_{\mathcal{S}_{t}}\left((1+t)|D\phi_{k+1 }|^{2}+(1-t)|D\phi_{k}|^{2}\right)\mathrm{d}\rho\wedge\mathrm{d}\mu\\ &\quad-\int_{\mathcal{S}_{*}}\left((1+\tau_{\star})|D\phi_{k+1}|^{ 2}+(1-\tau_{\star})|D\phi_{k}|^{2}\right)\mathrm{d}\rho\wedge\mathrm{d}\mu\\ &\quad+\int_{\mathcal{I}_{t}}\rho\left(|D\phi_{k}|^{2}-|D\phi_{k+1 }|^{2}\right)\mathrm{d}\tau\wedge\mathrm{d}\mu\\ &\quad+\int_{\mathcal{B}_{t}}\left(\begin{array}{c}(1+\tau)|D \phi_{k+1}|^{2}+(1-\tau)|D\phi_{k}|^{2}\\ -\rho|D\phi_{k+1}|^{2}+\rho|D\phi_{k}|^{2}\end{array}\right)\cdot\nu\left( \begin{array}{c}\rho\\ 1+\tau\end{array}\right)\mathrm{d}\mathcal{B}+\operatorname{angular}(\alpha)\\ &\quad+2(p-q+k-s)\int_{\mathcal{N}_{t}}|D\phi_{k}|^{2}\, \mathrm{dv}-2(p-q-k-1+s)\int_{\mathcal{N}_{t}}|D\phi_{k+1}|^{2}\,\mathrm{dv}, \end{split}\] where \(\mathrm{d}\mathcal{B}\) is the induced measure on \(\mathcal{B}_{t}\), \(\nu\equiv(\rho^{2}+(1+\tau)^{2})^{-1/2}\) is a normalisation factor for the outward normal to \(\mathcal{B}_{t}\), and we note that the integral over \(\mathcal{I}_{t}\) vanishes due to the factor of \(\rho\) in the integrand. Further, the integral over \(\mathcal{B}_{t}\) simplifies to just \(\int_{\mathcal{B}_{t}}2\nu\rho|D\phi_{k}|^{2}\,\mathrm{d}\mathcal{B}\), and in particular is manifestly non-negative. Altogether therefore \[\int_{\mathcal{S}_{t}}\left((1+t)|D\phi_{k+1}|^{2}+(1-t)|D\phi_{k} |^{2}\right)\mathrm{d}\rho\wedge\mathrm{d}\mu+2(p-q+k-s)\int_{\mathcal{N}_{t}} |D\phi_{k}|^{2}\,\mathrm{d}\mathrm{v}\] \[\leqslant\int_{\mathcal{S}_{\star}}\left((1+\tau_{\star})|D\phi_{ k+1}|^{2}+(1-\tau_{\star})|D\phi_{k}|^{2}\right)\mathrm{d}\rho\wedge\mathrm{d} \mu+2(p-q-k-1+s)\int_{\mathcal{N}_{t}}|D\phi_{k+1}|^{2}\,\mathrm{dv}\] \[-\mathrm{angular}(\alpha).\] We now perform the relabelling \[q\longrightarrow q^{\prime},\qquad p\longrightarrow p^{\prime}+p,\] under which \(D\longrightarrow D^{\prime}\partial_{\rho}^{p}\), where \(D^{\prime}=\partial_{\tau}^{q^{\prime}}\partial_{\rho}^{p^{\prime}}\,\mathbf{Z}^ {\alpha}\), and sum both sides over \[\mathcal{U}\equiv\left\{(q^{\prime},p^{\prime},\alpha)\,|\,q^{\prime}+p^{ \prime}+|\alpha|\leqslant m\right\}.\] Then \(\sum_{\mathcal{U}}\mathrm{angular}(\alpha)=0\), and we obtain \[(1+t) \sum_{\mathcal{U}}\int_{\mathcal{S}_{t}}|D^{\prime}(\partial_{ \rho}^{p}\phi_{k+1})|^{2}\,\mathrm{d}\rho\wedge\mathrm{d}\mu+(1-t)\sum_{ \mathcal{U}}\int_{\mathcal{S}_{t}}|D^{\prime}(\partial_{\rho}^{p}\phi_{k})|^ {2}\,\mathrm{d}\rho\wedge\mathrm{d}\mu\] \[+2\sum_{\mathcal{U}}(p^{\prime}+p-q^{\prime}+k-s)\int_{\mathcal{ N}_{t}}|D^{\prime}(\partial_{\rho}^{p}\phi_{k})|^{2}\,\mathrm{dv}\] \[\leqslant(1+\tau_{\star})\sum_{\mathcal{U}}\int_{\mathcal{S}_{ \star}}|D^{\prime}(\partial_{\rho}^{p}\phi_{k+1}|^{2}\,\mathrm{d}\rho\wedge \mathrm{d}\mu+(1-\tau_{\star})\sum_{\mathcal{U}}\int_{\mathcal{S}_{\star}}|D^{ \prime}(\partial_{\rho}^{p}\phi_{k})|^{2}\,\mathrm{d}\rho\wedge\mathrm{d}\mu\] \[+2\sum_{\mathcal{U}}(p^{\prime}+p-q^{\prime}-k-1+s)\int_{\mathcal{ N}_{t}}|D^{\prime}(\partial_{\rho}^{p}\phi_{k+1})|^{2}\,\mathrm{dv}.\] We now choose \(p\) and \(m\) such that \[p>m+s\] and use simple uniform bounds for the numerical factors in the above, \[(1+t) \sum_{\mathcal{U}}\int_{\mathcal{S}_{t}}|D^{\prime}(\partial_{ \rho}^{p}\phi_{k+1})|^{2}\,\mathrm{d}\rho\wedge\mathrm{d}\mu+(1-t)\sum_{ \mathcal{U}}\int_{\mathcal{S}_{t}}|D^{\prime}(\partial_{\rho}^{p}\phi_{k})|^ {2}\,\mathrm{d}\rho\wedge\mathrm{d}\mu \tag{7}\] 
\[+2(p-m-s+k)\sum_{\mathcal{U}}\int_{\mathcal{N}_{t}}|D^{\prime}( \partial_{\rho}^{p}\phi_{k})|^{2}\,\mathrm{dv}\] \[\leqslant(1+\tau_{\star})\sum_{\mathcal{U}}\int_{\mathcal{S}_{ \star}}|D^{\prime}(\partial_{\rho}^{p}\phi_{k+1})|^{2}\,\mathrm{d}\rho\wedge \mathrm{d}\mu+(1-\tau_{\star})\sum_{\mathcal{U}}\int_{\mathcal{S}_{\star}}|D^{ \prime}(\partial_{\rho}^{p}\phi_{k})|^{2}\,\mathrm{d}\rho\wedge\mathrm{d}\mu\] \[+2(p+m+s-1)\sum_{\mathcal{U}}\int_{\mathcal{N}_{t}}|D^{\prime}( \partial_{\rho}^{p}\phi_{k+1})|^{2}\,\mathrm{dv}.\] With the restriction \(p>m+s\) on \(p\) and \(m\), the bulk integral on the left-hand side is positive, so we deduce for \(0\leqslant k\leqslant 2s-1\) that \[\int_{\mathcal{S}_{t}}\sum_{\mathcal{U}}|D^{\prime}(\partial_{ \rho}^{p}\phi_{k+1})|^{2}\,\mathrm{d}\rho\wedge\mathrm{d}\mu \lesssim\int_{\mathcal{S}_{\star}}\sum_{\mathcal{U}}\left(|D^{\prime}( \partial_{\rho}^{p}\phi_{k+1})|^{2}+|D^{\prime}(\partial_{\rho}^{p}\phi_{k})|^ {2}\right)\mathrm{d}\rho\wedge\mathrm{d}\mu \tag{8}\] \[\qquad\qquad\qquad\qquad\qquad\qquad+2(p+m+s-1)\int_{\tau_{\star} }^{t}\left(\int_{\mathcal{S}_{\tau}}\sum_{\mathcal{U}}|D^{\prime}(\partial_{ \rho}^{p}\phi_{k+1})|^{2}\,\mathrm{d}\rho\wedge\mathrm{d}\mu\right)\mathrm{d}\tau.\] The inequality (8) provides an estimate for all components of \(\phi_{A_{1}\cdots A_{2s}}\) except \(\phi_{0}\), the incoming radiation field. For this, we note that (7) also implies \[2(p-m-s)\int_{\mathcal{N}_{t}}\sum_{\mathcal{U}}|D^{\prime}( \partial_{\rho}^{p}\phi_{0})|^{2}\,\mathrm{d}\mathrm{v} \lesssim\int_{\mathcal{S}_{*}}\sum_{\mathcal{U}}\left(|D^{\prime}( \partial_{\rho}^{p}\phi_{0})|^{2}+|D^{\prime}(\partial_{\rho}^{p}\phi_{1})|^{2 }\right)\mathrm{d}\rho\wedge\mathrm{d}\mu \tag{9}\] \[\quad+\int_{\mathcal{N}_{t}}\sum_{\mathcal{U}}|D^{\prime}( \partial_{\rho}^{p}\phi_{1})|^{2}\,\mathrm{d}\mathrm{v},\] so that the norm \(\|\partial_{\rho}^{p}\phi_{0}\|_{H^{m}(\mathcal{N}_{t})}^{2}\) will be controlled if the norm \(\|\partial_{\rho}^{p}\phi_{1}\|_{H^{m}(\mathcal{N}_{t})}^{2}\) is controlled. Now applying Gronwall's inequality to (8), we obtain \[\int_{\mathcal{S}_{t}}\sum_{\mathcal{U}}|D^{\prime}(\partial_{\rho}^{p}\phi_{ k+1})|^{2}\,\,\mathrm{d}\rho\wedge\mathrm{d}\mu\lesssim_{p,m,s}\int_{ \mathcal{S}_{*}}\sum_{\mathcal{U}}\left(|D^{\prime}(\partial_{\rho}^{p}\phi_{ k+1})|^{2}+|D^{\prime}(\partial_{\rho}^{p}\phi_{k})|^{2}\right)\mathrm{d} \rho\wedge\mathrm{d}\mu,\] which, upon integration, gives \[\int_{\mathcal{N}_{t}}\sum_{\mathcal{U}}|D^{\prime}(\partial_{\rho}^{p}\phi_{ k+1})|^{2}\,\mathrm{d}\mathrm{v}\lesssim_{p,m,s}\int_{\mathcal{S}_{*}}\sum_{ \mathcal{U}}\left(|D^{\prime}(\partial_{\rho}^{p}\phi_{k+1})|^{2}+|D^{\prime} (\partial_{\rho}^{p}\phi_{k})|^{2}\right)\mathrm{d}\rho\wedge\mathrm{d}\mu\] for \(0\leqslant k\leqslant 2s-1\). 
This, combined with (9), then gives the estimate for \(\phi_{0}\), \[\int_{\mathcal{N}_{t}}\sum_{\mathcal{U}}|D^{\prime}(\partial_{\rho}^{p}\phi_{ 0})|^{2}\,\mathrm{d}\mathrm{v}\lesssim_{p,m,s}\int_{\mathcal{S}_{*}}\sum_{ \mathcal{U}}\left(|D^{\prime}(\partial_{\rho}^{p}\phi_{1})|^{2}+|D^{\prime}( \partial_{\rho}^{p}\phi_{0})|^{2}\right)\mathrm{d}\rho\wedge\mathrm{d}\mu.\] We therefore conclude that for all \(0\leqslant k\leqslant 2s\) and for \(p>m+s\) \[\|\partial_{\rho}^{p}\phi_{k}\|_{H^{m}(\mathcal{N}_{t})}^{2}\leqslant C_{p,m,s }\sum_{k=0}^{2s}\int_{\mathcal{S}_{*}}\sum_{\mathcal{U}}|D^{\prime}(\partial_ {\rho}^{p}\phi_{k})|^{2}\,\,\mathrm{d}\rho\wedge\mathrm{d}\mu, \tag{5}\] where in particular the constant \(C_{p,m,s}\) is independent of \(t\). _Remark 1_.: Observe that the right-hand side of (5) in principle contains large numbers of derivatives in \(\tau\). Using the spin-\(s\) equations (33a) and (33b), however, these can be eliminated entirely at the expense of producing derivatives in \(\rho\) and along \(\mathrm{SU}(2)\). That is, the right-hand side may be estimated by the initial data and its tangential derivatives on \(\mathcal{S}_{*}\). _Remark 2_.: As the right-hand side of (5) does not depend on \(t\), one has that the left-hand side is uniformly bounded as \(t\to 1\). Therefore for any \(t\in(-1,1]\), as the boundary \(\partial\mathcal{N}_{t}\) of the 5-dimensional domain \(\mathcal{N}_{t}\) is Lipschitz, the Sobolev embedding theorem gives for \(m,r\in\mathbb{N}\), \(\alpha\in[0,1)\) such that \(m\geqslant r+\alpha+\frac{5}{2}\) the continuous embedding \[H^{m}(\mathcal{N}_{t})\hookrightarrow C^{r,\alpha}(\mathcal{N}_{t}).\] In particular, if \(\alpha\leqslant\frac{1}{2}\) we have \[H^{r+3}(\mathcal{N}_{t})\hookrightarrow C^{r,\alpha}(\mathcal{N}_{t}).\] Note that it is important here that \(\mathcal{N}_{t}\) is an open set, i.e. we consider \(\mathcal{N}_{t}\) without its boundary. ### Asymptotic expansions near \(\mathcal{I}\) Using estimate (5) and the Sobolev embedding theorem as described above, one finds that for any \(0\leqslant k\leqslant 2s\), \(r\in\mathbb{N}\), and \(0\leqslant\alpha\leqslant\frac{1}{2}\), \[\partial_{\rho}^{p}\phi_{k}\in C^{r,\alpha}(\mathcal{N}_{1})\quad\text{for} \quad p\geqslant r+s+4,\] provided the data on \(\mathcal{S}_{\star}\) is sufficiently regular, as per Proposition 3.1. This allows one to ensure the desired regularity of the solution near \(\mathscr{I}^{+}\) by providing sufficiently regular data on \(\mathcal{S}_{\star}\). Now, using the wave equations of Appendix C on \(\mathcal{I}\), one may solve for the functions \[\phi_{k}^{(p^{\prime})}(\tau,t^{\boldsymbol{A}}_{\boldsymbol{B}})\equiv\partial _{\rho}^{p^{\prime}}\phi_{k}|_{\mathcal{I}},\] \(0\leqslant k\leqslant 2s\), on \(\mathcal{I}\) for any finite order \(p^{\prime}\in\mathbb{N}\cup\{0\}\). The functions \(\phi_{k}^{(p^{\prime})}\) are computable explicitly as a consequence of the fact that the cylinder at spatial infinity \(\mathcal{I}\) is a total characteristic of the spin-\(s\) equations. 
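Briefly, the reason is the following: the terms \(\rho\partial_{\rho}\phi_{k}\) and \(\rho\partial_{\rho}\phi_{k+1}\) in (4a) and (4b) vanish on \(\mathcal{I}=\{\rho=0\}\), so that the restricted equations \[(1+\tau)\partial_{\tau}\phi_{k+1}^{(0)}-\boldsymbol{X}_{+}\phi_{k}^{(0)}+(k+1-s)\phi_{k+1}^{(0)}=0,\qquad(1-\tau)\partial_{\tau}\phi_{k}^{(0)}-\boldsymbol{X}_{-}\phi_{k+1}^{(0)}+(k-s)\phi_{k}^{(0)}=0,\] contain no derivatives transverse to \(\mathcal{I}\); applying \(\partial_{\rho}^{p^{\prime}}\) to (4a) and (4b) before restricting to \(\rho=0\) produces analogous intrinsic transport equations for the \(\phi_{k}^{(p^{\prime})}\). This is the sense in which \(\mathcal{I}\) is a total characteristic.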
Regarding these functions as functions on \(\mathcal{N}_{1}\) which are independent of \(\rho\), one may integrate \(\partial_{\rho}^{p}\phi_{k}\) with respect to \(\rho\) to obtain for \(p\geqslant r+s+4\) the expansion \[\phi_{k}=\sum_{p^{\prime}=|k-2s|}^{p-1}\frac{1}{p^{\prime}!}\phi_{k}^{(p^{ \prime})}\rho^{p^{\prime}}+J^{p}(\partial_{\rho}^{p}\phi_{k}) \tag{10}\] on \(\mathcal{N}_{1}\), where \(J\) denotes the operator \[J\,:\,f\mapsto J(f)=\int_{0}^{\rho}f(\tau,\rho^{\prime},t^{\boldsymbol{A}}_{ \boldsymbol{B}})\,\mathrm{d}\rho^{\prime}.\] It follows then that \[J^{p}(\partial_{\rho}^{p}\phi_{k})=\phi_{k}-\sum_{p^{\prime}=0}^{p-1}\frac{1}{ p^{\prime}!}\phi_{k}^{(p^{\prime})}\rho^{p^{\prime}}\in C^{r,\alpha}(\mathcal{N }_{1})\quad\text{for}\quad p\geqslant r+s+4 \tag{11}\] for any given \(r\in\mathbb{N}\). One therefore obtains an expansion of the solution to the spin-\(s\) equations in terms of the explicitly known functions \(\phi_{k}^{(p^{\prime})}\), \(0\leqslant k\leqslant 2s\), \(0\leqslant p^{\prime}\leqslant p-1\), and a remainder of prescribed smoothness. We call such expansions _F-expansions_. The details of the explicit computation of the terms in the F-expansions is given in Appendix C. _Remark 3_.: Notice that the functions \(\phi_{k}^{(p^{\prime})}\) completely control whether the solution \(\phi\) extends smoothly to \(\mathscr{I}^{+}\) or acquires logarithmic singularities there. Since the gauge in our construction is defined in terms of smooth conformally invariant structures (conformal geodesics and associated conformal Gaussian coordinates, see e.g. [10]) and extends smoothly through \(\mathscr{I}^{\pm}\), these singularities are not a result of the choice of gauge. Rather, they are generated at all orders by the interplay between the structure of the data on \(\mathcal{S}_{\star}\) near \(\mathcal{I}\) and the nature of the evolution along \(\mathcal{I}\). _Remark 4_.: Inspecting the proof of Proposition 3.1, one notices that the bound in (11) can in fact be improved to \[p\geqslant r+s+4-k,\] provided one is prepared to consider the regularity of the solution component-by-component. In particular, setting \(p=0=r\) shows that the components \(\phi_{k\geqslant s+4}\) are always continuous. This set of components is non-empty for \(s\geqslant 4\). ### Existence of solutions In this subsection we provide a brief discussion of the argument showing the existence of solutions to the spin-\(s\) equations on the domain \(\mathcal{N}_{1}\). The argument is a classical _last slice argument_ which combines the local existence result for symmetric hyperbolic systems with a contradiction argument. As we are looking for solutions of the form (10) and the formal expansions are explicitly known, we only need to argue the existence of the derivatives \(\partial_{\rho}^{p}\phi_{k}\). Given that the components \(\phi_{k}\) satisfy a symmetric hyperbolic system (cf. (32)), it follows then that \(\partial_{\rho}^{p}\phi_{k}\) also satisfy a symmetric hyperbolic system with the same structural properties as (32). Now, assume that on \(\mathcal{S}_{\star}\) one has \[(\partial_{\rho}^{p}\phi_{k})_{\star}\in H^{m}(\mathcal{S}_{\star}),\qquad p>m +s.\] The existence theorem for symmetric hyperbolic systems (see e.g. [Kat75]) then shows that there exists a \(T>0\) such that \[\partial_{\rho}^{p}\phi_{k}(\tau,\cdot)\in H^{m}(\mathcal{S}_{\tau}),\qquad 0 \leqslant\tau<T.\] Moreover, the dependence on \(\tau\) of this inclusion is continuous. 
Assume now the existence of a time \(\tau_{*}\leqslant 1\) (i.e. a _last slice_) such that \[\partial_{\rho}^{p}\phi_{k}(\tau,\cdot)\not\in H^{m}(\mathcal{S}_{\tau})\quad \text{for}\quad\tau\in[\tau_{\star},1].\] It follows then that \[\int_{\tau_{*}}^{1}\|\partial_{\rho}^{p}\phi_{k}\|_{H^{m}(\mathcal{S}_{t})}^{2 }\,\mathrm{d}t=\infty.\] As this integral is bounded by \(\|\partial_{\rho}^{p}\phi_{k}\|_{H^{m}(\mathcal{N}_{1})}\), the existence of a last slice contradicts the fact that \(\partial_{\rho}^{p}\phi_{k}\in H^{m}(\mathcal{N}_{1})\), which is true by Proposition 3.1. ## 4 Asymptotic characteristic initial value problem for the spin-\(s\) equations The purpose of this section is to provide a brief discussion of the formulation of the _asymptotic characteristic initial value problem_ for the massless spin-\(s\) equations (3). By this we understand a setting in which suitable initial data is prescribed on a portion of past null infinity \(\mathscr{I}^{-}\) and an incoming null hypersurface \(\underline{\mathcal{B}}_{\varepsilon}\) intersecting \(\mathscr{I}^{-}\) at a cut \[\mathcal{C}_{\star} \equiv\mathscr{I}^{-}\cap\underline{\mathcal{B}}_{\varepsilon}\] \[=\{(\tau,\rho)\times\mathbb{S}^{2}\in\mathcal{N}\mid\tau=-1,\ \ \rho=\rho_{\star}\},\] with \(\rho_{\star}\) a constant. ### Freely specifiable data on \(\mathscr{I}^{-}\) Equations (4a) and (4b) are, respectively, transport equations along outgoing and incoming null geodesics in the Minkowski spacetime, written in the F-gauge. In particular, at \(\mathscr{I}^{-}\) one has that \[(A_{k})|_{\mathscr{I}^{-}}=-\rho\partial_{\rho}(\phi_{k+1})|_{\mathscr{I}^{- }}-\mathbf{X}_{+}(\phi_{k})|_{\mathscr{I}^{-}}+(k+1-s)(\phi_{k+1})|_{\mathscr{I}^{ -}}=0,\qquad k=0,\,\dots,\,2s-1, \tag{12}\] where \((\phi_{k})|_{\mathscr{I}^{-}}\) denotes the restriction of \(\phi_{k}\) to \(\mathscr{I}^{-}\); (12) is a set of \(2s\) transport equations along the null generators of \(\mathscr{I}^{-}\). The key observation is that only \((\phi_{0})|_{\mathscr{I}^{-}}\) is not fixed by these transport equations. This allows us to identify \(\phi_{0}\) as the freely specifiable data on \(\mathscr{I}^{-}\). That is, given \((\phi_{0})|_{\mathscr{I}^{-}}\), one can solve the ODEs (12) one by one, at least in a neighbourhood of \(\mathcal{C}_{\star}\), starting with the equation \(A_{0}|_{\mathscr{I}^{-}}=0\). One obtains \((\phi_{k+1})|_{\mathscr{I}^{-}}\) for \(k=0,\,\dots,\,2s-1\), provided that the initial values at the cut \((\phi_{k+1})_{\star}\equiv(\phi_{k+1})|_{\mathcal{C}_{\star}}\) are also known. Next, let \(\underline{\mathcal{B}}_{\varepsilon}\) be the incoming null hypersurface whose generators are tangent to the vector \(\mathbf{e_{00^{\prime}}}\) and which intersects \(\mathscr{I}^{-}\) at \(\mathcal{C}_{\star}\). On \(\underline{\mathcal{B}}_{\varepsilon}\) one has that \[B_{k}|_{\underline{\mathcal{B}}_{\varepsilon}}=\mathbf{e_{00^{\prime}}}(\phi_{k}) |_{\underline{\mathcal{B}}_{\varepsilon}}-\mathbf{X}_{-}(\phi_{k+1})|_{ \underline{\mathcal{B}}_{\varepsilon}}+(k-s)(\phi_{k})|_{\underline{\mathcal{ B}}_{\varepsilon}}=0 \tag{13}\] for \(k=0,\,\dots,\,2s-1\). These are transport equations along the null generators of \(\underline{\mathcal{B}}_{\varepsilon}\), in which only the component \((\phi_{2s})|_{\underline{\mathcal{B}}_{\varepsilon}}\) satisfies no differential condition. Accordingly, it can be identified with the freely specifiable data on \(\underline{\mathcal{B}}_{\varepsilon}\). 
Similarly, one can then solve the ODEs (13) in sequence (starting from \(B_{2s-1}\)) to obtain \((\phi_{k})|_{\underline{\mathcal{B}}_{\varepsilon}}\) for all \(0\leqslant k\leqslant 2s\), provided the initial values \((\phi_{k})|_{\star}\equiv(\phi_{k})|_{\mathcal{C}_{\star}}\), \(0\leqslant k\leqslant 2s-1\), at the cut \(\mathcal{C}_{\star}\) are known. We therefore have the following: _Lemma 4.1_.: The full set of initial data for the asymptotic characteristic initial value problem for the massless spin-\(s\) equations can be computed from the reduced data set consisting of \[\phi_{0}\quad\text{on}\quad\mathscr{I}^{-},\] \[\phi_{2s}\quad\text{on}\quad\underline{\mathcal{B}}_{\varepsilon}, \quad\text{and}\] \[\phi_{1},\,\ldots,\,\phi_{2s-1}\quad\text{on}\quad\mathcal{C}_{ \star}=\mathscr{I}^{-}\cap\underline{\mathcal{B}}_{\varepsilon}.\] ### Symmetric hyperbolicity and the local existence theorem For reasons that will become clear shortly, in order to discuss the existence of solutions to the asymptotic characteristic initial value problem in a neighbourhood of \(\mathcal{C}_{\star}\), it is convenient to make use of the evolution equations in a slightly different conformal gauge than the one discussed in Section 2.3. More precisely, we make use of the evolution equations (39) as given in Appendix B.4 and assume that the smooth function \(\mu\) is no longer identically equal to \(1\). A long but straightforward calculation shows that the spin-\(s\) equation (3) is equivalent to an evolution system of the form \[\mathbf{A}^{\mu}\partial_{\mu}\mathbf{\Phi}=\mathbf{B}\cdot\mathbf{\Phi} \tag{14}\] where \[\mathbf{A}^{\mu}\partial_{\mu}=\left(\begin{array}{cccccc}\sqrt{2} \hat{\boldsymbol{e}}_{\mathbf{00}^{\prime}}&-\mu\boldsymbol{X}_{-}&0&\cdots& \cdots&0\\ -\mu\boldsymbol{X}_{+}&2\partial_{\hat{\tau}}&-\mu\boldsymbol{X}_{-}&0&\cdots &0\\ 0&-\mu\boldsymbol{X}_{+}&2\partial_{\hat{\tau}}&-\mu\boldsymbol{X}_{-}&\ddots& \vdots\\ \vdots&\ddots&\ddots&\ddots&\ddots&0\\ 0&\cdots&0&-\mu\boldsymbol{X}_{+}&2\partial_{\hat{\tau}}&-\mu\boldsymbol{X}_ {-}\\ 0&\cdots&\cdots&0&-\mu\boldsymbol{X}_{+}&\sqrt{2}\hat{\boldsymbol{e}}_{\mathbf{1 1}^{\prime}}\end{array}\right),\] \[\mathbf{\Phi}=\big{(}\phi_{0},\,\ldots,\,\phi_{2s}\big{)}^{t},\] and \(\mathbf{B}\) is a constant \(2s\times 2s\) matrix. It can be readily verified that the above matrix is Hermitian. Moreover, \[\mathbf{A}^{0}\equiv\text{diag}\big{(}1-\kappa^{\prime}\hat{\tau},\,2,\,\ldots,\,2,\,1+\kappa^{\prime}\hat{\tau}\big{)}.\] This matrix is positive definite away from \(\mathcal{I}^{\pm}\). As \(\kappa^{\prime}=\mu^{\prime}\rho+\mu\) with \(\mu(0)=1\), but not identically \(1\) away from \(\rho=0\), it follows that \[\mathbf{A}^{0}|_{\mathcal{I}^{-}}=\text{diag}\big{(}2,\,2,\,\ldots,\,2,\,0 \big{)}. \tag{15}\] An analogous expression holds on \(\mathcal{I}^{+}\). Thus, _the system (14) is symmetric hyperbolic away from the critical sets \(\mathcal{I}^{\pm}\), but degenerates at \(\mathcal{I}^{\pm}\)._ _Remark 5_.: For the choice \(\mu=1\) corresponding to the basic F-gauge discussed in Section 2.2, one has that \[\mathbf{A}^{0}|_{\mathscr{I}^{-}}=\text{diag}\big{(}2,\,2,\,\ldots,\,2,\,0 \big{)}.\] That is, in this representation the corresponding symmetric hyperbolic system degenerates on the _whole_ of \(\mathscr{I}^{-}\), which creates problems when attempting to solve the system (14). 
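To illustrate the structure of (14), consider its smallest instance \(s=\tfrac{1}{2}\), for which \(\mathbf{\Phi}=(\phi_{0},\phi_{1})^{t}\) and the matrices above reduce to \[\mathbf{A}^{\mu}\partial_{\mu}=\left(\begin{array}{cc}\sqrt{2}\,\hat{\boldsymbol{e}}_{\mathbf{00}^{\prime}}&-\mu\boldsymbol{X}_{-}\\ -\mu\boldsymbol{X}_{+}&\sqrt{2}\,\hat{\boldsymbol{e}}_{\mathbf{11}^{\prime}}\end{array}\right),\qquad\mathbf{A}^{0}=\text{diag}\big(1-\kappa^{\prime}\hat{\tau},\,1+\kappa^{\prime}\hat{\tau}\big).\] For \(\mu\equiv 1\) one has \(\kappa^{\prime}=1\) and \(\hat{\tau}=\tau\), so that \(\mathbf{A}^{0}=\text{diag}(1-\tau,1+\tau)\) loses rank on the whole of \(\mathscr{I}^{-}=\{\tau=-1\}\) (and, likewise, on \(\mathscr{I}^{+}\)), in agreement with Remark 5; for a generic \(\mu\) the loss of rank is confined to the critical sets, in agreement with (15).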
The use of the more general gauge of Appendix B.4 shows that the degeneration of \(\mathbf{A}^{0}\) on the whole of \(\mathscr{I}^{-}\) in the case \(\mu=1\) is a _coordinate_ singularity. While this more singular representation (\(\mu=1\)) does not lend itself naturally to the discussion of local existence in a neighbourhood of \(\mathcal{C}_{\star}=\mathscr{I}^{-}\cap\underline{\mathcal{B}}_{\varepsilon}\), it gives rise to a straightforward construction of asymptotic initial data. Moreover, as we do in Sections 3 and 5, it allows us to construct robust estimates. Using the general theory set up in [11] (see also [12], SS12.5.3), one can therefore obtain the following local existence theorem. _Proposition 4.2_.: Given a smooth choice of reduced initial data as in Lemma 4.1, there exists a neighbourhood \(\mathcal{U}\subset D^{+}(\mathscr{I}^{-}\cup\underline{\mathcal{B}}_{\varepsilon})\) of \(\mathcal{C}_{\star}\) in which the asymptotic characteristic initial value problem for the massless spin-\(s\) equation has a unique smooth solution \(\phi_{k}\) for \(k=0,\dots,2s\), where \(D^{+}(S)\) denotes the future domain of dependence of the set \(S\subset\mathcal{N}\). In fact, using a strategy similar to that in [10], it is possible to ensure the existence of solutions along a narrow causal rectangle along \(\mathscr{I}^{-}\) bounded away from \(\mathcal{I}\) (see Figure 6). However, the degeneracy in the matrix \(\mathbf{A}^{0}\) at \(\mathcal{I}^{-}\) as given in (15) precludes one from obtaining a domain of existence which includes the past critical point (see Remark 6). In order to make this domain extension statement more precise, let us introduce some further notation. Given \((\tau_{\bullet},\rho_{\bullet})\in[-1,1]\times[0,\infty)\), let \[\mathcal{S}_{\tau_{\bullet},\rho_{\bullet}}\equiv\big{\{}(\tau,\rho,t^{ \boldsymbol{A}}_{\boldsymbol{B}})\in\mathcal{N}\mid\tau=\tau_{\bullet},\;\rho =\rho_{\bullet},\;t^{\boldsymbol{A}}_{\boldsymbol{B}}\in\mathrm{SU}(2)\big{\}}.\] For a given choice of \((\tau_{\bullet},\rho_{\bullet})\), the set \(\mathcal{S}_{\tau_{\bullet},\rho_{\bullet}}\) naturally projects to a 2-sphere via the Hopf map. Suppose that \((\tau_{\bullet},\rho_{\bullet})\) are chosen so that \(\mathcal{S}_{\tau_{\bullet},\rho_{\bullet}}\) lies in the chronological future of \(\mathcal{C}_{\star}=\mathcal{S}_{-1,\rho_{\star}}\). The _causal diamond_ defined by the sets \(\mathcal{S}_{\tau_{\bullet},\rho_{\bullet}}\) and \(\mathcal{C}_{\star}\) is given by \[\mathscr{D}_{\tau_{\bullet},\rho_{\bullet}}=J^{+}(\mathcal{C}_{\star})\cap J^ {-}(\mathcal{S}_{\tau_{\bullet},\rho_{\bullet}}).\] A schematic representation of an example of such a causal diamond \(\mathscr{D}_{\tau_{\bullet},\rho_{\bullet}}\) is provided in Figure 6. The set \(\mathcal{S}_{\tau_{\bullet},\rho_{\bullet}}\) can be thought of as the set of points of intersection of future-directed null geodesics emanating from \(\mathcal{S}_{-1,\rho_{\circ}}\subset\mathscr{I}^{-}\) with the future-directed null geodesics emanating from \(\mathcal{S}_{\tau_{\star},\rho_{\star}}\subset\underline{\mathcal{B}}_{\varepsilon}\), where \(\rho_{\circ}\), \(\tau_{\star}\), and \(\rho_{\star}\) depend on \((\tau_{\bullet},\rho_{\bullet})\). 
Conversely, given points \((-1,\rho_{\circ},t^{\boldsymbol{A}}_{\boldsymbol{B}})\in\mathscr{I}^{-}\) and \((\tau_{\star},\rho_{\star},t^{\boldsymbol{A}}_{\boldsymbol{B}})\in\underline{\mathcal{B}}_{\varepsilon}\), there exist coordinates \((\tau_{\bullet},\rho_{\bullet})\) depending on \((\rho_{\circ},\tau_{\star},\rho_{\star})\) such that \(\mathcal{S}_{\tau_{\bullet},\rho_{\bullet}}\) is the set of intersections of the future-directed null geodesics emanating from \(\mathcal{S}_{-1,\rho_{\circ}}\) and \(\mathcal{S}_{\tau_{\star},\rho_{\star}}\). In this notation one then has the following. _Proposition 4.3_.: Given \(\rho_{\circ}\in(0,\rho_{\star})\), consider smooth initial data for the asymptotic characteristic initial value problem for the spin-\(s\) equations on \(\underline{\mathcal{B}}_{\varepsilon}\cup\mathscr{I}^{-}_{[\rho_{\circ},\rho_{\star}]}\) where \[\mathscr{I}^{-}_{[\rho_{\circ},\rho_{\star}]}\equiv\{p\in\mathscr{I}^{-}\mid\rho(p)\in[\rho_{\circ},\rho_{\star}]\}.\] Then there exists a unique smooth solution to the massless spin-\(s\) field equations on the causal diamond \(\mathscr{D}_{\tau_{\bullet},\rho_{\bullet}}\). Figure 6: The domains of existence of Propositions 4.2 and 4.3. Observe that these domains do not include spatial infinity. _Remark 6_.: The conclusion of Proposition 4.3 ensures the existence of solutions to the spin-\(s\) equations on a causal diamond which can get arbitrarily close to spatial infinity, but cannot actually reach it. The development of estimates to deal with this degeneracy is the objective of the next section. ## 5 Estimates near \(\mathscr{I}^{-}\) In this section we construct estimates which allow us to control the solutions to the spin-\(s\) equations in terms of asymptotic characteristic initial data on past null infinity. At the core of these estimates is a combination of the methods developed in [10] to obtain an optimal existence result for the characteristic initial value problem, and the strategy adopted in Section 3 for the standard initial value problem near spatial infinity. We recall once more that the degeneracy of the equations at the critical point \(\mathcal{I}^{-}\) prevents a direct application of the results of [10], necessitating a different treatment of the region which includes spatial infinity. ### Estimating the radiation field The component \(\phi_{2s}\) (the _outgoing radiation field_) is the most problematic to estimate from \(\mathscr{I}^{-}\) in our setting, as the naive energy estimate for equation (33a) for \(k=2s-1\) degenerates at past null infinity \(\mathscr{I}^{-}=\{\tau=-1\}\). However, we may produce estimates analogous to those of Section 3.
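Concretely, the degeneration can be read off from the energy identity (6): the flux density through a level set of \(\tau\) is \[(1+\tau)|D\phi_{k+1}|^{2}+(1-\tau)|D\phi_{k}|^{2}\;\longrightarrow\;2|D\phi_{k}|^{2}\quad\text{as}\quad\tau\to-1,\] so that on \(\mathscr{I}^{-}\) the component \(\phi_{k+1}\) drops out of the flux altogether; in particular, for \(k=2s-1\) the radiation field \(\phi_{2s}\) is not controlled by a naive energy estimate in terms of data on \(\mathscr{I}^{-}\) alone. The argument below compensates for this by commuting \(\partial_{\tau}^{q}\partial_{\rho}^{p}\) through the equations, which shifts the zeroth-order coefficients in (6) in a favourable way.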
Given \(\rho_{\star}>0\), we consider the following hypersurfaces in \(\overline{\mathcal{N}}\): \[\mathscr{I}^{-}_{\rho_{\star}}\equiv\mathscr{I}^{-}\cap\{0\leqslant\rho\leqslant\rho_{\star}\},\] \[\underline{\mathcal{B}}_{\varepsilon}\equiv\Big\{(\tau,\rho,t^{\boldsymbol{A}}{}_{\boldsymbol{B}})\,|\,-1\leqslant\tau\leqslant-1+\varepsilon,\ \rho=\frac{\rho_{\star}}{1-\tau},\ t^{\boldsymbol{A}}{}_{\boldsymbol{B}}\in\mathrm{SU}(2)\Big\}\,,\] \[\mathcal{S}_{-1+\varepsilon}\equiv\Big\{(\tau,\rho,t^{\boldsymbol{A}}{}_{\boldsymbol{B}})\,|\,\tau=-1+\varepsilon,\ 0\leqslant\rho\leqslant\frac{\rho_{\star}}{2-\varepsilon},\ t^{\boldsymbol{A}}{}_{\boldsymbol{B}}\in\mathrm{SU}(2)\Big\}\,,\] \[\mathcal{I}_{\varepsilon}\equiv\Big\{(\tau,\rho,t^{\boldsymbol{A}}{}_{\boldsymbol{B}})\,|\,-1\leqslant\tau\leqslant-1+\varepsilon,\ \rho=0,\ t^{\boldsymbol{A}}{}_{\boldsymbol{B}}\in\mathrm{SU}(2)\Big\}.\] Observe that the set \(\underline{\mathcal{B}}_{\varepsilon}\) is a short incoming null hypersurface intersecting \(\mathscr{I}^{-}\) at \(\rho=\rho_{\star}\). Moreover, let \(\underline{\mathcal{N}}_{\varepsilon}\) denote the spacetime slab bounded by the hypersurfaces \(\underline{\mathcal{B}}_{\varepsilon}\), \(\mathscr{I}^{-}_{\rho_{\star}}\), \(\mathcal{S}_{-1+\varepsilon}\) and \(\mathcal{I}_{\varepsilon}\)--see Figure 7. As in the previous section, given \(m\geqslant 0\) we let \[\mathcal{U}\equiv\{(q^{\prime},p^{\prime},\alpha)\in\mathbb{N}\times\mathbb{N}\times\mathbb{N}^{3}\,:\,q^{\prime}+p^{\prime}+|\alpha|\leqslant m\}.\] Figure 7: We perform energy estimates from \(\mathscr{I}^{-}_{\rho_{\star}}\cup\underline{\mathcal{B}}_{\varepsilon}\) to a Cauchy surface \(\mathcal{S}_{-1+\varepsilon}\) for some \(\varepsilon\ll 1\). In terms of the above we have the following estimate for the outgoing radiation field. _Proposition 5.1_.: Let \(\varepsilon>0\). Suppose that for \(m\), \(q\), \(p\in\mathbb{N}\) and all \((q^{\prime},p^{\prime},\alpha)\in\mathbb{N}\times\mathbb{N}\times\mathbb{N}^{3}\) satisfying \[q^{\prime}+p^{\prime}+|\alpha|\leqslant m\quad\text{and}\quad m+p<s+q\] we have the bound \[\sum_{\mathcal{U}}\int_{\mathcal{S}_{\varepsilon}^{-}}|D^{\prime}(\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k})|^{2}\,\mathrm{d}\rho\wedge\mathrm{d}\mu+\sum_{\mathcal{U}}\int_{\underline{\mathcal{B}}_{\varepsilon}}|D^{\prime}(\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k})|^{2}\,\mathrm{d}\underline{\mathcal{B}}\leqslant\Omega_{\star} \tag{16}\] on the characteristic data for \(0\leqslant k\leqslant 2s\), for some \(\Omega_{\star}>0\). Additionally, assume that the bootstrap bound \[\|\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k}\|_{H^{m}(\underline{\mathcal{N}}_{\varepsilon})}^{2}<\Omega_{\star} \tag{17}\] holds for \(0\leqslant k\leqslant 2s-1\). Then there exists a constant \(C>0\) such that \[\|\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{2s}\|_{H^{m}(\underline{\mathcal{N}}_{\varepsilon})}^{2}\leqslant C\Omega_{\star}. \tag{18}\] _Remark 7_.: The bootstrap bound (17) is proven in Proposition 5.2 assuming only the bound (16).
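As a simple numerical illustration of the hypothesis \(m+p<s+q\) (an aside, not needed in what follows): for \(s=2\), \(p=0\) and \(m=3\) the condition requires \(q\geqslant 2\), and the conclusion (18) then controls \(\|\partial_{\tau}^{2}\phi_{2s}\|^{2}_{H^{3}(\underline{\mathcal{N}}_{\varepsilon})}\) in terms of the corresponding derivatives of the data in (16). The systematic trade-off between \(\tau\)-derivatives and regularity implicit in this condition is taken up in Section 5.3.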
Proof.: As before, we write \[D\equiv D^{q,p,\alpha}\equiv\partial_{\tau}^{q}\partial_{\rho}^{p}\mathbf{Z}^{ \alpha},\] and commute \(D\) into the equations (33a) and (33b) to arrive at the identity \[\begin{split} 0&=\partial_{\tau}\left((1+\tau)|D\phi_{k+1 }|^{2}+(1-\tau)|D\phi_{k}|^{2}\right)+\partial_{\rho}\left(-\rho|D\phi_{k+1}|^ {2}+\rho|D\phi_{k}|^{2}\right)\\ &\quad-\mathbf{Z}^{\alpha}\mathbf{X}_{+}(\partial_{\tau}^{q}\partial_{ \rho}^{p}\phi_{k})\mathbf{Z}^{\alpha}(\partial_{\tau}^{q}\partial_{\rho}^{p}\bar{ \phi}_{k+1})-\mathbf{Z}^{\alpha}(\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k})\bm {Z}^{\alpha}\mathbf{X}_{+}(\partial_{\tau}^{q}\partial_{\rho}^{p}\bar{\phi}_{k+1}) \\ &\quad-\mathbf{Z}^{\alpha}\mathbf{X}_{-}(\partial_{\tau}^{q}\partial_{ \rho}^{p}\bar{\phi}_{k})\mathbf{Z}^{\alpha}(\partial_{\tau}^{q}\partial_{\rho}^{p} \phi_{k+1})-\mathbf{Z}^{\alpha}(\partial_{\tau}^{q}\partial_{\rho}^{p}\bar{\phi}_{ k})\mathbf{Z}^{\alpha}\mathbf{X}_{-}(\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k+1}) \\ &\quad+2(k+1-s+q-p)|D\phi_{k+1}|^{2}+2(k-s-q+p)|D\phi_{k}|^{2}, \end{split} \tag{6}\] where \(0\leqslant k\leqslant 2s-1\). Integrating (6) over the region \(\underline{\mathcal{N}}_{\varepsilon}\) against the 5-form \(\mathrm{d}\mathrm{v}\equiv\mathrm{d}\tau\wedge\mathrm{d}\rho\wedge\mathrm{d}\mu\), we have \[\begin{split} 0&=\int_{\underline{\mathcal{N}}_{ \varepsilon}}\left(\begin{array}{c}\partial_{\tau}\\ \partial_{\rho}\end{array}\right)\cdot\left(\begin{array}{c}(1+\tau)|D\phi_{k +1}|^{2}+(1-\tau)|D\phi_{k}|^{2}\\ -\rho|D\phi_{k+1}|^{2}+\rho|D\phi_{k}|^{2}\end{array}\right)\mathrm{d}\mathrm{v }\\ &\quad+2(k-s-q+p)\int_{\underline{\mathcal{N}}_{\varepsilon}}|D \phi_{k}|^{2}\,\mathrm{d}\mathrm{v}+2(k+1-s+q-p)\int_{\underline{\mathcal{N}}_ {\varepsilon}}|D\phi_{k+1}|^{2}\,\mathrm{d}\mathrm{v}\\ &\quad-\int_{\underline{\mathcal{N}}_{\varepsilon}}\left(\mathbf{Z}^{ \alpha}\mathbf{X}_{+}(\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k})\mathbf{Z}^{ \alpha}(\partial_{\tau}^{q}\partial_{\rho}^{p}\bar{\phi}_{k+1})+\mathbf{Z}^{ \alpha}(\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k})\mathbf{Z}^{\alpha}\mathbf{X}_{ +}(\partial_{\tau}^{q}\partial_{\rho}^{p}\bar{\phi}_{k+1})\right.\\ &\quad\underbrace{\qquad+\mathbf{Z}^{\alpha}\mathbf{X}_{-}(\partial_{\tau}^{q} \partial_{\rho}^{p}\bar{\phi}_{k})\mathbf{Z}^{\alpha}(\partial_{\tau}^{q}\partial_{ \rho}^{p}\phi_{k+1})+\mathbf{Z}^{\alpha}(\partial_{\tau}^{q}\partial_{\rho}^{p} \bar{\phi}_{k})\mathbf{Z}^{\alpha}\mathbf{X}_{-}(\partial_{\tau}^{q}\partial_{\rho}^{p} \phi_{k+1})}_{\operatorname{angular}(\alpha)}.\end{split}\] Using Lemma A.1, as in Section 3 we will have \(\sum_{|\alpha|\leqslant m^{\prime}}\operatorname{angular}(\alpha)=0\) for any \(m^{\prime}\in\mathbb{N}\). We will therefore not write the angular terms out in detail in the following computations. 
Using the Euclidean divergence theorem, we then find \[\begin{split} 0&=\int_{\mathcal{S}_{-1+\varepsilon}} \varepsilon|D\phi_{k+1}|^{2}+(2-\varepsilon)|D\phi_{k}|^{2}\,\,\mathrm{d}\rho \wedge\mathrm{d}\mu-\int_{\mathcal{S}_{\varepsilon}^{-}}2|D\phi_{k}|^{2}\,\, \mathrm{d}\rho\wedge\mathrm{d}\mu\\ &\quad-\int_{\mathcal{I}_{\varepsilon}}\rho\left(-|D\phi_{k+1}|^{2 }+|D\phi_{k}|^{2}\right)\mathrm{d}\tau\wedge\mathrm{d}\mu\\ &\quad+\int_{\underline{\mathcal{B}}_{\star}}\left(\begin{array}[] {c}(1+\tau)|D\phi_{k+1}|^{2}+(1-\tau)|D\phi_{k}|^{2}\\ -\rho|D\phi_{k+1}|^{2}+\rho|D\phi_{k}|^{2}\end{array}\right)\cdot\nu\left( \begin{array}{c}-\rho\\ 1-\tau\end{array}\right)\mathrm{d}\underline{\mathcal{B}}+\operatorname{ angular}(\alpha)\\ &\quad+2(k-s-q+p)\int_{\underline{\mathcal{N}}_{\varepsilon}}|D\phi_{k}|^{2}\, \mathrm{d}\mathrm{v}+2(k+1-s+q-p)\int_{\underline{\mathcal{N}}_{\varepsilon}}|D \phi_{k+1}|^{2}\,\mathrm{d}\mathrm{v},\end{split} \tag{19}\] where \(\nu\equiv(\rho^{2}+(1-\tau)^{2})^{-1/2}\) is a normalisation factor for the outward normal to \(\underline{\mathcal{B}}_{\varepsilon}\), \(\mathrm{d}\underline{\mathcal{B}}\) is the induced measure on \(\underline{\mathcal{B}}_{\varepsilon}\), and we note that the integral over \(\mathcal{I}_{\varepsilon}\) vanishes due to the factor of \(\rho\) in the integrand. In a slight deviation from the computations of Section 3, here we perform the relabeling \[q\longrightarrow q+q^{\prime},\qquad p\longrightarrow p+p^{\prime},\] so that \(D\longrightarrow D^{\prime}\partial_{\tau}^{q}\partial_{\rho}^{p}\), where \(D^{\prime}=\partial_{\tau}^{q^{\prime}}\partial_{\rho}^{p^{\prime}}\,\mathbf{Z}^{\alpha}\), and sum both sides of the above equality over \(\mathbb{U}\). The integrals over angular terms vanish, and noting that the integral of \((2-\varepsilon)|D^{\prime}(\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k})|^ {2}\) over \(\mathcal{S}_{-1+\varepsilon}\) is non-negative, we have the estimate \[\varepsilon\sum_{\mathbb{U}}\int_{\mathcal{S}_{-1+\varepsilon}}|D ^{\prime}(\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k+1})|^{2}\ \mathrm{d}\rho\wedge\mathrm{d}\mu \leqslant 2\sum_{\mathbb{U}}\int_{\underline{\mathcal{S}}_{\varepsilon}^ {-}}|D^{\prime}(\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k})|^{2}\ \mathrm{d}\rho\wedge\mathrm{d}\mu\] \[+2\sum_{\mathbb{U}}\int_{\underline{\mathcal{B}}_{\varepsilon}} \nu\rho|D^{\prime}(\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k})|^{2}\ \mathrm{d}\underline{\mathcal{B}}\] \[+2\sum_{\mathbb{U}}(q+q^{\prime}-p^{\prime}-p+s-k)\int_{\underline {\mathcal{N}}_{\varepsilon}}|D^{\prime}(\partial_{\tau}^{q}\partial_{\rho}^{p} \phi_{k})|^{2}\ \mathrm{d}\nu\] \[+2\sum_{\mathbb{U}}(-q-q^{\prime}+p^{\prime}+p+s-k-1)\int_{ \underline{\mathcal{N}}_{\varepsilon}}|D^{\prime}(\partial_{\tau}^{q}\partial _{\rho}^{p}\phi_{k+1})|^{2}\ \mathrm{d}\nu\] \[\leqslant 2\sum_{\mathbb{U}}\int_{\mathcal{S}_{\varepsilon}^{-}}|D ^{\prime}(\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k})|^{2}\ \mathrm{d}\rho\wedge\mathrm{d}\mu\] \[+2\sum_{\mathbb{U}}\int_{\underline{\mathcal{B}}_{\varepsilon}} \nu\rho|D^{\prime}(\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k})|^{2}\ \mathrm{d} \underline{\mathcal{B}}\] \[+C^{(1)}_{m,p,s,k}\|\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k }\|^{2}_{H^{m}(\underline{\mathcal{N}}_{\varepsilon})}+C^{(2)}_{m,p,s,k}\| \partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k+1}\|^{2}_{H^{m}(\underline{ \mathcal{N}}_{\varepsilon})},\] where \[C^{(2)}_{m,p,s,k}\equiv 2(m-q+p+s-1-k)\] and 
\[\|\psi\|^{2}_{H^{m}(\underline{\mathcal{N}}_{\varepsilon})}\equiv\sum_{\mathcal{U}}\int_{\underline{\mathcal{N}}_{\varepsilon}}|D^{\prime}\psi|^{2}\,\mathrm{dv}.\] Recall now that we assumed that the bootstrap bound (17) holds--that is, we have that \[\|\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k}\|^{2}_{H^{m}(\underline{\mathcal{N}}_{\varepsilon})}\leqslant\Omega_{\star}\] for \(0\leqslant k\leqslant 2s-1\). Plugging in this bootstrap bound and assumption (16) into the above estimate, we obtain \[\varepsilon\sum_{\mathcal{U}}\int_{\mathcal{S}_{-1+\varepsilon}}|D^{\prime}(\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k+1})|^{2}\ \mathrm{d}\rho\wedge\mathrm{d}\mu\leqslant C\left(\Omega_{\star}+C^{(2)}_{m,p,s,k}\|\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k+1}\|^{2}_{H^{m}(\underline{\mathcal{N}}_{\varepsilon})}\right).\] The surfaces \(\mathcal{S}_{-1+t}\), \(t\in(0,\varepsilon)\), foliate \(\underline{\mathcal{N}}_{\varepsilon}\), so this estimate is of the form \[\varepsilon f_{k}^{\prime}(\varepsilon)\leqslant C\left(\Omega_{\star}+C^{(2)}_{m,p,s,k}f_{k}(\varepsilon)\right),\] where \(f_{k}(\varepsilon)\equiv\|\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k+1}\|^{2}_{H^{m}(\underline{\mathcal{N}}_{\varepsilon})}\). This may be rewritten as \[\frac{\mathrm{d}}{\mathrm{d}\varepsilon}\left(\varepsilon^{-CC^{(2)}_{m,p,s,k}}f_{k}(\varepsilon)\right)\leqslant C\Omega_{\star}\varepsilon^{-CC^{(2)}_{m,p,s,k}-1}, \tag{20}\] which is integrable near \(\varepsilon=0\) if \(C^{(2)}_{m,p,s,k}<0\). This is satisfied for \(k=2s-1\) provided \[m+p<s+q. \tag{21}\] Assuming (21) holds and integrating (20), we arrive at \[f_{2s-1}(\varepsilon)\leqslant\frac{\Omega_{\star}}{-C^{(2)}_{m,p,s,2s-1}}\lesssim\Omega_{\star}.\] This is the required estimate. ### The bootstrap bound To prove the bootstrap bound (17), we return to the identity (19). _Proposition 5.2_.: Let \(\varepsilon>0\). Suppose that for \(m\), \(q\), \(p\in\mathbb{N}\) and all \((q^{\prime},p^{\prime},\alpha)\in\mathbb{N}\times\mathbb{N}\times\mathbb{N}^{3}\) satisfying \[q^{\prime}+p^{\prime}+|\alpha|\leqslant m\quad\text{and}\quad m+p<s+q\] we have the bound (16) on characteristic data for \(0\leqslant k\leqslant 2s\). Then there exists a constant \(C>0\) depending on \(m\), \(p\), \(s\) and \(\varepsilon\) such that \[\|\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k}\|^{2}_{H^{m}(\underline{\mathcal{N}}_{\varepsilon})}\leqslant C\Omega_{\star}\] for \(0\leqslant k\leqslant 2s-1\), where \(\underline{\mathcal{N}}_{\varepsilon}\) is as in Proposition 5.1. In other words, the bootstrap estimate (17) holds. Proof.: Performing the shifts \(q\to q+q^{\prime}\) and \(p\to p+p^{\prime}\) as in Proposition 5.1 and summing the identity (19) over \(\mathcal{U}\), we deduce (this time dropping the manifestly positive \(\varepsilon|D^{\prime}(\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k+1})|^{2}\) term) that \[\sum_{\mathcal{U}}\int_{\mathcal{S}_{-1+\varepsilon}}|D^{\prime}(\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k})|^{2}\;\mathrm{d}\rho\wedge\mathrm{d}\mu\lesssim\Omega_{\star}+C^{(1)}_{m,p,s,k}\|\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k}\|^{2}_{H^{m}(\underline{\mathcal{N}}_{\varepsilon})}+C^{(2)}_{m,p,s,k}\|\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k+1}\|^{2}_{H^{m}(\underline{\mathcal{N}}_{\varepsilon})}, \tag{22}\] where as before \[C^{(2)}_{m,p,s,k}=2(m-q+p+s-1-k)\] and \(C^{(1)}_{m,p,s,k}=2(m+q-p+s-k)\). We begin with \(k=2s-1\).
Then \(C^{(2)}_{m,p,s,2s-1}<0\) is guaranteed by the assumption that \(m+p<s+q\), and the estimate (22) reads \[g_{2s-1}(\varepsilon)\lesssim\Omega_{\star}+C^{(1)}_{m,p,s,2s-1}\int_{0}^{ \varepsilon}g_{2s-1}(t)\,\mathrm{d}t,\] where \[g_{k}(t)\equiv\sum_{\mathcal{U}}\int_{\mathcal{S}_{-1+t}}|D^{\prime}(\partial_ {\tau}^{q}\partial_{\rho}^{p}\phi_{k})|^{2}\;\mathrm{d}\rho\wedge\mathrm{d}\mu.\] Using Gronwall's lemma, we find \[\|\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{2s-1}\|^{2}_{H^{m}(\underline{ \mathcal{N}}_{\varepsilon})}=\int_{0}^{\varepsilon}g_{2s-1}(t)\,\mathrm{d}t \lesssim\Omega_{\star}e^{C_{m,p,s}\varepsilon}\lesssim_{m,p,s,\varepsilon} \Omega_{\star}.\] We can now return to (22) for \(k=2s-2\). We have, using the above estimate for \(\phi_{2s-1}\), \[g_{2s-2}(\varepsilon)\lesssim\Omega_{\star}+C^{(1)}_{m,p,s,2s-2}\int_{0}^{ \varepsilon}g_{2s-2}(t)\,\mathrm{d}t,\] and so deduce that \[\|\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{2s-2}\|^{2}_{H^{m}(\underline{ \mathcal{N}}_{\varepsilon})}\lesssim\Omega_{\star}.\] Proceeding inductively for \(0\leqslant k\leqslant 2s-3\), we conclude that there exists a constant \(C=C(m,p,s,\varepsilon)>0\) such that \[\|\partial_{\tau}^{q}\partial_{\rho}^{p}\phi_{k}\|^{2}_{H^{m}(\underline{ \mathcal{N}}_{\varepsilon})}\leqslant C\Omega_{\star}\] for \(0\leqslant k\leqslant 2s-1\). ### Asymptotic expansions near \(\mathscr{I}^{-}\) The estimates obtained in Section 5 may now be used to control the solutions to the spin-\(s\) field equations in a narrow causal diamond extending along \(\mathscr{I}^{-}\) and containing a portion of \(\mathcal{I}\). The strategy is to decompose the solution into a formal expansion around \(\mathscr{I}^{-}\), the regularity of the terms of which will be known explicitly, and a remainder whose regularity is controlled by the estimates on \(\partial_{\tau}^{q}\phi_{k}\). It is clear from the condition \(m+p<s+q\) in Proposition 5.1 that requiring \(\rho\)-derivatives near \(\mathscr{I}^{-}\) hinders the regularity sought on \(\mathcal{S}_{-1+\varepsilon}\). Therefore setting \(p=0\) (and writing \(m\to m+3\) for convenience) in Proposition 5.1, we have \(\forall 0\leqslant k\leqslant 2s\) \[\partial_{\tau}^{q}\phi_{k}\in H^{m+3}(\underline{\mathcal{N}}_{\varepsilon}) \quad\text{if}\quad m+3<s+q. \tag{23}\] Sobolev embedding therefore gives \[\partial_{\tau}^{q}\phi_{k}\in C^{m,\alpha}(\underline{\mathcal{N}}_{ \varepsilon})\quad\text{for}\quad 0<\alpha\leqslant\frac{1}{2},\quad\ m+3<s+q.\] Integrating this \(q\) times with respect to \(\tau\), one finds \[\phi_{k}=\sum_{q^{\prime}=0}^{q-1}\frac{1}{q^{\prime}!}(\partial_{\tau}^{q^{ \prime}}\phi_{k})|_{\mathscr{I}^{-}}(\tau+1)^{q^{\prime}}+I^{q}(\partial_{\tau }^{q}\phi_{k}), \tag{24}\] where \(I^{q}(\partial_{\tau}^{q}\phi_{k})\in C^{m,\alpha}(\underline{\mathcal{N}}_{ \varepsilon})\) and \(I\) denotes the operator \[f\mapsto I(f)=\int_{-1}^{\tau}f(\tau^{\prime},\rho,t^{\boldsymbol{A}}_{ \boldsymbol{B}})\,\mathrm{d}\tau^{\prime}.\] _Remark 8_.: Each term (except for the remainder) in the expansion (24) can be computed explicitly from the characteristic initial data \(\phi_{k}|_{\mathscr{I}^{-}}\) on \(\mathscr{I}^{-}\): see Appendix D. In particular, the regularity of these terms is known explicitly. In fact, we will pose regular (in particular possessing no logarithmic singularities) characteristic initial data on \(\mathscr{I}^{-}\), so the corresponding expansion (24) will be regular. 
Nevertheless, we will show that logarithmic singularities develop near \(\mathscr{I}^{+}\). ### Last slice argument As in the case of the upper domain \(\mathcal{N}_{1}\), the existence of solutions in the lower domain \(\underline{\mathcal{N}}_{\varepsilon}\) can be shown by a last slice argument. We provide a sketch of the argument below. As for the resolution of the asymptotic characteristic initial value problem in Section 4, here it is more convenient to consider the massless spin-\(s\) equations in the more general gauge discussed in Appendix B.4--rather than in the horizontal representation of Section 2.2--as in the latter the equations are singular on the whole of \(\mathscr{I}^{-}\), not only at the critical set \(\mathcal{I}^{-}\). In the more general gauge we have \(x^{0}=\hat{\tau}\kappa\), where \(\kappa=\rho\mu\) and \(\mu\not\equiv 1\), and the hypersurfaces of constant \(\hat{\tau}\) give rise to a foliation of the lower domain \(\underline{\mathcal{N}}_{\varepsilon}\), and indeed of the region to the past of \(\underline{\mathcal{B}}_{\varepsilon}\). Since \(\mathscr{I}^{-}\) is now given by \(\hat{\tau}=-\mu^{-1}\) and \(\mu(0)=1\), the hypersurfaces \(\hat{\mathcal{S}}_{\hat{\tau}}\) for \(\hat{\tau}<-1\) terminate at \(\mathscr{I}^{-}\), whereas the ones for \(-1<\hat{\tau}<1\) terminate at \(\mathcal{I}\). In particular, the hypersurface \(\hat{\mathcal{S}}_{-1}\) terminates at \(\mathcal{I}^{-}\)--see Figure 8. Moreover, there exists a hypersurface \(\hat{\mathcal{S}}_{\hat{\tau}_{\rho_{\star}}}\) which lies entirely in the past of \(\underline{\mathcal{B}}_{\varepsilon}\), and which intersects \(\mathscr{I}^{-}\) at \(\underline{\mathcal{B}}_{\varepsilon}\cap\mathscr{I}^{-}_{\rho_{\star}}\). We concentrate on the region \(\underline{\mathcal{N}}_{\varepsilon}\), i.e. in what follows we refer to the parts of the hypersurfaces \(\hat{\mathcal{S}}\) in the future of \(\underline{\mathcal{B}}_{\varepsilon}\). As we are looking for solutions of the form (24) and the asymptotic expansion is formally known, we only need to ensure the existence of the remainder. We therefore consider an evolution system for the fields \(\partial_{\tau}^{q}\phi_{k}\) with \(m+3<s+q\). This system can be obtained by a repeated application of the derivative \(\partial_{\tau}\) to the equations (39). The resulting system has the same structure as the original one, so it is a symmetric hyperbolic system for the \(\partial_{\tau}^{q}\phi_{k}\). Now, for the asymptotic characteristic initial value problem for the fields \(\partial_{\tau}^{q}\phi_{k}\), Rendall's theorem [11] ensures the existence of a solution in a neighbourhood \(\mathcal{V}\subset\underline{\mathcal{N}}_{\varepsilon}\subset J^{+}(\underline{\mathcal{B}}_{\varepsilon}\cap\mathscr{I}^{-})\) of the initial cut, region \((a)\) in Figure 8. Next, there exists a value \(\hat{\tau}_{\delta}<-1\) of the parameter \(\hat{\tau}\) such that all leaves \(\hat{\mathcal{S}}_{\hat{\tau}}\) with \(\hat{\tau}_{\rho_{\star}}<\hat{\tau}<\hat{\tau}_{\delta}\) are contained entirely in the region \(\mathcal{V}\). On one of these hypersurfaces, say \(\hat{\mathcal{S}}_{\hat{\tau}_{\circ}}\) with \(\hat{\tau}_{\rho_{\star}}<\hat{\tau}_{\circ}<\hat{\tau}_{\delta}\), one can formulate a standard initial value problem for the symmetric hyperbolic system (14) implied by the spin-\(s\) field equations. With data posed on \(\hat{\mathcal{S}}_{\hat{\tau}_{\circ}}\), the standard theory of symmetric hyperbolic systems ensures the existence of a solution in \(D^{+}(\hat{\mathcal{S}}_{\hat{\tau}_{\circ}})\), region \((b)\) in Figure 8.
The solution in region \((b)\) possesses a Cauchy horizon \(\mathcal{H}^{+}(\hat{\mathcal{S}}_{\hat{\tau}_{\circ}})=\partial D^{+}(\hat{\mathcal{S}}_{\hat{\tau}_{\circ}})\), depicted by the hypersurfaces separating the region \((b)\) from regions \((b^{\prime})\) and \((b^{\prime\prime})\). We extend the solution to \((b^{\prime})\) and \((b^{\prime\prime})\) by posing two further characteristic initial value problems: (i) one with data on \(\mathscr{I}^{-}\) and the Cauchy horizon of \(\hat{\mathcal{S}}_{\hat{\tau}_{\circ}}\), and (ii) a second one with data on \(\underline{\mathcal{B}}_{\varepsilon}\) and the Cauchy horizon; the characteristic data on the Cauchy horizon is induced by the solution in the region \((b)\). For these characteristic initial value problems we again use Rendall's theorem to obtain a solution in a subset of the causal future of \(\mathscr{I}^{-}\cap\mathcal{H}^{+}(\hat{\mathcal{S}}_{\hat{\tau}_{\circ}})\) (region \((b^{\prime})\)) and of \(\underline{\mathcal{B}}_{\varepsilon}\cap\mathcal{H}^{+}(\hat{\mathcal{S}}_{\hat{\tau}_{\circ}})\) (region \((b^{\prime\prime})\)), respectively. In this way we obtain the solution in a subdomain of \(\underline{\mathcal{N}}_{\varepsilon}\) which contains a hypersurface \(\hat{\mathcal{S}}_{\hat{\tau}}\) with \(\hat{\tau}>\hat{\tau}_{\circ}\). This construction can then be repeated to obtain a larger existence domain for the solution. Now assume that there exists \(\hat{\tau}_{\star}\in(\hat{\tau}_{\rho_{\star}},-1)\) such that on the hypersurface \(\hat{\mathcal{S}}_{\hat{\tau}_{\star}}\) the construction for the enlargement of the domain of existence as described in the previous paragraph fails. It is straightforward to show (as in the case of the upper existence domain) that this is then in contradiction with the estimates in Proposition 5.1 and in Proposition 5.2. In this way one ensures the existence of the solution up to the hypersurface \(\hat{\mathcal{S}}_{-1}\).

Figure 8: Schematic depiction of the region \(\mathscr{D}=\underline{\mathcal{N}}_{\varepsilon}\cup\mathcal{N}_{1}\), now in the \((\hat{\tau},\rho)\) coordinates. The last slice argument from \(\mathscr{I}^{-}\) is best carried out in a representation of spatial infinity where past and future null infinities are not horizontal. In region \((a)\), Rendall’s theorem ensures the existence of a solution in a neighbourhood of the initial cut \(\mathscr{I}^{-}\cap\underline{\mathcal{B}}_{\varepsilon}\). This neighbourhood contains a spacelike hypersurface of constant \(\hat{\tau}\). Starting from this hypersurface, in region \((b)\) we can use the standard local existence result for symmetric hyperbolic systems to extend the solution. As this extension of the solution possesses a Cauchy horizon in \(\underline{\mathcal{N}}_{\varepsilon}\), we solve supplementary characteristic initial value problems in regions \((b^{\prime})\) and \((b^{\prime\prime})\) starting from data on \(\mathscr{I}^{-}\) and \(\underline{\mathcal{B}}_{\varepsilon}\), and the induced data on the Cauchy horizon. The estimates in \(\underline{\mathcal{N}}_{\varepsilon}\) then ensure that the solution can be extended up to \(\hat{\mathcal{S}}_{-1+\varepsilon}\). Once we are in the upper domain \(\mathcal{N}_{1}\), we proceed in a similar fashion. In region \((c)\) the assumption of the existence of a _last slice_ is in contradiction with the estimates in terms of the data on \(\hat{\mathcal{S}}_{-1+\varepsilon}\).
Once one has reached the hypersurface \(\hat{\mathcal{S}}_{-1}\), the domain extension procedure simplifies, as now one only needs one supplementary characteristic initial value problem to complete the new slab--the one with data on the intersection of the Cauchy horizon and \(\underline{\mathcal{B}}_{\varepsilon}\). Again, assuming the existence of a hypersurface \(\hat{\mathcal{S}}_{\hat{\tau}_{*}}\) with \(\hat{\tau}_{*}>-1\) beyond which it is no longer possible to formulate an initial value problem and extend the solution (region \((c)\) in Figure 8) leads to a contradiction with the estimates in Proposition 5.1 and Proposition 5.2.

## 6 Controlling the solutions from \(\mathscr{I}^{-}\) to \(\mathscr{I}^{+}\)

We have obtained two sets of estimates:

* (i) estimates which control the solution on a spacelike hypersurface \(\mathcal{S}_{-1+\varepsilon}\) in terms of initial data on \(\mathscr{I}^{-}\) (Propositions 5.1 and 5.2), and
* (ii) estimates which control the solution up to \(\mathscr{I}^{+}\) in terms of initial data on a spacelike hypersurface, say \(\mathcal{S}_{-1+\varepsilon}\) (Proposition 3.1).

Each of the estimates (i) and (ii) allows the construction of a particular type of asymptotic expansion with its own requirements on the freely specifiable data and associated implications for the regularity of the solutions. In this section we show how these two sets of estimates can be stitched together to control the solutions to the massless spin-\(s\) field equation in a neighbourhood of spatial infinity which contains parts of both \(\mathscr{I}^{-}\) and \(\mathscr{I}^{+}\) (Figure 2).

_Remark 9_.: We refer to the domain in which estimates (i) hold as the lower domain, and the domain in which estimates (ii) hold as the upper domain. The parameters \(m,\ p,\ q\) in the lower domain will be denoted by \(m_{-},\ p_{-},\ q_{-}\), while those in the upper domain will be denoted by \(m_{+},\ p_{+},\ q_{+}\).
### From \(\mathscr{I}^{-}\) to \(\mathcal{S}_{-1+\varepsilon}\)

Proposition 5.1 controls the regularity of the solution \(\phi_{k}\) near \(\mathscr{I}^{-}\) in the sense that (setting \(p_{-}=0\), as remarked in Section 5.3) if the data on \(\mathscr{I}^{-}_{\rho_{*}}\cup\underline{\mathcal{B}}_{\varepsilon}\) is such that \[(\partial_{\tau}^{q_{-}}\phi,\,\partial_{\tau}^{q_{-}+1}\phi,\,\dots,\,\partial_{\tau}^{q_{-}+m_{-}+3}\phi)\in H^{m_{-}+3}\times H^{m_{-}+2}\times\dots\times L^{2}\] for \[m_{-}+3<s+q_{-},\] then the solution is \(\partial_{\tau}^{q_{-}}\phi\in H^{m_{-}+3}(\underline{\mathcal{N}}_{\varepsilon})\), and in particular for \(0<\alpha\leqslant\frac{1}{2}\) \[\phi_{k}=\sum_{q^{\prime}=0}^{q_{-}-1}\frac{1}{q^{\prime}!}(\partial_{\tau}^{q^{\prime}}\phi_{k})|_{\mathscr{I}^{-}}(\tau+1)^{q^{\prime}}+C^{m_{-},\alpha}(\underline{\mathcal{N}}_{\varepsilon}).\]

### From \(\mathcal{S}_{-1+\varepsilon}\) to \(\mathscr{I}^{+}\)

On the other hand, Proposition 3.1 controls the regularity of the solution near \(\mathscr{I}^{+}\), in the sense that if the data on the Cauchy surface \(\mathcal{S}_{-1+\varepsilon}\) is such that \[(\partial_{\rho}^{p_{+}}\phi,\,\partial_{\rho}^{p_{+}+1}\phi,\,\dots,\,\partial_{\rho}^{p_{+}+m_{+}+3}\phi)\in H^{m_{+}+3}\times H^{m_{+}+2}\times\dots\times L^{2}\] for \[p_{+}\geqslant m_{+}+s+4,\] then the solution is \(\partial_{\rho}^{p_{+}}\phi\in H^{m_{+}+3}(\mathcal{N}_{1})\), and in particular \[\phi_{k}=\sum_{p^{\prime}=0}^{p_{+}-1}\frac{1}{p^{\prime}!}\phi_{k}^{(p^{\prime})}\rho^{p^{\prime}}+C^{m_{+},\alpha}(\mathcal{N}_{1}).\]

### Prescribing the regularity at \(\mathscr{I}^{-}\)

For a desired regularity \(m_{+}\) of the remainder at \(\mathscr{I}^{+}\), we may therefore put \(p_{+}=m_{+}+\lceil s\rceil+4\) for the minimum number of \(\rho\)-derivatives on \(\mathcal{S}_{-1+\varepsilon}\) required by the estimate of Proposition 3.1. In order to match the estimates in the two domains, in the lower domain we therefore require the remainder to be at least \(C^{p_{+}}\), i.e. \(m_{-}=p_{+}\). The condition \(m_{-}+3<s+q_{-}\) of Proposition 5.1 with \(m_{-}=m_{+}+\lceil s\rceil+4\) then gives \(q_{-}>m_{+}+7+(\lceil s\rceil-s)\), so that for both integer and half-integer spin \[q_{-}\geqslant m_{+}+8. \tag{25}\] Note that while in (25) the value \(s\) of the spin cancels out, it still plays a role in the length of the expansions of \(\phi_{k}\) via \(m_{-}=p_{+}=m_{+}+\lceil s\rceil+4\). As \(s\) increases, so does the length of the expansion before the remainder can be guaranteed to be \(C^{m_{+}}\) near \(\mathscr{I}^{+}\).

### Main result

We summarise the analysis of this article in the following theorem.

_Theorem 6.1_.: Let real numbers \(\rho_{\star}>0\), \(\varepsilon>0\) and positive integers \(m,\;q\) such that \(q\geqslant m+8\) be given, and suppose that we have asymptotic characteristic initial data for the massless spin-\(s\) equations on \(\underline{\mathcal{B}}_{\varepsilon}\cup\mathscr{I}_{\rho_{\star}}^{-}\) such that \[(\partial_{\tau}^{q}\phi,\,\partial_{\tau}^{q+1}\phi,\,\ldots,\,\partial_{\tau}^{q+m+3}\phi)\in H^{m+3}\times H^{m+2}\times\cdots\times L^{2}.\] Then in the domain \(\mathscr{D}=\underline{\mathcal{N}}_{\varepsilon}\cup\mathcal{N}_{1}\) this data gives rise to a unique solution to the massless spin-\(s\) equations (3) which near \(\mathscr{I}^{+}\) admits the expansion \[\phi_{k}=\sum_{p^{\prime}=0}^{p-1}\frac{1}{p^{\prime}!}\phi_{k}^{(p^{\prime})}\rho^{p^{\prime}}+C^{m,\alpha}(\mathcal{N}_{1}),\] where \(p=m+\lceil s\rceil+4\) and \(0<\alpha\leqslant\frac{1}{2}\).
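The bookkeeping behind (25) and Theorem 6.1 is elementary but easy to get wrong; the following short script is our own illustration of it (not part of the argument): the function name and the use of an integer ceiling for \(\lceil s\rceil\) are ours, and only the inequalities quoted above are encoded.

```python
from math import ceil

def lower_domain_requirements(m_plus, s):
    """Sketch of the regularity bookkeeping of Section 6.3.

    Given the desired regularity m_plus of the remainder at future null
    infinity and the spin s, return the minimal number p_plus of
    rho-derivatives needed on S_{-1+eps} (Proposition 3.1), the matching
    regularity m_minus required of the lower-domain remainder, and the
    minimal number q_minus of tau-derivatives to be prescribed at past
    null infinity (Proposition 5.1: m_minus + 3 < s + q_minus).
    """
    p_plus = m_plus + ceil(s) + 4          # minimal p_+ for Proposition 3.1
    m_minus = p_plus                        # remainder at least C^{p_+} below
    # smallest integer q_minus with m_minus + 3 < s + q_minus (strict inequality)
    threshold = m_minus + 3 - s
    q_minus = int(threshold) + 1 if threshold == int(threshold) else ceil(threshold)
    return p_plus, m_minus, q_minus

# For integer and half-integer spins alike, the bound reduces to q_- >= m_+ + 8:
for s in [0.5, 1, 1.5, 2, 2.5]:
    for m_plus in range(4):
        assert lower_domain_requirements(m_plus, s)[2] == m_plus + 8
```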
The coefficients \(\phi_{k}^{(p^{\prime})}\), \(0\leqslant p^{\prime}<p-1\), can be computed explicitly in terms of Jacobi polynomials in \(\tau\) and spin-weighted spherical harmonics, and their regularity depends on the multipolar structure of the characteristic data at \(\mathcal{I}^{-}\). In particular, the coefficients \(\phi_{k}^{(p^{\prime})}\) are generically polyhomogeneous at \(\mathscr{I}^{+}\). ## 7 Physical leading order terms To illustrate the implications of our estimates in physical space, we recall that the rescaled spin-\(s\) field \(\phi_{A_{1}\ldots A_{2s}}\) is related to the physical spin-\(s\) field \(\tilde{\phi}_{A_{1}\ldots A_{2s}}\) by \(\phi_{A_{1}\ldots A_{2s}}=\Theta^{-1}\tilde{\phi}_{A_{1}\ldots A_{2s}}\), where \(\eta_{\mu\nu}=\Theta^{2}\tilde{\eta}_{\mu\nu}\) and the spin basis \(\{o_{A},\iota_{A}\}\) scales as \[o_{A}=\Theta^{1/2}\tilde{o}_{A},\qquad\iota_{A}=\Theta^{1/2}\tilde{\iota}_{A}.\] This implies that the components \(\phi_{k}\) scale as \(\phi_{k}=\Theta^{-1-s}\tilde{\phi}_{k}\). In the physical coordinates \((t,r)\) the conformal factor \(\Theta\) is given by \[\Theta=\rho(1-\tau^{2})=\frac{1}{r}.\] For an example, set \(m_{+}=0\), \(m_{-}=p_{+}=s+4\), and \(q_{-}=8\), \(s\in\mathbb{N}\), i.e. assume that the data on \(\mathscr{I}_{\rho_{*}}^{-}\cup\underline{\mathcal{B}}_{\varepsilon}\) is such that for all \(0\leqslant k\leqslant 2s\) \[(\phi_{k},\,\partial_{\tau}\phi_{k},\,\ldots,\,\partial_{\tau}^{15+s}\phi_{k}) \in H^{15+s}\times H^{14+s}\times\cdots\times L^{2}.\] Then the most singular term near \(\mathscr{I}^{+}\), \(\phi_{2s}\), has the expansion \[\phi_{2s}=\omega_{1}\rho^{s}\log(1-\tau)+\omega_{2}\rho^{s+1}(1-\tau)\log(1- \tau)+C^{0,\alpha}(\mathcal{N}_{1})\] for some functions \(\omega_{1,2}\) which are smooth on \(\mathscr{I}^{+}\cup\mathcal{I}^{+}\). This gives, in terms of the physical field, \[\tilde{\phi}_{2s}=\tilde{\omega}_{1}\rho^{2s+1}(1-\tau)^{s+1}\log(1-\tau)+ \tilde{\omega}_{2}\rho^{2s+2}(1-\tau)^{s+2}\log(1-\tau)+\rho^{s+1}(1-\tau)^{s+ 1}C^{0,\alpha}(\mathcal{N}_{1}),\] or, along lines of constant \(u=t-r\), \[\tilde{\phi}_{2s}\sim\tilde{\omega}_{1}^{\prime}r^{-(s+1)}\log\frac{|u|}{r}+ \tilde{\omega}_{2}^{\prime}r^{-(s+2)}\log\frac{|u|}{r}+r^{-(s+1)}C^{0,\alpha}( \mathcal{N}_{1})\] as \(r\to\infty\). The functions \(\tilde{\omega}_{1,2}^{\prime}\) generically do not vanish if the characteristic initial data for \(\phi\) does not vanish in a neighbourhood of \(\mathcal{I}^{-}\). ## 8 Concluding remarks In this paper we have constructed estimates and asymptotic expansions for massless spin-\(s\) fields which control the behaviour of the solutions at future null infinity \(\mathscr{I}^{+}\) in terms of data prescribed at past null infinity \(\mathscr{I}^{-}\). We exploit the structural properties of the equations at the cylinder at spatial infinity \(\mathcal{I}\) to construct asymptotic expansions in a neighbourhood of \(\mathcal{I}\) (an observation that had already been made by Friedrich e.g. in [11]), and prove, using estimates such as the ones in [11], that these expansions capture the leading order behaviour of the true solutions. The main contribution of this paper is the observation that, by exploiting the lower order structure of the equations, similar estimates can also be constructed near \(\mathscr{I}^{-}\), and that these estimates are compatible with the ones near \(\mathscr{I}^{+}\). 
It should be stressed that our assumptions on the asymptotic characteristic initial data have been made for ease of presentation. Our theory can, however, be applied to a more general setting, e.g. for polyhomogeneous data at \(\mathscr{I}^{-}\). A discussion of a range of examples will be given in a subsequent paper. A natural next application of the results of this article could be to the construction of a conformal scattering theory for massless fields in which the behaviour of the fields at spatial infinity is nontrivial and sharply controlled. This will also be discussed elsewhere. We believe that the theory constructed in this article also sheds light on the Einstein field equations. Nevertheless, a generalisation of these results to the (conformal) Einstein field equations is a far more substantial and challenging task. For this, one would first need to find a way of generalising the estimates in [11] to the nonlinear setting. Moreover, the computational complexity of the Einstein field equations in this setting may pose a (technical) obstruction. That analysis may require the use of computer algebra as in [12].

## Appendix A Geometry of \(\mathrm{SU}(2)\)

In this appendix we recall standard results about the geometry and representation theory of the Lie group \(\mathrm{SU}(2)\).

### Basic properties

We make use of spinorial notation to denote elements in \(\operatorname{GL}(2,\mathbb{C})\), so that \(t^{\boldsymbol{A}}_{\boldsymbol{B}}\), \(\boldsymbol{A},\boldsymbol{B}\in\{0,1\}\), denotes an invertible \(2\times 2\) matrix. The subgroups \(\operatorname{SL}(2,\mathbb{C})\) and \(\operatorname{SU}(2,\mathbb{C})\) may then be defined by \[\operatorname{SL}(2,\mathbb{C}) =\left\{t^{\boldsymbol{A}}_{\boldsymbol{B}}\in\operatorname{GL}(2,\mathbb{C})\mid\epsilon_{\boldsymbol{A}\boldsymbol{B}}t^{\boldsymbol{A}}_{\boldsymbol{C}}t^{\boldsymbol{B}}_{\boldsymbol{D}}=\epsilon_{\boldsymbol{C}\boldsymbol{D}}\right\},\] \[\operatorname{SU}(2,\mathbb{C}) =\left\{t^{\boldsymbol{A}}_{\boldsymbol{B}}\in\operatorname{SL}(2,\mathbb{C})\mid\tau_{\boldsymbol{A}\boldsymbol{A}^{\prime}}t^{\boldsymbol{A}}_{\boldsymbol{B}}\bar{t}^{\boldsymbol{A}^{\prime}}_{\boldsymbol{B}^{\prime}}=\tau_{\boldsymbol{B}\boldsymbol{B}^{\prime}}\right\},\] where \(\tau_{\boldsymbol{A}\boldsymbol{A}^{\prime}}\) in our frame is simply the \(2\times 2\) identity matrix. It is classical that the group \(\operatorname{SU}(2)\) is diffeomorphic to the 3-sphere \(\mathbb{S}^{3}\), and its elements \(t^{\boldsymbol{A}}_{\boldsymbol{B}}\) may be written in the explicit form \[t^{\boldsymbol{A}}_{\boldsymbol{B}}=\frac{1}{\sqrt{1+|\zeta|^{2}}}\left(\begin{array}{cc}e^{i\alpha}&ie^{-i\alpha}\zeta\\ ie^{i\alpha}\bar{\zeta}&e^{-i\alpha}\end{array}\right),\] where \(\zeta=x+iy\in\mathbb{C}\) and \(\alpha\in\mathbb{R}\) coordinatize \(\operatorname{SU}(2)\simeq\mathbb{S}^{3}\). This representation makes manifest the fact that there exist \(\operatorname{U}(1)\) orbits in \(\operatorname{SU}(2)\) generated by \(\alpha\), with the coordinates \((x,y)\) constant along these orbits. Quotienting out the \(\operatorname{U}(1)\) subgroup returns the 2-sphere of the rescaled spacetime, \(\operatorname{SU}(2)/\operatorname{U}(1)\simeq\mathbb{S}^{2}\), as described in Section 2.3.
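As a quick illustration (ours, not part of the original text), one can verify symbolically that the explicit matrix above has unit determinant and is unitary, and hence lies in \(\mathrm{SU}(2)\); the following SymPy fragment performs this check.

```python
import sympy as sp

x, y, alpha = sp.symbols('x y alpha', real=True)
zeta = x + sp.I*y
norm = sp.sqrt(1 + x**2 + y**2)   # sqrt(1 + |zeta|^2) written out explicitly

t = sp.Matrix([[sp.exp(sp.I*alpha),                         sp.I*sp.exp(-sp.I*alpha)*zeta],
               [sp.I*sp.exp(sp.I*alpha)*sp.conjugate(zeta), sp.exp(-sp.I*alpha)]]) / norm

assert sp.simplify(t.det() - 1) == 0                        # unit determinant: t in SL(2, C)
assert sp.simplify(t.H*t - sp.eye(2)) == sp.zeros(2, 2)     # unitarity: t^dagger t = 1
```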
### The vector fields \(\boldsymbol{X}_{\pm}\) and \(\boldsymbol{X}\) Consider the basis \[\boldsymbol{u}_{1}=\frac{1}{2}\left(\begin{array}{cc}0&i\\ i&0\end{array}\right),\qquad\boldsymbol{u}_{2}=\frac{1}{2}\left(\begin{array}[] {cc}0&-1\\ 1&0\end{array}\right),\qquad\boldsymbol{u}_{3}=\frac{1}{2}\left(\begin{array}[] {cc}i&0\\ 0&-i\end{array}\right),\] of the real Lie algebra \(\mathfrak{su}(2)\) of \(\operatorname{SU}(2)\), where \(\boldsymbol{u}_{3}\) is the generator of the \(\operatorname{U}(1)\) subgroup. These obey the commutation relations \[[\boldsymbol{u}_{i},\,\boldsymbol{u}_{j}]=\epsilon_{ijk}\boldsymbol{u}_{k}.\] Denote by \(\boldsymbol{Z}_{1}\), \(\boldsymbol{Z}_{2}\) and \(\boldsymbol{Z}_{3}\) the left-invariant vector fields on \(\operatorname{SU}(2)\) generated by \(\boldsymbol{u}_{1}\), \(\boldsymbol{u}_{2}\) and \(\boldsymbol{u}_{3}\), respectively, via the exponential map. These inherit the commutation relations \[[\boldsymbol{Z}_{i},\,\boldsymbol{Z}_{j}]=\epsilon_{ijk}\boldsymbol{Z}_{k}.\] We then pass to the complexified Lie algebra \(\mathfrak{su}(2)+i\,\mathfrak{su}(2)=\mathfrak{sl}(2;\mathbb{C})\) by setting \[\boldsymbol{X}_{+}\equiv-(\boldsymbol{Z}_{2}+\mathrm{i}\boldsymbol{Z}_{1}), \qquad\boldsymbol{X}_{-}\equiv-(\boldsymbol{Z}_{2}-\mathrm{i}\boldsymbol{Z}_ {1}),\quad\text{and}\quad\boldsymbol{X}\equiv-2\mathrm{i}\boldsymbol{Z}_{3}.\] The vector fields \(\boldsymbol{X}_{\pm}\) and \(\boldsymbol{X}\) then satisfy the commutation relations \[[\boldsymbol{X},\,\boldsymbol{X}_{+}]=2\boldsymbol{X}_{+},\qquad[\boldsymbol {X},\,\boldsymbol{X}_{-}]=-2\boldsymbol{X}_{-},\quad\text{and}\quad[\boldsymbol {X}_{+},\,\boldsymbol{X}_{-}]=-\boldsymbol{X}.\] The vector fields \(\boldsymbol{X}_{+}\) and \(\boldsymbol{X}_{-}\) are complex conjugates of each other in the sense that \[\overline{\boldsymbol{X}_{+}\phi}=\boldsymbol{X}_{-}\bar{\phi}\] for any sufficiently smooth function \(\phi:\operatorname{SU}(2)\to\mathbb{C}\). The Casimir element of \(\operatorname{SU}(2)\) is given by \[\boldsymbol{C}\equiv 4(\boldsymbol{Z}_{1}^{2}+\boldsymbol{Z}_{2}^{2}+\boldsymbol {Z}_{3}^{2})=2\{\boldsymbol{X}_{+},\,\boldsymbol{X}_{-}\}-\boldsymbol{X}^{2},\] where \(\{\boldsymbol{X}_{+},\,\boldsymbol{X}_{-}\}=\boldsymbol{X}_{+}\boldsymbol{X} _{-}+\boldsymbol{X}_{-}\boldsymbol{X}_{+}\) is the anticommutator of \(\boldsymbol{X}_{+}\) and \(\boldsymbol{X}_{-}\). The \(2\{\boldsymbol{X}_{+},\,\boldsymbol{X}_{-}\}=4(\boldsymbol{Z}_{1}^{2}+ \boldsymbol{Z}_{2}^{2}\) ) part of the Casimir corresponds to the Laplacian on the 2-sphere \(\mathbb{S}^{2}\) when acting on functions \(f\) independent of \(\alpha\), i.e. ones that satisfy \(\boldsymbol{X}f=0\). #### a.2.1 Coordinate expressions The vector fields \(\mathbf{\partial}_{x}\), \(\mathbf{\partial}_{y}\), \(\mathbf{\partial}_{\alpha}\) at the identity in \(\mathrm{SU}(2)\) coincide, respectively, with the generators \(\mathbf{u}_{1}\), \(\mathbf{u}_{2}\) and \(\mathbf{u}_{3}\) of \(\mathfrak{su}(2)\). 
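These identifications, together with the commutation relations listed above, can be checked directly at the matrix level: the left-invariant vector fields inherit the brackets of the corresponding matrices in \(\mathfrak{sl}(2,\mathbb{C})\). The following NumPy fragment is our own illustrative check, not part of the original argument.

```python
import numpy as np

# Generators of su(2) as given above.
u1 = 0.5 * np.array([[0, 1j], [1j, 0]])
u2 = 0.5 * np.array([[0, -1], [1, 0]])
u3 = 0.5 * np.array([[1j, 0], [0, -1j]])

def comm(a, b):
    """Matrix commutator [a, b]."""
    return a @ b - b @ a

# [u_i, u_j] = eps_ijk u_k
assert np.allclose(comm(u1, u2), u3)
assert np.allclose(comm(u2, u3), u1)
assert np.allclose(comm(u3, u1), u2)

# Complexified combinations mirroring X_+, X_- and X.
Xp = -(u2 + 1j * u1)
Xm = -(u2 - 1j * u1)
X  = -2j * u3

assert np.allclose(comm(X, Xp), 2 * Xp)
assert np.allclose(comm(X, Xm), -2 * Xm)
assert np.allclose(comm(Xp, Xm), -X)
```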
More generally, writing \[P\equiv\frac{1}{2}(1+|\zeta|^{2}),\] one may express the vector fields \(\mathbf{Z}_{i}\) in terms of \(\mathbf{\partial}_{x}\), \(\mathbf{\partial}_{y}\), \(\mathbf{\partial}_{\alpha}\) as \[\mathbf{Z}_{1} =(P\cos 2\alpha)\mathbf{\partial}_{x}+(P\sin 2\alpha)\mathbf{\partial}_{y} +\frac{1}{2}\big{(}x\sin 2\alpha-y\cos 2\alpha\big{)}\mathbf{\partial}_{\alpha},\] \[\mathbf{Z}_{2} =-(P\sin 2\alpha)\mathbf{\partial}_{x}+(P\cos 2\alpha)\mathbf{\partial}_{y} +\frac{1}{2}\big{(}y\sin 2\alpha+x\cos 2\alpha\big{)}\mathbf{\partial}_{\alpha},\] \[\mathbf{Z}_{3} =\frac{1}{2}\mathbf{\partial}_{\alpha}.\] Then \[\mathbf{X}_{+} =-2\mathrm{i}Pe^{2\alpha\mathrm{i}}\mathbf{\partial}_{\zeta}-\frac{ 1}{2}\bar{\zeta}e^{2\alpha\mathrm{i}}\mathbf{\partial}_{\alpha},\] \[\mathbf{X}_{-} =2\mathrm{i}Pe^{-2\alpha\mathrm{i}}\mathbf{\partial}_{\bar{\zeta}}- \frac{1}{2}\zeta e^{-2\alpha\mathrm{i}}\mathbf{\partial}_{\alpha},\] \[\mathbf{X} =-\mathrm{i}\mathbf{\partial}_{\alpha}.\] When acting on functions \(f\) such that \(\partial_{\alpha}f=0\), in these coordinates the Casimir \(\mathbf{C}\) has the form \[\mathbf{C}f=2\{\mathbf{X}_{+},\,\mathbf{X}_{-}\}f=4(1+|\zeta|^{2})^{2}\partial_{\zeta} \partial_{\bar{\zeta}}f,\] which is precisely the Laplacian on \(\mathbb{S}^{2}\) in stereographic coordinates \(\zeta=x+\mathrm{i}y\). #### a.2.2 A technical lemma For a given multi-index \(\alpha=(\alpha_{1},\alpha_{2},\alpha_{3})\) of non-negative integers \(\alpha_{1}\), \(\alpha_{2}\) and \(\alpha_{3}\), set \[\mathbf{Z}^{\alpha}\equiv\mathbf{Z}_{1}^{\alpha_{1}}\mathbf{Z}_{2}^{\alpha_{2}}\mathbf{Z}_{3} ^{\alpha_{3}}\] with the convention that \(\mathbf{Z}^{\alpha}=1\) if \(|\alpha|=\alpha_{1}+\alpha_{2}+\alpha_{3}=0\). We record the following technical lemma from [10], which will be useful to us when performing estimates in the main text. _Lemma A.1_.: For any smooth complex-valued functions \(\phi\), \(\psi\) on \(\mathrm{SU}(2)\), \(k\in\{1,2,3\}\) and \(m\in\mathbb{N}\), one has \[\sum_{|\alpha|=m}\left(\mathbf{Z}^{\alpha}\mathbf{Z}_{k}\phi\mathbf{Z}^{\alpha}\psi+\mathbf{Z} ^{\alpha}\phi\mathbf{Z}^{\alpha}\mathbf{Z}_{k}\psi\right)=\sum_{|\alpha|=m}\mathbf{Z}_{k} \left(\mathbf{Z}^{\alpha}\phi\mathbf{Z}^{\alpha}\psi\right).\] In particular, one has that \[\sum_{|\alpha|=m}\int_{\mathrm{SU}(2)}\left(\mathbf{Z}^{\alpha}\mathbf{X}_{\pm}\phi \mathbf{Z}^{\alpha}\psi+\mathbf{Z}^{\alpha}\phi\mathbf{Z}^{\alpha}\mathbf{X}_{\pm}\psi\right) \mathrm{d}\mu=0. \tag{26}\] _Remark 10_.: Lemma A.1 is an extension of the divergence theorem on \(\mathrm{SU}(2)\). Indeed, for \(m=0\) the statement of (26) reduces to \[\int_{\mathrm{SU}(2)}\mathbf{X}_{\pm}(\phi\psi)\,\mathrm{d}\mu=0.\] This, in turn, follows from the divergence theorem and the fact that left-invariant vector fields on unimodular Lie groups are divergence free. ### The functions \(T_{m}{}^{j}{}_{k}\) It will be useful to explicitly define the matrix elements of the unitary irreducible representations of \(\mathrm{SU}(2)\) in the conventions of [10]. The irreducible representations of \(\mathrm{SU}(2)\) are uniquely labelled by the natural numbers \(m\in\mathbb{N}\). For each \(m\in\mathbb{N}\) there exists a unique unitary irreducible representation \(T_{m}\), and the set of irreducible representations \(\{T_{m}\}_{m\in\mathbb{N}}\) contains all the unitary irreducible representations of \(\mathrm{SU}(2)\). 
The dimension of each \(T_{m}\) is \(m+1\), and each \(T_{m}\) is an eigenfunction of the Casimir operator \(\mathbf{C}\) on \(\mathrm{SU}(2)\) with eigenvalue \(-m(m+2)\). The matrix elements of these representations are given by the complex-valued functions \(T_{m}{}^{j}{}_{k}\) defined by \[\mathrm{SU}(2)\ni t^{\mathbf{A}}{}_{\mathbf{B}}\mapsto T_{m}{}^{j}{}_{k}(t^{\mathbf{A}}{}_ {\mathbf{B}})\equiv\binom{m}{j}^{1/2}\binom{m}{k}^{1/2}t^{(\mathbf{A}_{1}}{}_{(\mathbf{B}_ {1}}\cdots t^{\mathbf{A}_{m)}{}_{j}}{}_{\mathbf{B}_{m})_{k}}\] for \(j\), \(k=0\),..., \(m\), \(m=1\), \(2\), \(3,\ldots\), and \[T_{0}{}^{0}{}_{0}=1,\] where the notation \((\mathbf{A}_{1}\),..., \(\mathbf{A}_{m})_{j}\) indicates that the \(\mathbf{A}\) indices are symmetrized, and then \(j\) of them are set to \(1\) and the remaining \(m-j\) are set to \(0\). The functions are real-analytic and the associated representation \(T_{m}\) is then given by \[\mathrm{SU}(2)\ni t^{\mathbf{A}}{}_{\mathbf{B}}\mapsto T_{m}(t^{\mathbf{A}}{}_{\mathbf{B}})=T_ {m}{}^{j}{}_{k}(t^{\mathbf{A}}{}_{\mathbf{B}})\in\mathrm{SU}(m+1).\] By the Schur orthogonality relations and the Peter-Weyl Theorem, the functions \[\sqrt{m+1}\,T_{m}{}^{j}{}_{k}\] form a Hilbert basis for the Hilbert space \(L^{2}(\mathrm{SU}(2),\mu)\) where \(\mu\) denotes the normalised Haar measure on \(\mathrm{SU}(2)\). In particular, any complex analytic function \(\phi\) on \(\mathrm{SU}(2)\) admits the expansion \[\phi(t^{\mathbf{A}}{}_{\mathbf{B}})=\sum_{m=0}^{\infty}\sum_{j=0}^{m}\sum_{k=0}^{m} \phi_{m,k,j}T_{m}{}^{j}{}_{k},\] with complex coefficients \(\phi_{m,k,j}\) which decay rapidly as \(m\to\infty\). From the earlier definition, one can check that under complex conjugation we have \[\overline{T_{m}{}^{j}{}_{k}}=(-1)^{j+k}T_{m}{}^{m-j}{}_{m-k}.\] Moreover, one has for \(0\leqslant j\), \(k\leqslant m\), \(m=0\), \(1,2\), \(\ldots\), that \[\mathbf{X}T_{m}{}^{j}{}_{k}=(m-2k)T_{m}{}^{j}{}_{k},\] \[\mathbf{X}_{+}T_{m}{}^{j}{}_{k}=\beta_{m,k}T_{m}{}^{j}{}_{k-1},\] \[\mathbf{X}_{-}T_{m}{}^{j}{}_{k}=-\beta_{m,k+1}T_{m}{}^{j}{}_{k+1},\] where \[\beta_{m,k}\equiv\sqrt{k(m-k+1)}.\] In particular, the 2-sphere Laplacian acts on the representations \(T_{m}{}^{j}{}_{k}\) by \[\{\mathbf{X}_{+},\,\mathbf{X}_{-}\}T_{m}{}^{j}{}_{k}=(2k(k-m)-m)T_{m}{}^{j}{}_{k}.\] These formulae ensure that for a function \(\phi\) with spin weight \(\varsigma\in\frac{1}{2}\mathbb{N}\), i.e. satisfying \[\mathbf{X}\phi=2\varsigma\phi,\] the expansions above reduce to \[\phi=\sum_{m\leqslant|2\sigma|}\sum_{j=0}^{m}\phi_{m,j}{T_{m}}^{j}{}_{m/2-\sigma},\] where \(m\) takes even values if \(\sigma\) is an integer and odd values if \(\sigma\) is a half-integer [10]. In particular, we note that for a given \(k\) (\(0\leqslant k\leqslant 2s\)), the component \(\phi_{k}\) has spin weight \((k-s)\); that is, \(\boldsymbol{X}\phi_{k}=2(k-s)\phi_{k}\). _Remark 11_.: The matrix coefficients \({T_{m}}^{j}{}_{k}\) as defined above are related to the perhaps more widely used spin-weighted spherical harmonics \({}_{s}Y_{lm}\), as well as Wigner's \(D\)-matrices \(\mathcal{D}^{j}_{m^{\prime}m}\). One has \[{}_{s}Y_{lm}=(-1)^{s+m}\sqrt{\frac{2l+1}{4\pi}}{T_{2l}}^{l-m}{}_{l-s} \tag{27}\] for \(l\in\mathbb{N}\cup\{0\}\), and \(m\), \(s\in\{-l,-l+1,\,\ldots,\,l-1,l\}\)[10]. Further, \[\mathcal{D}^{j}_{m^{\prime}m}\propto{T_{2j}}^{j-m}{}_{j-m^{\prime}},\] where \(j\in\frac{1}{2}\mathbb{N}\) and \(m^{\prime}\), \(m\in\{-j,-j+1,\,\ldots,\,j-1,j\}\), [11]. 
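As a consistency check (ours) of the conventions just listed, the stated action of \(\{\boldsymbol{X}_{+},\,\boldsymbol{X}_{-}\}\) and the Casimir eigenvalue \(-m(m+2)\) can be recovered purely algebraically from the ladder coefficients \(\beta_{m,k}\); the following SymPy fragment verifies the two identities.

```python
import sympy as sp

m, k = sp.symbols('m k', integer=True, nonnegative=True)
beta2 = lambda m_, k_: k_ * (m_ - k_ + 1)      # beta_{m,k}^2

# From the ladder formulas: {X+, X-} T_m^j_k = -(beta_{m,k}^2 + beta_{m,k+1}^2) T_m^j_k,
# and X T_m^j_k = (m - 2k) T_m^j_k.
anticomm = -(beta2(m, k) + beta2(m, k + 1))
casimir = 2 * anticomm - (m - 2 * k)**2        # C = 2{X+,X-} - X^2

assert sp.expand(anticomm - (2 * k * (k - m) - m)) == 0   # matches the quoted eigenvalue
assert sp.expand(casimir + m * (m + 2)) == 0              # Casimir eigenvalue -m(m+2)
```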
## Appendix B Spin-\(s\) equations In this appendix we provide a derivation of the massless spin-\(s\) equations in the F-gauge. Let \[\phi_{A_{1}\ldots A_{2s}}=\phi_{(A_{1}\ldots A_{2s})}\] denote a totally symmetric spinor of valence \(2s\), with \(s\in\frac{1}{2}\mathbb{N}\). The massless spin-\(s\) equations for \(\phi\) then read \[\nabla^{Q}{}_{A^{\prime}}\phi_{QA_{1}\ldots A_{2s-1}}=0. \tag{28}\] ### Hyperbolic reduction The system (28) is not manifestly symmetric hyperbolic, but a _hyperbolic reduction_ may be obtained by making use of the _space-spinor_ formalism (see e.g. [12], SS4) as follows. Let \(\tau^{AA^{\prime}}\) denote the spinorial counterpart of the timelike vector field \(\tau^{a}\), with normalization \(\tau_{AA^{\prime}}\tau^{AA^{\prime}}=2\). We define a spin dyad \[\{\epsilon_{\boldsymbol{A}}{}^{A}\}=\{o^{A},\,\iota^{A}\}\] adapted to \(\tau^{AA^{\prime}}\) by requiring that \[\tau^{AA^{\prime}}=o^{A}\bar{o}^{A^{\prime}}+\iota^{A}\bar{\iota}^{A^{\prime}}.\] It then follows that \[\tau_{AA^{\prime}}\tau^{BA^{\prime}}=\delta_{A}{}^{B}.\] Defining \(\nabla_{AB}\equiv\tau_{B}{}^{A^{\prime}}\nabla_{AA^{\prime}}\), one has the decomposition \[\nabla_{AB}=\frac{1}{2}\epsilon_{AB}\mathcal{D}+\mathcal{D}_{AB},\] where \[\mathcal{D}\equiv\tau^{AA^{\prime}}\nabla_{AA^{\prime}}\quad\text{and}\quad \mathcal{D}_{AB}\equiv\tau_{(A}{}^{A^{\prime}}\nabla_{B)A^{\prime}}\] denote, respectively, the so-called _Fermi_ and _Sen_ derivative operators. The operators \(\mathcal{D}\) and \(\mathcal{D}_{AB}\) correspond, respectively, to temporal and spatial parts of \(\nabla_{AA^{\prime}}\). Using this decomposition in equation (28), one gets \[\mathcal{D}\phi_{A_{1}\ldots A_{2s}}-2\mathcal{D}^{Q}{}_{A_{1}}\phi_{A_{2} \ldots A_{2s}Q}=0.\] The above equation has two irreducible components: the totally symmetric part and the trace. That is, \[\mathcal{D}\phi_{A_{1}\ldots A_{2s}}-2\mathcal{D}^{Q}{}_{(A_{1}} \phi_{A_{2}\ldots A_{2s})Q}=0, \tag{29a}\] \[\mathcal{D}^{PQ}\phi_{PQA_{1}\ldots A_{2s-2}}=0. \tag{29b}\] Equations (29a) and (29b) will be referred to as the _evolution_ and _constraint_ equations, respectively. _Remark 12_.: It can be shown, through a standard propagation of constraints argument, that if (29b) is satisfied on some initial hypersurface \(\{\tau=0\}\), then it is also satisfied at later times whenever (29a) holds. _Remark 13_.: Note that the above decomposition into evolution and constraint equations fails when \(s=\frac{1}{2}\). In this case \(\phi_{A}\) has only two independent components and there are two corresponding equations, \[\mathcal{D}\phi_{A}-2\mathcal{D}^{Q}{}_{A}\phi_{Q}=0,\] so there are no constraints. ### Transport equations along null geodesics Given a spinor \(\mu_{AB\ldots C}\), we denote its components with respect to the spin dyad \(\{\epsilon_{\boldsymbol{A}}{}^{A}\}\) by \(\mu_{\boldsymbol{A}\boldsymbol{B}\ldots\boldsymbol{C}}\). 
In particular, then \[\nabla_{\boldsymbol{A}\boldsymbol{A}^{\prime}}\mu_{\boldsymbol{B} \ldots\boldsymbol{C}} =\epsilon_{\boldsymbol{A}}{}^{A}\epsilon_{\boldsymbol{A}^{\prime }}{}^{A}\epsilon_{\boldsymbol{B}}{}^{B}\ldots\epsilon_{\boldsymbol{C}}{}^{C} \nabla_{AA^{\prime}}\mu_{B\ldots C}\] \[=\partial_{\boldsymbol{A}\boldsymbol{A}^{\prime}}\mu_{\boldsymbol {B}\ldots\boldsymbol{C}}-\Gamma_{\boldsymbol{A}\boldsymbol{A}^{\prime}}{}^{ \boldsymbol{Q}}{}_{\boldsymbol{B}}\mu_{\boldsymbol{Q}\ldots\boldsymbol{C}}- \cdots-\Gamma_{\boldsymbol{A}\boldsymbol{A}^{\prime}}{}^{\boldsymbol{Q}}{}_{ \boldsymbol{D}}\mu_{\boldsymbol{B}\ldots\boldsymbol{Q}},\] where \[\partial_{\boldsymbol{A}\boldsymbol{A}^{\prime}}\equiv e_{\boldsymbol{A} \boldsymbol{A}^{\prime}}{}^{\mu}\partial_{\mu} \tag{30}\] denotes the directional derivatives of the Newman-Penrose (NP) frame \(\{\boldsymbol{e}_{\boldsymbol{A}\boldsymbol{A}^{\prime}}\}\) associated to the spin dyad \(\{\epsilon_{\boldsymbol{A}}{}^{A}\}\), and \(\Gamma_{\boldsymbol{A}\boldsymbol{A}^{\prime}}{}^{\boldsymbol{C}}{}_{ \boldsymbol{D}}\) are the corresponding spin connection coefficients. In order to give more explicit expressions, it is convenient to define the following basis of symmetric \((0,2)\)-spinors with unprimed indices, \[\sigma^{0}_{AB}\equiv o_{A}o_{B}, \sigma^{1}_{AB}\equiv\iota_{(A}o_{B)}, \sigma^{2}_{AB}\equiv\iota_{A}\iota_{B}.\] The spinors \(\sigma^{i}_{AB}\), \(i\in\{0,1,2\}\), satisfy the orthogonality relations \[\sigma^{0}_{AB}(\sigma^{0})^{AB}=0, \sigma^{0}_{AB}(\sigma^{1})^{AB}=0, (\sigma^{0})^{AB}\sigma^{2}_{AB}=1,\] \[\sigma^{1}_{AB}(\sigma^{1})^{AB}=-\frac{1}{2}, \sigma^{1}_{AB}(\sigma^{2})^{AB}=0, \sigma^{2}_{AB}(\sigma^{2})^{AB}=0.\] For higher valence spinors, we similarly define, for half-integer \(s\), the basis \[\sigma^{0}_{A_{1}\ldots A_{2s}}\equiv o_{(A_{1}}\ldots o_{A_{2s})},\quad \sigma^{1}_{A_{1}\ldots A_{2s}}\equiv\iota_{(A_{1}}o_{A_{2}}\ldots o_{A_{2s}) },\quad\ldots\quad\sigma^{2s}_{A_{1}\ldots A_{2s}}\equiv\iota_{(A_{1}}\ldots \iota_{A_{2s})}.\] By contracting with \(\tau^{\boldsymbol{A}\boldsymbol{A}^{\prime}}\), the directional derivatives (30) can be decomposed into temporal and spatial parts as \[\partial_{\boldsymbol{A}\boldsymbol{A}^{\prime}}=\frac{1}{2}\tau_{\boldsymbol {A}\boldsymbol{A}^{\prime}}\partial-\tau^{\boldsymbol{Q}}{}_{\boldsymbol{A} ^{\prime}}\partial_{\boldsymbol{A}\boldsymbol{B}},\] where we write \(\partial\equiv\tau^{\boldsymbol{A}\boldsymbol{A}^{\prime}}\partial_{ \boldsymbol{A}\boldsymbol{A}^{\prime}}\) and \(\partial_{\boldsymbol{A}\boldsymbol{B}}\equiv\tau_{(\boldsymbol{A}}{}^{ \boldsymbol{A}^{\prime}}\partial_{\boldsymbol{B})\boldsymbol{A}^{\prime}}\). In the particular case of the F-gauge used in the main text, one has that \[\partial=\sqrt{2}\partial_{\tau},\] \[\partial_{\boldsymbol{AB}}=\sqrt{2}\sigma^{1}_{\boldsymbol{AB}}\left(-\tau \partial_{\tau}+\rho\partial_{\rho}\right)+\frac{1}{\sqrt{2}}\sigma^{0}_{ \boldsymbol{AB}}\boldsymbol{X}_{+}-\frac{1}{\sqrt{2}}\sigma^{2}_{\boldsymbol{AB }}\boldsymbol{X}_{-},\] \[\Gamma_{\boldsymbol{AA}^{\prime}\boldsymbol{CD}}=-\frac{1}{\sqrt{2}}\tau_{ \boldsymbol{AA}^{\prime}}\sigma^{1}_{\boldsymbol{CD}}.\] In particular, it follows that the spatial part \(\Gamma_{\boldsymbol{ABCD}}=\tau_{(\boldsymbol{B}}^{\boldsymbol{B}^{\prime}} \Gamma_{\boldsymbol{A})\boldsymbol{B}^{\prime}\boldsymbol{CD}}\) of the spin connection coefficients vanishes. 
Thus, the only non-trivial part of the connection is given by the _acceleration_\(f_{\boldsymbol{AB}}\equiv-\tau^{\boldsymbol{CC}^{\prime}}\Gamma_{\boldsymbol{ CC}^{\prime}\boldsymbol{AB}}\). In fact it is easy to see that \[f_{\boldsymbol{AB}}=\sqrt{2}\sigma^{1}_{\boldsymbol{AB}}.\] As a consequence of \(\Gamma_{\boldsymbol{ABCD}}=0\) and the fact that the components \(\sigma^{k}_{\boldsymbol{A}_{1}\ldots\boldsymbol{A}_{2s}}\) of \(\sigma^{k}_{A_{1}\ldots A_{2s}}\) are constants, it follows that \[\mathcal{D}_{\boldsymbol{AB}}\sigma^{k}_{\boldsymbol{A}_{1}\ldots\boldsymbol{ A}_{2s}}=\partial_{\boldsymbol{AB}}\sigma^{k}_{\boldsymbol{A}_{1}\ldots \boldsymbol{A}_{2s}}=0,\] for all \(0\leqslant k\leqslant 2s\). On the other hand, a calculation shows that \[\mathcal{D}\sigma^{k}_{\boldsymbol{A}_{1}\ldots\boldsymbol{A}_{2s}}=\frac{1}{ \sqrt{2}}\sigma^{k}_{\boldsymbol{A}_{1}\ldots\boldsymbol{A}_{2s}}.\] In order to write down the equations (29a) and (29b) in our gauge, we now expand \(\phi_{A_{1}\ldots A_{2s}}\) in terms of the basis elements \(\sigma^{k}_{A_{1}\ldots A_{2s}}\) as \[\phi_{A_{1}\ldots A_{2s}}=\sum_{k=0}^{2s}(-1)^{k}\binom{2s}{k}\phi_{2s-k} \sigma^{k}_{A_{1}\ldots A_{2s}}, \tag{31}\] where \[\phi_{0}\equiv\phi_{A_{1}\ldots A_{2s}}\sigma^{A_{1}}\ldots\sigma^{A_{2s}}, \quad\phi_{1}\equiv\phi_{A_{1}\ldots A_{2s}}\iota^{A_{1}}\sigma^{A_{2}}\ldots \sigma^{A_{2s}},\quad\ldots\quad\phi_{2s}\equiv\phi_{A_{1}\ldots A_{2s}}\iota^ {A_{1}}\ldots\iota^{A_{2s}}\] are the components of the massless spinor \(\phi_{A_{1}\ldots A_{2s}}\). That is, the component \(\phi_{k}\) is obtained from \(k\) contractions with \(\iota^{A}\) and \(2s-k\) contractions with \(o^{A}\). By plugging in the expansion (31) into the equations (29a) and (29b), one may derive the \(2s+1\) scalar evolution equations and \(2s-1\) scalar constraint equations satisfied by the components \(\phi_{k}\), \(0\leqslant k\leqslant 2s\). These turn out to be \[E_{0}\equiv(1-\tau)\partial_{\tau}\phi_{0}+\rho\partial_{\rho} \phi_{0}-\boldsymbol{X}_{-}\phi_{1}-s\phi_{0}=0, \tag{32a}\] \[E_{k}\equiv\partial_{\tau}\phi_{k}-\frac{1}{2}\boldsymbol{X}_{+} \phi_{k-1}-\frac{1}{2}\boldsymbol{X}_{-}\phi_{k+1}+(k-s)\phi_{k}=0,\qquad k=1, \,\ldots,\,2s-1,\] (32b) \[E_{2s}\equiv(1+\tau)\partial_{\tau}\phi_{2s}-\rho\partial_{\rho} \phi_{2s}-\boldsymbol{X}_{+}\phi_{2s-1}+s\phi_{2s}=0, \tag{32c}\] and \[C_{k}\equiv\tau\partial_{\tau}\phi_{k}-\rho\partial_{\rho}\phi_{k}+\frac{1}{2} \boldsymbol{X}_{-}\phi_{k+1}-\frac{1}{2}\boldsymbol{X}_{+}\phi_{k-1}=0,\qquad k =1,\,\ldots,\,2s-1.\] _Remark 14_.: Note that the equations \(C_{k}=0\) are termed _constraints_ despite containing \(\partial_{\tau}\) derivatives. Indeed, the quantities \(C_{k}\) are propagated as noted in Remark 12. The analysis in the main text makes use of certain combinations of the above evolution and constraint equations. We set \[A_{k}\equiv E_{k+1}+C_{k+1}\quad\text{for}\ \ k=0,\,\ldots,\,2s-2\quad\text{and} \quad A_{2s-1}\equiv E_{2s},\] and \[B_{0}\equiv E_{0}\quad\text{and}\quad B_{k}\equiv E_{k}-C_{k}\quad\text{for}\ \ k=1,\,\ldots,\,2s-1.\] Explicitly, these are \[A_{k}=(1+\tau)\partial_{\tau}\phi_{k+1}-\rho\partial_{\rho}\phi_{k+1}- \boldsymbol{X}_{+}\phi_{k}+(k+1-s)\phi_{k+1}=0, \tag{33a}\] \[B_{k}=(1-\tau)\partial_{\tau}\phi_{k}+\rho\partial_{\rho}\phi_{k}-\mathbf{X}_{-}\phi _{k+1}+(k-s)\phi_{k}=0, \tag{33b}\] for \(k=0,\,\ldots,\,2s-1\). 
The equations \(A_{k}=0\) and \(B_{k}=0\) are, respectively, transport equations along outgoing and incoming null geodesics in Minkowski space in the F-gauge (introduced in Section 2.2), and we shall refer to them as the _outgoing equations_ and _incoming equations_ respectively. A crucial feature of the outgoing and incoming equations is that they become degenerate at \(\tau=-1\) and \(\tau=+1\) respectively. Further, we observe here that for a given \(k\) the pair \((A_{k},B_{k})\) involves only the components \(\phi_{k}\) and \(\phi_{k+1}\). ### Wave equations It will be useful to note that the spinor \(\phi_{A_{1}\ldots A_{2s}}\) satisfies the wave equation. Applying \(\nabla_{P}{}^{A^{\prime}}\) to equation (28) and making use of the decomposition \[\nabla_{PA^{\prime}}\nabla_{Q}{}^{A^{\prime}}=\tfrac{1}{2}\epsilon_{PQ}\Box+ \Box_{PQ},\] one finds that \[\Box\phi_{A_{1}\ldots A_{2s}}+2\Box_{P}{}^{Q}\phi_{QA_{1}\ldots A_{2s-1}}=0,\] where \(\Box_{AB}\equiv\nabla_{A^{\prime}(A}\nabla_{B)}{}^{A^{\prime}}\) is the _Penrose box_ encoding the commutator of two \(\nabla_{AA^{\prime}}\) derivatives. The Penrose box may be expressed entirely in terms of the Weyl spinor \(\Psi_{ABCD}\) and the Ricci scalar \(\Lambda=\mathrm{R}/24\). In the F-gauge one has \[\Psi_{ABCD}=0\quad\text{and}\quad\Lambda=0.\] As a result, one obtains the simple wave equation \[\Box\phi_{A_{1}\ldots A_{2s}}=0. \tag{34}\] Using the splitting \[\nabla_{AA^{\prime}}=\frac{1}{2}\tau_{AA^{\prime}}\mathcal{D}-\tau^{Q}{}_{A^{ \prime}}\mathcal{D}_{AQ},\] in the F-gauge the wave operator (34) on \(\phi_{A_{1}\ldots A_{2s}}\) may be written as \[\Box\phi_{A_{1}\ldots A_{2s}}=\frac{1}{2}\mathcal{D}^{2}\phi_{A_{1}\ldots A_{ 2s}}+\mathcal{D}^{AB}\mathcal{D}_{AB}\phi_{A_{1}\ldots A_{2s}}-\sqrt{2}(\sigma ^{1})^{AB}\mathcal{D}_{AB}\phi_{A_{1}\ldots A_{2s}}.\] Scalarising this equation and writing the derivative operators \(\mathcal{D}\) and \(\mathcal{D}_{AB}\) in terms of \(\partial_{\tau}\) and \(\partial_{AB}\), one finds, after a calculation, that the components \(\phi_{k}\) satisfy the wave equations \[\mathcal{W}_{k}[\mathbf{\phi}]\equiv(1-\tau^{2})\partial_{\tau}^{2}\phi_{k}+2\tau \rho\partial_{\tau}\partial_{\rho}\phi_{k}-\rho^{2}\partial_{\rho}^{2}\phi_{k }-\frac{1}{2}\{\mathbf{X}_{+},\mathbf{X}_{-}\}\phi_{k}+2(s-k-\tau)\partial_{\tau}\phi _{k}+2(s-k)^{2}\phi_{k}=0\] for \(k=0,\,\ldots,\,2s\). For convenience, we also introduce the _reduced wave operator_ acting on a scalar \(\zeta\) as \[\blacksquare\mathbf{\zeta}\equiv(1-\tau^{2})\ddot{\zeta}+2\tau\rho\dot{\zeta}^{\prime}- \rho^{2}\zeta^{\prime\prime}-2\tau\dot{\zeta}-\frac{1}{2}\{\mathbf{X}_{+},\mathbf{X}_ {-}\}\zeta, \tag{35}\] where we denote \(\dot{\ }\equiv\partial_{\tau}\) and \(\dot{\ }\equiv\partial_{\rho}\). In this notation \[\mathcal{W}_{k}[\mathbf{\phi}]=\blacksquare\phi_{k}+\mathbf{L}\phi_{k},\qquad k=0,\, \ldots,\,2s, \tag{36}\] where \(\mathbf{L}\) is a linear lower order operator such that \([\mathbf{L},\partial_{\rho}]=0\). _Remark 15_.: In a standard (i.e. non-characteristic) initial value problem, the system (36) of wave equations needs to be supplemented with the initial data \((\phi_{k}^{\star},\dot{\phi}_{k}^{\star})\) where \(\phi_{k}^{\star}\equiv\phi_{k}(\tau_{\star})\) and \(\dot{\phi}_{k}^{\star}\equiv\dot{\phi}_{k}(\tau_{\star})\) for some initial hypersurface \(\mathcal{S}_{\star}=\{\tau=\tau_{\star}\}\), \(\tau_{\star}\in(-1,1)\). 
We observe that, since the system (36) arises from the first order system (32a)-(32c), here the time derivative part \(\dot{\phi}_{k}^{\star}\) of the data is always expressible in terms of \(\phi_{k}^{\star}\). ### A more general gauge Certain arguments in the main text require a version of the F-gauge in which only the critical sets \(\mathcal{I}^{\pm}\) are singular sets of the evolution equations, and \(\mathscr{I}^{\pm}\) is given by a non-horizontal hypersurface in a generalized coordinate plane \((\hat{\tau},\rho)\). Proceeding as in Section 2.2--but instead of writing \(x^{0}=\tau\rho\)--we define a new time coordinate \(\hat{\tau}\) by \[x^{0}=\hat{\tau}\kappa,\quad\text{where}\quad\kappa=\rho\mu,\] and \(\mu\) is a smooth function of \(\rho\) such that \(\mu(0)=1\), but \(\mu\not\equiv 1\). The specific version of the F-gauge introduced in Section 2.2 corresponds to the choice \(\mu\equiv 1\). This more general choice of the coordinate \(\hat{\tau}\) leads to the conformal factor \[\hat{\Theta}=\frac{\rho}{\mu}\big{(}1-\mu^{2}\hat{\tau}^{2}\big{)}=\kappa^{-1}\Xi, \tag{37}\] which, in turn, gives rise to the unphysical metric \[\hat{\boldsymbol{\eta}} =\hat{\Theta}^{2}\tilde{\boldsymbol{\eta}},\] \[=\mathbf{d}\hat{\tau}\otimes\mathbf{d}\hat{\tau}+\frac{\hat{\tau }\kappa^{\prime}}{\kappa}\big{(}\mathbf{d}\hat{\tau}\otimes\mathbf{d}\rho+ \mathbf{d}\rho\otimes\mathbf{d}\hat{\tau}\big{)}-\frac{(1-\hat{\tau}^{2}\kappa ^{\prime 2})}{\kappa^{2}}\mathbf{d}\rho\otimes\mathbf{d}\rho-\frac{1}{\mu^{2} }\boldsymbol{\sigma}.\] This conformal metric is supplemented with the following choice of frame: \[\hat{\boldsymbol{e}}_{\mathbf{0}\boldsymbol{0}^{\prime}} =\frac{1}{\sqrt{2}}\big{(}(1-\kappa^{\prime}\hat{\tau})\big{)} \boldsymbol{\partial}_{\hat{\tau}}+\kappa\boldsymbol{\partial}_{\rho},\] \[\hat{\boldsymbol{e}}_{\mathbf{1}\boldsymbol{1}^{\prime}} =\frac{1}{\sqrt{2}}\big{(}(1+\kappa^{\prime}\hat{\tau})\big{)} \boldsymbol{\partial}_{\hat{\tau}}-\kappa\boldsymbol{\partial}_{\rho},\] \[\hat{\boldsymbol{e}}_{\mathbf{0}\boldsymbol{1}^{\prime}} =-\frac{1}{\sqrt{2}}\mu\boldsymbol{X}_{+},\] \[\hat{\boldsymbol{e}}_{\mathbf{1}\boldsymbol{0}^{\prime}} =-\frac{1}{\sqrt{2}}\mu\boldsymbol{X}_{-},\] with associated non-vanishing spin connection coefficients \[\hat{\Gamma}_{\mathbf{0}\boldsymbol{0}^{\prime}\mathbf{0}\mathbf{1}}=\hat{ \Gamma}_{\mathbf{1}\boldsymbol{1}^{\prime}\mathbf{0}\mathbf{1}}=-\frac{1}{2 \sqrt{2}}\kappa^{\prime}\quad\text{and}\quad\hat{\Gamma}_{\mathbf{0} \boldsymbol{1}^{\prime}\mathbf{1}\mathbf{1}}=\hat{\Gamma}_{\mathbf{1} \boldsymbol{0}^{\prime}\mathbf{0}\mathbf{0}}=\frac{1}{\sqrt{2}}\rho\mu^{\prime}.\] The above is equivalent to the expression \[\Gamma_{\boldsymbol{A}\boldsymbol{A}^{\prime}\boldsymbol{C}\boldsymbol{D}}= \frac{1}{\sqrt{2}}\rho\mu^{\prime}\tau^{\boldsymbol{B}}{}_{\boldsymbol{A}^{ \prime}}\epsilon_{\boldsymbol{A}\boldsymbol{C}}\sigma^{1}_{\boldsymbol{B} \boldsymbol{D}}+\frac{1}{\sqrt{2}}\rho\mu^{\prime}\tau_{\boldsymbol{D} \boldsymbol{A}^{\prime}}\sigma^{1}_{\boldsymbol{A}\boldsymbol{C}}-\frac{1}{ \sqrt{2}}(\mu+\rho\mu^{\prime})\tau_{\boldsymbol{A}\boldsymbol{A}^{\prime}} \sigma^{1}_{\boldsymbol{C}\boldsymbol{D}}.\] The space spinor counterpart of the above expressions is given by \[\boldsymbol{\partial}=\sqrt{2}\boldsymbol{\partial}_{\tau},\] \[\boldsymbol{\partial}_{\boldsymbol{A}\boldsymbol{B}}=\sqrt{2} \sigma^{1}_{\boldsymbol{A}\boldsymbol{B}}\left(-\tau\kappa^{\prime}\boldsymbol {\partial}_{\tau}+\kappa\boldsymbol{\partial}_{\rho}\right)+\frac{1}{\sqrt{2} 
}\sigma^{0}_{\boldsymbol{A}\boldsymbol{B}}\boldsymbol{X}_{+}-\frac{1}{\sqrt{2 }}\sigma^{2}_{\boldsymbol{A}\boldsymbol{B}}\boldsymbol{X}_{-},\] \[\Gamma_{\boldsymbol{A}\boldsymbol{B}\boldsymbol{C}\boldsymbol{D} }=-\frac{1}{\sqrt{2}}\rho\mu^{\prime}(\epsilon_{\boldsymbol{A}\boldsymbol{C}} \sigma^{1}_{\boldsymbol{B}\boldsymbol{D}}+\epsilon_{\boldsymbol{B}\boldsymbol{D }}\sigma^{1}_{\boldsymbol{A}\boldsymbol{C}})-\frac{1}{\sqrt{2}}(\mu+\rho\mu^{ \prime})\epsilon_{\boldsymbol{A}\boldsymbol{B}}\sigma^{1}_{\boldsymbol{C} \boldsymbol{D}}.\] In particular, the acceleration is given by \[f_{\boldsymbol{A}\boldsymbol{B}}=\sqrt{2}(\mu+\rho\mu^{\prime})\sigma^{1}_{ \boldsymbol{A}\boldsymbol{B}}.\] Now, recalling that the conformal factor used in Section 2.2 is given by \(\Theta=\rho^{-1}\Xi\), it follows by comparison with equation (37) that \[\hat{\boldsymbol{\eta}}=\varpi^{2}\boldsymbol{\eta},\qquad\varpi\equiv\frac{ 1}{\mu}.\] The associated transformation of the antisymmetric spinor is then given by \[\epsilon_{AB}=\mu\hat{\epsilon}_{AB},\] with a scaling of the spin basis given by \[o_{A}=\mu^{1/2}\hat{o}_{A},\qquad\iota_{A}=\mu^{1/2}\hat{\iota}_{A}. \tag{38}\] The unphysical spin-\(s\) fields are related to the physical one by \[\phi_{A_{1}\cdots A_{2s}}=\Theta^{-1}\tilde{\phi}_{A_{1}\cdots A_{2s}},\qquad \hat{\phi}_{A_{1}\cdots A_{2s}}=\hat{\Theta}^{-1}\tilde{\phi}_{A_{1}\cdots A_{ 2s}},\] so that, in fact, one has \[\hat{\phi}_{A_{1}\cdots A_{2s}}=\mu\phi_{A_{1}\cdots A_{2s}}.\] The scaling (38) then implies that \[\hat{\phi}_{i}=\mu^{s+1}\phi_{i},\qquad i=0,\,\ldots,\,2s.\] _Remark 16_.: Observe that given that \(\mu\) is assumed to be a smooth function of \(\rho\), it follows that the regularity of the components of the spin-\(s\) field is not affected by the rescaling. Following an approach similar to the one used in Appendix B.2, one obtains the equations \[E_{0}\equiv(1-\kappa^{\prime}\tau)\partial_{\tau}\phi_{0}+\kappa \partial_{\rho}\phi_{0}-\mu\mathbf{X}_{-}\phi_{1}-s(\mu+\rho\mu^{\prime})\phi_{0}=0, \tag{39a}\] \[E_{k}\equiv\partial_{\tau}\phi_{k}-\frac{1}{2}\mu\mathbf{X}_{+}\phi_ {k-1}-\frac{1}{2}\mu\mathbf{X}_{-}\phi_{k+1}+(k-s)(\mu+\rho\mu^{\prime})\phi_{k}=0, \qquad k=1,\,\ldots,\,2s-1,\] (39b) \[E_{2s}\equiv(1+\kappa^{\prime}\tau)\partial_{\tau}\phi_{2s}- \kappa\partial_{\rho}\phi_{2s}-\mu\mathbf{X}_{+}\phi_{2s-1}+s(\mu+\rho\mu^{\prime} )\phi_{2s}=0, \tag{39c}\] and \[C_{k}\equiv\kappa^{\prime}\tau\partial_{\tau}\phi_{k}-\kappa\partial_{\rho} \phi_{k}+\frac{1}{2}\mu\mathbf{X}_{-}\phi_{k+1}-\frac{1}{2}\mu\mathbf{X}_{+}\phi_{k-1} +s\mu^{\prime}\phi_{k}=0,\qquad k=1,\,\ldots,\,2s-1.\] ## Appendix C F-expansions In this appendix we provide a detailed overview of the construction of _F-expansions_ for the solutions to the spin-\(s\) field equations: expansions which exploit the cylinder at spatial infinity \(\mathcal{I}\) being a total characteristic of the evolution equations [10, 11, 12]. 
### Interior equations on \(\mathcal{I}\)

The total characteristic nature of the cylinder at spatial infinity for the spin-\(s\) equations is reflected in the fact that the reduced wave operator \(\blacksquare\)--essentially the principal and sub-principal parts of the full wave operator \(\Box\) acting on \(\phi_{A_{1}\ldots A_{2s}}\) (see Appendix B.3)--reduces, upon evaluation on \(\mathcal{I}\), to the interior operator \(\stackrel{{\circ}}{{\blacksquare}}\equiv\blacksquare|_{\mathcal{I}}\), which acts on a scalar \(\zeta\) by \[\stackrel{{\circ}}{{\blacksquare}}\zeta=(1-\tau^{2})\ddot{\zeta}-2\tau\dot{\zeta}-\frac{1}{2}\{\boldsymbol{X}_{+},\,\boldsymbol{X}_{-}\}\zeta,\] that is, by the operator obtained from (35) by discarding the terms carrying a factor of \(\rho\). Accordingly, applying \(\partial_{\rho}^{p}\) to the wave equations \(\mathcal{W}_{k}[\boldsymbol{\phi}]=0\) and evaluating on \(\mathcal{I}\) (where \(\rho=0\)) yields, for each \(p\geqslant 0\), an equation intrinsic to the cylinder for the coefficient \(\phi_{k}^{(p)}\equiv(\partial_{\rho}^{p}\phi_{k})|_{\mathcal{I}}\), of the form \[(1-\tau^{2})\ddot{\phi}_{k}^{(p)}+2\big{(}(p-1)\tau+s-k\big{)}\dot{\phi}_{k}^{(p)}-\frac{1}{2}\{\boldsymbol{X}_{+},\,\boldsymbol{X}_{-}\}\phi_{k}^{(p)}+\lambda_{k,p}\,\phi_{k}^{(p)}=0, \tag{40}\] where the constant \(\lambda_{k,p}\) is determined by the lower order terms of \(\mathcal{W}_{k}\). Since no \(\rho\)-derivatives appear in (40), the coefficients \(\phi_{k}^{(p)}\) can be determined order by order in \(p\) by solving equations intrinsic to \(\mathcal{I}\).

### Expansions in terms of \(T_{m}{}^{j}{}_{k}\)

Given that the spin weight of component \(\phi_{k}\) is \((k-s)\), and writing \(m=2q\), we look, for \(p\geqslant|s-k|\), for solutions \(\phi_{k}^{(p)}\) to (40) of the form \[\phi_{k}^{(p)}=\sum_{q=|s-k|}^{p}\sum_{j=0}^{2q}a_{k,p;q,j}T_{2q}{}^{j}{}_{q+s-k}, \tag{41}\] where the coefficients \(a_{k,p;q,j}\) are functions of \(\tau\).

_Remark 17_.: This Ansatz for the coefficients \(\phi_{k}^{(p)}\) is motivated by analogy to the analysis in [11, 12] of time symmetric initial data sets for the Einstein field equations admitting a conformal metric which is analytic at spatial infinity. Depending on the particular application at hand, the Ansatz can be suitably generalised. For example, the analysis in [10] of BMS charges for spin-1 and spin-2 fields considers, e.g. for the coefficient \(\phi_{2}^{(0)}\) of the spin-2 field, the expansion \[\phi_{2}^{(0)}=\sum_{q=0}^{\infty}\sum_{j=0}^{2q}a_{2,0;q,j}T_{2q}{}^{j}{}_{q+s-2}.\] In that particular case one finds that the coefficient \(a_{2,0;q,j}\) decomposes into a sum of a regular part and a part which has logarithmic divergences at \(\tau=\pm 1\). The part of the solution with logarithmic terms can be eliminated by fine-tuning the initial data.

_Remark 18_.: In the case \(s=\frac{1}{2}\) the expansion (41) takes the particular form \[\phi_{k}^{(p)}=\sum_{q=|k-\frac{1}{2}|}^{p}\sum_{j=0}^{2q}a_{k,p;q,j}T_{2q}{}^{j}{}_{q+\frac{1}{2}-k},\qquad k\in\{0,1\},\] with the understanding that \(q\) is a proper half-integer (it does not simplify to an integer), as is the case whenever the spin \(s=\frac{n+1}{2}\), \(n\in\mathbb{N}\), is itself a proper half-integer. In particular, the indices of \(T_{i}{}^{j}{}_{k}\) are always integers. Comparing the above expansion with the relation (27) shows that in this case one has expansions in terms of the harmonics \({}_{\pm\frac{1}{2}}Y_{lm}\). The expansions for fields with higher half-integer spins are analogous.
From the expansion (41) and equation (40), it follows then that the coefficient \(a_{k,p;q,j}\) satisfies the ODE \[(1-\tau^{2})\ddot{a}_{k,p;q,j}+2\big{(}(p-1)\tau+s-k\big{)}\dot{a}_{k,p;q,j}+ \big{(}q^{2}+q-p^{2}-p\big{)}a_{k,p;q,j}=0, \tag{42}\] where the integers \((k,p,q,j)\) are such that \[0\leqslant k\leqslant 2s,\qquad|s-k|\leqslant p,\qquad|s-k|\leqslant q \leqslant p,\quad\text{and}\quad 0\leqslant j\leqslant 2q.\] Equation (42) is an example of a _Jacobi ordinary differential equation_. Jacobi equations are usually parametrised in the form \[D_{(n,\alpha,\beta)}a\equiv(1-\tau^{2})\ddot{a}+\big{(}\beta-\alpha-(\alpha+ \beta+2)\tau\big{)}\dot{a}+n(n+\alpha+\beta)a=0. \tag{43}\] A direct comparison between equations (42) and (43) gives \[\alpha =-p+(k-s), \tag{44a}\] \[\beta =-p-(k-s),\] (44b) \[n =n_{1}\equiv p+q,\qquad\text{or}\qquad n=n_{2}\equiv p-q-1. \tag{44c}\] As we shall see in the following subsection, the qualitative nature of the solutions to equation (42) differs depending on whether \(|s-k|\leqslant q<p\) or \(q=p\); the case \(q=p\) corresponds to the harmonic which acquires logarithmic singularities at \(\tau=\pm 1\). #### c.2.1 Properties of the Jacobi differential equation An extensive discussion of the solutions of the Jacobi equation can be found in the monograph [12] from which we borrow a number of identities. The solutions to (43) are given by the _Jacobi polynomials_\(P_{n}^{(\alpha,\beta)}(\tau)\), of degree \(n\), defined by \[P_{n}^{(\alpha,\beta)}(\tau)\equiv\sum_{l=0}^{n}\binom{n+\alpha}{l}\binom{n+ \beta}{n-l}\left(\frac{\tau-1}{2}\right)^{n-l}\left(\frac{\tau+1}{2}\right)^{l},\] where for \(z\in\mathbb{C}\), \(r\in\mathbb{N}\) the binomial coefficient is defined by \[\binom{z}{r}=\left\{\frac{\Gamma(z+1)}{\Gamma(r+1)\Gamma(z-r+1)}\quad r\geqslant 0,\right.\] In particular, one has \[P_{0}^{(\alpha,\beta)}(\tau)=1,\] and \[P_{n}^{(\alpha,\beta)}(-\tau)=(-1)^{n}P_{n}^{(\beta,\alpha)}(\tau).\] The differential operator defined by (43) exhibits the following symmetries, \[D_{(n,\alpha,\beta)}\left(\left(\frac{1-\tau}{2}\right)^{-\alpha }a(\tau)\right) =\left(\frac{1-\tau}{2}\right)^{-\alpha}D_{(n+\alpha,-\alpha, \beta)}a(\tau), \tag{45a}\] \[D_{(n,\alpha,\beta)}\left(\left(\frac{1+\tau}{2}\right)^{-\beta }a(\tau)\right) =\left(\frac{1+\tau}{2}\right)^{-\beta}D_{(n+\beta,\alpha,-\beta) }a(\tau),\] (45b) \[D_{(n,\alpha,\beta)}\left(\left(\frac{1-\tau}{2}\right)^{-\alpha }\left(\frac{1+\tau}{2}\right)^{-\beta}a(\tau)\right)=\left(\frac{1-\tau}{2} \right)^{-\alpha}\left(\frac{1+\tau}{2}\right)^{-\beta}D_{(n+\alpha+\beta,- \alpha,-\beta)}a(\tau), \tag{45c}\] which hold for \(|\tau|<1\), arbitrary \(C^{2}\)-functions \(a(\tau)\), and arbitrary real values of the parameters \(\alpha\), \(\beta\), \(n\). 
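For concreteness, the fact that the Jacobi polynomials \(P_{n}^{(\alpha,\beta)}\) solve equations of the form (43) can be spot-checked symbolically. The snippet below is our own illustration (the particular parameter values are arbitrary) and uses SymPy's built-in Jacobi polynomials.

```python
import sympy as sp

tau = sp.symbols('tau')

def jacobi_operator(a, n, alpha, beta):
    """Left-hand side of the Jacobi equation (43) applied to the expression a(tau)."""
    return ((1 - tau**2) * sp.diff(a, tau, 2)
            + (beta - alpha - (alpha + beta + 2) * tau) * sp.diff(a, tau)
            + n * (n + alpha + beta) * a)

# Spot-check for a few arbitrary parameter choices.
for n, alpha, beta in [(3, 0, 0),
                       (4, sp.Rational(-1, 2), sp.Rational(-1, 2)),
                       (5, 2, 1)]:
    P = sp.jacobi(n, alpha, beta, tau)
    residual = sp.simplify(jacobi_operator(P, n, alpha, beta))
    print(n, alpha, beta, residual)
```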
An alternative definition of the Jacobi polynomials, convenient for verifying when the functions vanish identically, is given by \[P_{n}^{(\alpha,\beta)}(\tau)=\frac{1}{n!}\sum_{k=0}^{n}c_{k}\left(\frac{\tau- 1}{2}\right)^{k},\] with \[c_{0} \equiv(\alpha+1)(\alpha+2)\cdots(\alpha+n),\] \[\qquad\qquad\qquad\qquad\vdots\] \[c_{k} \equiv\frac{n!}{k!(n-k)!}(\alpha+k+1)(\alpha+k+2)\cdots(\alpha+n)\] \[\qquad\qquad\qquad\qquad\times(n+1+\alpha+\beta)(n+2+\alpha+\beta )\cdots(n+k+\alpha+\beta),\] \[\qquad\qquad\qquad\qquad\qquad\vdots\] \[c_{n} \equiv(n+1+\alpha+\beta)(n+2+\alpha+\beta)\cdots(2n+\alpha+\beta).\] #### c.2.2 Solutions for \(|s-k|\leqslant q<p\) In the case \(|s-k|\leqslant q<p\), one has from direct inspection of the formulae above, that the polynomial \(P_{n_{1}}^{(\alpha,\beta)}(\tau)\) with \((n_{1},\,\alpha,\,\beta)\) as given by (44) vanishes identically, while \[Q_{2}(\tau)\equiv P_{n_{2}}^{(\alpha,\beta)}(\tau)\] gives a polynomial of degree \(n_{2}\). A further non-trivial solution can be written down using the identity (45a); one finds a polynomial of degree \(n_{1}\) given by \[Q_{1}(\tau)\equiv\left(\frac{1-\tau}{2}\right)^{p-k+s}P_{q+k-s}^{(-\alpha,\beta )}(\tau).\] Since \(n_{2}<n_{1}\), the solutions \(Q_{1}\) and \(Q_{2}\) are linearly independent. Yet another solution can be obtained using identity (45b), namely \[Q_{3}(\tau)\equiv\left(\frac{1+\tau}{2}\right)^{p+k-s}P_{q-k+s}^{(\alpha,- \beta)}(\tau),\] which, again, is a polynomial of degree \(n_{1}\). It can be verified that \(Q_{1}\) and \(Q_{3}\) are also linearly independent. Making use of these solutions one can write the general solution to equation (42), for \(|s-k|\leqslant q<p\), in the symmetric form \[a_{k,p;q,j}(\tau)=\mathfrak{c}_{k,p;q,j}\left(\frac{1-\tau}{2}\right)^{p-k+s}P _{q+k-s}^{(-\alpha,\beta)}(\tau)+\mathfrak{d}_{k,p;q,j}\left(\frac{1+\tau}{2} \right)^{p+k-s}P_{q-k+s}^{(\alpha,-\beta)}(\tau), \tag{46}\] with \(\mathfrak{c}_{k,p;q,j}\) and \(\mathfrak{d}_{k,p;q,j}\) constants to be determined from the initial conditions. In particular, we have the following lemma: _Lemma C.1_.: For \(|s-k|\leqslant q<p\), the solutions to the Jacobi equation (42) are analytic at \(\tau=\pm 1\). _Remark 19_.: A direct inspection of the formulae given above show that they do not give non-vanishing solutions if \(p=q\). #### c.2.3 Solutions for \(p=q\) In order to obtain solutions in the case \(p=q\), we make use of identity (45c) with \(n=n_{1}\) and look for solutions of the form \[a_{k,p;p,j}(\tau)=\left(\frac{1-\tau}{2}\right)^{-\alpha}\left(\frac{1+\tau}{ 2}\right)^{-\beta}b(\tau),\] with \(b(\tau)\) satisfying the equation \[D_{(0,p-k+2,p+k-s)}b(\tau)=(1-\tau^{2})\ddot{b}(\tau)+2\big{(}k-s-(p+1)\tau \big{)}\dot{b}(\tau)=0.\] This can be integrated to give \[b(\tau)=\mathfrak{c}_{k,p;p,j}+\mathfrak{d}_{k,p;p,j}\int_{0}^{\tau}\frac{ \mathrm{d}\varsigma}{(1+\varsigma)^{p-s+k+1}(1-\varsigma)^{p+s-k+1}}, \tag{47}\] with \(\mathfrak{c}_{k,p;p,j}\) and \(\mathfrak{d}_{k,p;p,j}\) constants. Thus, the general solution to (42) for \(p=q\) can be written as \[a_{k,p;p,j}(\tau)=\left(\frac{1-\tau}{2}\right)^{p-k+s}\left(\frac{1+\tau}{2} \right)^{p+k-s}\left(\mathfrak{c}_{k,p;p,j}+\mathfrak{d}_{k,p;p,j}\int_{0}^{ \tau}\frac{\mathrm{d}\varsigma}{(1+\varsigma)^{p-s+k+1}(1-\varsigma)^{p+s-k+1 }}\right). 
\tag{48}\] Now, expanding the integrand of (47) in partial fractions, one sees that \(a_{k,p;p,j}(\tau)\) contains terms of the form \[(1-\tau)^{p-k+s}\ln(1-\tau)\quad\text{and}\quad(1+\tau)^{p+k-s}\ln(1+\tau),\] which are, respectively, \(C^{p-k+s-1}\) and \(C^{p+k-s-1}\) at \(\tau=\pm 1\). These are the only singular terms in the solution (48). The rest of the solution is polynomial in \(\tau\), and thus analytic at \(\tau=\pm 1\). The solutions in the case \(p=q\) are therefore not smooth at the critical sets \(\mathcal{I}^{\pm}\) of the conformal boundary. _Remark 20_.: The logarithmic divergences in (48) can be set to vanish by fine-tuning initial data so that \(\mathfrak{d}_{k,p;p,j}=0\). Expansions near \(\mathscr{I}^{-}\) The approach in the main text makes use of expansions not only near \(\mathcal{I}\) but also near \(\mathscr{I}^{-}\). The expansions near \(\mathscr{I}^{-}\) are computed from characteristic data. While the construction of the expansions near \(\mathcal{I}\) in Appendix C is completely general, it turns out that to construct the expansions near \(\mathscr{I}^{-}\) it is convenient to require the fields to possess a certain amount of regularity at \(\mathscr{I}^{-}\). ### Leading order terms We begin by observing that the component \(\phi_{0}\) encodes the freely specifiable characteristic initial data on \(\mathscr{I}^{-}\). This can be easily seen by evaluating the equations \(A_{k}=0\) at \(\mathscr{I}^{-}=\{\tau=-1\}\). One gets \[\rho\partial_{\rho}\phi_{k+1}|_{\mathscr{I}^{-}}+\mathbf{X}_{+}\phi_{k}|_{\mathscr{ I}^{-}}-(k+1-s)\phi_{k+1}|_{\mathscr{I}^{-}}=0,\qquad 0\leqslant k\leqslant 2s-1,\] so that the components \(\phi_{k}\) for \(1\leqslant k\leqslant 2s\) can be computed in a hierarchical manner by solving ODEs along the generators of \(\mathscr{I}^{-}\). We assume from the outset that \(\phi_{0}|_{\mathscr{I}^{-}}\) is bounded as \(\rho\to 0\). In particular, \(\mathbf{X}_{+}\phi_{0}=\mathcal{O}(1)\) as \(\rho\to 0\). We then set \[\phi_{1}|_{\mathscr{I}^{-}}=-\rho^{-(s-1)}\int_{0}^{\rho}\varrho^{s-2}\mathbf{X}_{ +}\phi_{0}|_{\mathscr{I}^{-}}(\varrho)\,\mathrm{d}\varrho,\] which, by the above assumption, satisfies \(\phi_{1}|_{\mathscr{I}^{-}}=\mathcal{O}(1)\) as \(\rho\to 0\). This therefore gives a solution to the above equation with \(k=0\), which is continuous on \(\mathscr{I}^{-}_{\rho_{*}}\cup\mathcal{I}^{-}\). We write down the other components hierarchically to obtain \[\phi_{k+1}|_{\mathscr{I}^{-}}=-\rho^{-(s-k-1)}\int_{0}^{\rho}\varrho^{s-k-2}\bm {X}_{+}\phi_{k}|_{\mathscr{I}^{-}}(\varrho)\,\mathrm{d}\varrho\] for \(0\leqslant k\leqslant 2s-1\). We observe that all components \(\phi_{k}|_{\mathscr{I}^{-}}\) defined in this way inherit the decay rate towards \(\mathcal{I}^{-}\) prescribed for \(\phi_{0}\). For instance, if \(\phi_{0}|_{\mathscr{I}^{-}}\sim\rho^{\gamma}\) for some \(\gamma\geqslant 0\), then \(\phi_{k}|_{\mathscr{I}^{-}}\sim\rho^{\gamma}\) for all \(0\leqslant k\leqslant 2s\). With the knowledge of all components \(\phi_{k}|_{\mathscr{I}^{-}}\), \(0\leqslant k\leqslant 2s\), one can then proceed to compute \((\partial_{\tau}\phi_{0})|_{\mathscr{I}^{-}}\). For this we evaluate the equation \(B_{0}=0\) at \(\tau=-1\) to obtain \[2(\partial_{\tau}\phi_{0})|_{\mathscr{I}^{-}}+\rho\partial_{\rho}\phi_{0}|_{ \mathscr{I}^{-}}-\mathbf{X}_{-}\phi_{1}|_{\mathscr{I}^{-}}-s\phi_{0}|_{\mathscr{I} ^{-}}=0,\] which is just an algebraic equation for \((\partial_{\tau}\phi_{0})|_{\mathscr{I}^{-}}\). 
Note that, with reference to the decay \(\gamma\) assumed for \(\phi_{0}|_{\mathscr{I}^{-}}\), the time derivative term decays at the same rate, \(\partial_{\tau}\phi_{0}|_{\mathscr{I}^{-}}\sim\rho^{\gamma}\). ### Higher order terms More generally, suppose \((\partial_{\tau}^{q}\phi_{0})|_{\mathscr{I}^{-}}\) is known for some \(q\geqslant 1\). Taking \(q\)\(\tau\)-derivatives of \(A_{k}=0\) and evaluating at \(\mathscr{I}^{-}\), one finds \[\rho\partial_{\rho}(\partial_{\tau}^{q}\phi_{k+1})|_{\mathscr{I}^{-}}+\mathbf{X}_{ +}(\partial_{\tau}^{q}\phi_{k})|_{\mathscr{I}^{-}}-(k+1+q-s)(\partial_{\tau}^ {q}\phi_{k+1})|_{\mathscr{I}^{-}}=0,\] \(0\leqslant k\leqslant 2s-1\), and so one can compute the derivatives \((\partial_{\tau}^{q}\phi_{k+1})|_{\mathscr{I}^{-}}\) by solving ODEs along the generators of \(\mathscr{I}^{-}\). Specifically, we set \[\partial_{\tau}^{q}\phi_{k+1}|_{\mathscr{I}^{-}}=-\rho^{k+1+q-s}\int_{0}^{ \rho}\varrho^{-(k+2+q-s)}\mathbf{X}_{+}(\partial_{\tau}^{q}\phi_{k})|_{\mathscr{I }^{-}}(\varrho)\,\mathrm{d}\varrho\] for \(0\leqslant k\leqslant 2s-1\). Now, once \((\partial_{\tau}^{q}\phi_{k+1})|_{\mathscr{I}^{-}}\) for \(0\leqslant k\leqslant 2s\) are known, one may use the condition \[(\partial_{\tau}^{q}B_{0})|_{\mathscr{I}^{-}}=0\]
2305.08159
Altered Topological Properties of Functional Brain Network Associated with Alzheimer's Disease
Functional Magnetic Resonance Imaging (fMRI) is commonly utilized to study human brain activity, including abnormal functional properties related to neurodegenerative diseases. This study aims to investigate the differences in the topological properties of functional brain networks between individuals with Alzheimer's Disease (AD) and normal controls. A total of 590 subjects, consisting of 175 with AD dementia and 415 age-, gender-, and handedness-matched controls, were included. The topological properties of the brain network were quantified using graph-theory-based analyses. The results indicate abnormal network integration and segregation in the AD group. These findings enhance our understanding of AD pathophysiology from a functional brain network structure perspective and may aid in identifying AD biomarkers. Supplementary data to aid in the validation of this research are available at https://github.com/YongchengYAO/AD-FunctionalBrainNetwork.
Yongcheng Yao
2023-05-14T13:39:12Z
http://arxiv.org/abs/2305.08159v2
# Altered Topological Properties of Functional Brain Network Associated with Alzheimer's Disease ###### Abstract Functional Magnetic Resonance Imaging (fMRI) is commonly utilized to study human brain activity, including abnormal functional properties related to neurodegenerative diseases. This study aims to investigate the differences in the topological properties of functional brain networks between individuals with Alzheimer's Disease (AD) and normal controls. A total of 590 subjects, consisting of 175 with AD dementia and 415 age-, gender-, and handedness-matched controls, were included. The topological properties of the brain network were quantified using graph-theory-based analyses. The results indicate abnormal network integration and segregation in the AD group. These findings enhance our understanding of AD pathophysiology from a functional brain network structure perspective and may aid in identifying AD biomarkers. Supplementary data to aid in the validation of this research are available at [https://github.com/YongchengYAO/AD-FunctionalBrainNetwork](https://github.com/YongchengYAO/AD-FunctionalBrainNetwork). brain network network topology Alzheimer's disease ## 1 Introduction Alzheimer's Disease (AD) is a chronic neurodegenerative disease prevalent in the elderly population, characterized by cognitive decline, language problems, memory disturbances (especially short-term memory), and disorientation. With disease progression, severe bodily dysfunction and ultimately death can occur. AD is the most common form of dementia, accounting for approximately half of the cases. A rare form of AD is early-onset familial Alzheimer's disease, which is associated with the amyloid precursor protein and presenilin genes. Another form of AD is sporadic AD, affecting over 15 million people worldwide, with its cause primarily unknown. Risk factors for AD include decreased brain size, low education level, low mental ability, head injury, and vascular-disease-related factors [1]. The amyloid hypothesis proposes that extracellular amyloid beta deposits cause AD [2]. The tau hypothesis suggests that AD results from tau protein dysfunction, with neurofibrillary tangles formed by tau protein destroying the neuron's transport system [3]. Functional magnetic resonance imaging (fMRI) offers a non-invasive approach for diagnosing, evaluating therapeutic interventions, and investigating the mechanisms of AD. In brain imaging, a typical fMRI utilizes the Blood Oxygenation Level Dependent (BOLD) contrast to indirectly reflect brain activity through signal fluctuations. Early studies of brain function primarily relied on task-based fMRI, where fMRI brain activity was acquired during specific functional tasks [4, 5]. In 1995, Biswal demonstrated that resting-state fMRI signals could depict spontaneous neuronal activity without the need for external task experiments [6]. An increasing number of studies have utilized resting-state fMRI to investigate brain function and disease-related abnormalities. In recent years, resting-state fMRI has become the most widely used neuroimaging technique in AD-related studies [7, 8, 9, 10, 11]. Graph-theory-based analysis is a prevalent method in brain function studies. To some extent, the graph-based method is also a connectivity analysis, as the interactions among brain regions are described in a network (a graph). In contrast to connectivity analysis, a graph-based analysis investigates the network's topological properties instead of network connections. 
Topological properties are quantified by various network metrics [12]. Previous studies using functional MRI and connectivity analysis have reported disrupted functional connectivity in AD patients. Key regions related to neural degeneration in AD patients include the Hippocamp [8, 13], Prefrontal Cortex [14], parietal Lobe [15], Precuneus [16], and Posterior Cingulate Gyrus [17, 18, 16]. In a systematic review and meta-analysis of 43 studies with 1,363 subjects, regions within the default mode, salience, and limbic networks consistently showed abnormalities in connectivity [19]. The default mode network has been a focal point of AD studies since changes in functional connectivity in regions of the default mode network have been observed in preclinical AD (from very mild [20] to moderate AD [21]). These findings are even more compelling for subjects with severe AD. It has been reported that the map of decreased functional connectivity is similar to the amyloid deposition map and the tau-protein deposition map [22], indicating that amyloid deposition [23] and tau protein deposition may play a causative role in the dysfunction of AD patients. Dysfunction in the default mode network is a prominent feature of AD at all stages. In contrast to the disruption of functional connections in the default mode network, increased functional connections in the salience network have also been reported [24], which have been validated by subsequent studies [25, 22]. Brier _et al._[25] showed that the pattern of aberrant functional connectivity changes with AD progression. At the early stage of AD (with very mild symptoms), increased functional connectivity within the salience network and decreased functional connectivity within the default mode network, executive control network, and sensorimotor network were observed. However, at the mild to moderate stage, decreased functional connections were discovered in all networks. Brier and colleagues [22] have proposed a model of disease spread to account for changes in functional disruption in AD. ## 2 Graph-Theory-Based Analysis ### Related Works Graph TypeThe analysis of a graph depends on how it is defined. A graph can be defined as weighted or weightless (binary) based on whether a network connection has weight or not. Additionally, a graph can be directional or bidirectional based on its directionality. Therefore, there are four types of graphs: weighted directional graph, weighted bidirectional graph, weightless directional graph, and weightless bidirectional graph. A weightless graph is a special case of a weighted graph in which all connections have the same weight of 1. Similarly, a bidirectional graph is a unique form of a directional graph, where each bidirectional connection can be viewed as two opposite directional connections. In summary, a weighted and directional graph is the most common form, and the other three types of graphs are special cases under additional conditions. Network SegregationThe ability of a network to organize nodes into clusters or modules is termed network segregation. Clusters or modules are groups of nodes that are densely intra-connected and sparsely inter-connected. Measures of segregation [26] are primarily designed to identify and quantify these structures, such as the size and number of clusters, or other metrics reflecting the properties of connectivity within a cluster. 
From the perspective of the functional brain network, functional segregation is the ability to organize the brain into distinct functional subnetworks, each of which is responsible for a relatively unique brain function. Therefore, the functional segregation of the human brain is physiologically meaningful and interpretable. Network IntegrationThe degree of interconnectivity among nodes is reflected in network integration. Measures of integration [26] generally quantify how easily nodes communicate with one another, with many of these measures based on the concept of "path." In the context of the functional brain network, functional integration can be viewed as the ability to efficiently retrieve and combine information distributed across different functional subnetworks. Network MetricsIn this study, we first constructed weightless and bidirectional graphs (brain networks) from MRI data. Second, we quantified the networks' topological properties using graph-theory-based metrics. Finally, significant differences in metrics between groups can depict the topological changes of the networks. The graph-theory-based metrics used, as shown in Figure 1, are only for weightless bidirectional graphs, where each edge is binary and bidirectional. They are the same as those introduced in an previous review paper [12], which includes elaborate explanations and definitions. Additionally, the metric definitions for weighted and directional graphs are also listed (in Table A1 of paper [12]) and compared with their weightless and bidirectional counterparts. It is noteworthy that metrics of different types of networks cannot be directly compared due to their distinct definitions. ### Data #### 2.2.1 OASIS-3 Dataset The subjects involved in our study were obtained from the OASIS-3 public dataset [27], which is the latest release of the Open Access Series of Imaging Studies (OASIS). OASIS-3 is a large longitudinal dataset that provides the scientific community with open access not only to multi-modal neuroimaging data but also to various clinical data and cognitive assessments. All data in OASIS-3 are available on the OASIS Brains project website (www.oasis-brains.org). We used the OASIS-3 dataset in this study for several reasons: 1. Compared to other open-access datasets such as the Alzheimer's Disease Neuroimaging Initiative (ADNI) [28, 29, 30, 31] and the Harvard Aging Brain Study (HABS) [32], OASIS-3 is relatively larger in terms of the number of participants (over 1000) and MR sessions (over 2100). It is a retrospective compilation of neuroimaging and clinical data collected across multiple ongoing projects in the past 30 years, involving 609 cognitively normal (normal ageing) adults and 489 subjects at various stages of cognitive decline. 2. It is easy to explore and download imaging and clinical data. All data in OASIS-3 are hosted by XNAT (the extensible Neuroimaging Archive Toolkit) [33], an informatics platform for data management and sharing. 3. The dataset covers all common modalities of neuroimaging in the field of ageing and cognitive decline. Since the dataset is tailored for research on normal ageing and Alzheimer's disease, it provides open access to over 2100 MR sessions, including T1w, T2w, FLAIR, ASL, SWI, time-of-flight, resting-state BOLD fMRI, and DTI. One of the advantages of OASIS-3 is the degree of data integration, as for most sessions, the majority of modalities are available. #### 2.2.2 Inclusion Criteria The first step of this study is to select data from a large dataset. 
Since this is a between-group MRI study, the criteria are related to the parameters of the MR images and to demographic information. The inclusion criteria are as follows.

1. Only data from one session were downloaded for each individual. Although longitudinal data are available, we only chose one BOLD-fMRI and one T1w MRI for each subject. The reason is that this study mainly focuses on investigating the significant differences in topological properties of the functional brain network between the two groups, rather than on longitudinal changes such as disease progression. This rule ensures that the subjects enrolled in this study are independent, which is an assumption of parametric statistical analyses.

2. The acquisition protocols of the BOLD-fMRIs should be the same. Multi-site and multi-scanner MR studies are challenging because of the variabilities introduced by differences in imaging protocols and scanners. To minimize such variabilities, we required that all fMRI data be collected from Siemens scanners under the same imaging protocol. However, such a restriction was not applied to the T1w MRI data, because the structural MRI data were used for brain tissue segmentation to facilitate the removal of BOLD signal artefacts, and the segmentation algorithm used in this study performs well on T1w images with various acquisition parameters.

3. For each individual, one BOLD-fMRI should be matched with one T1w MRI from the same session. If there is no T1w MRI available in the same session, then the BOLD-fMRI should be discarded and the subject excluded. Since functional MRI data have a low spatial resolution (usually on the scale of 3 mm) while T1-weighted structural MRI data have a relatively high spatial resolution (on the scale of 1 mm), T1w MRI can better delineate the structure of the brain, serving as a spatial reference for BOLD-fMRI. Therefore, T1w MRI and BOLD-fMRI are usually collected in the same session; in other words, the ideal structural reference is the one from the same scanning session.

4. There should be no significant difference in age, gender, or handedness between the normal control and Alzheimer's Disease groups. Significant alterations of brain structure and function have been reported in normally ageing adults, and gender differences and effects of handedness have also been found in functional activation and connectivity. Thus, it is critical to ensure that the two groups are age-, gender-, and handedness-matched.

Figure 1: **Illustration of Graph Theory Metrics.** (1) A visualization of graph \(N\) as interconnected nodes. (2-7) Illustrations of degree / cost, clustering coefficient, shortest path length, global efficiency, local efficiency, and betweenness centrality.

#### 2.2.3 MR Image Acquisition Parameters

Resting-state BOLD MR images were acquired using a single-shot FID EPI sequence on a 3-Tesla scanner (Siemens, TrioTim or Biograph_mMR), with the following parameters: TR = 2200 \(ms\); TE = 27 \(ms\); FA = \(90^{\circ}\); slice thickness = 4 \(mm\); slice gap = 0 \(mm\); number of slices (z) = 36; in-plane resolution = 4 x 4 \(mm^{2}\); in-plane matrix size (x, y) = 64 x 64; number of time points = 164.
T1-weighted MR images were acquired using a single-shot TurboFLASH sequence on the same 3-Tesla scanner (Siemens, TrioTim or Biograph_mMR), with the following parameters: TR = 2400 \(ms\); TE = 3 \(ms\); FA = \(8^{\circ}\); slice thickness = 1 \(mm\); slice gap = 0 \(mm\); number of slices (z) = 176; in-plane resolution = 1 x 1 \(mm^{2}\); in-plane matrix size (y, z) = 256 x 256. #### 2.2.4 MR Image Labeling Clinical data, including the Clinical Dementia Rating (CDR) [34][35] and diagnoses, can be downloaded from the "ADRC Clinical Data" field in the OASIS-3 data browser. The CDR is a 5-point scale used to assess the severity of dementia by characterizing 6 categories of cognitive and functional performance relevant to Alzheimer's Disease and related demenities. These categories are memory, orientation, judgment and problem-solving, community affairs, home and hobbies, and personal care. The rating for each category is obtained through a semi-structured interview, known as the CDR Assessment Protocol. By combining the ratings for each category, a global CDR score can be calculated using the CDR-assignment algorithm [35]. Since clinicians base their diagnoses not only on the global CDR score but also on other clinical tests, we used these professional diagnoses to categorize each BOLD-fMRI into either an AD group or a normal controls (NC) group. Specifically, we used diagnoses from the "dx1" field to identify AD and NC data. A complete list of unique diagnosis remarks and their corresponding labels can be found in Table 1. However, the dates of clinical assessments and MRI scanning are not the same, so it is necessary to link the appropriate clinical diagnoses to each MR image. To accomplish this, we first calculated the relative date of clinical screening from the clinical data identifier. Similarly, the relative date of MRI scanning can be calculated from the name of the MR session. For instance, a clinical data ID "OAS30001_ClinicalData_d0339" indicates that the clinical assessments of subject "OAS30001" were performed on the \(339^{th}\) day after their first visit. Similarly, an MR session name "OAS30001_MR_d0129" indicates that the MRI data of subject "OAS30001" were collected on the \(129^{th}\) day after their first visit. Secondly, we linked each MRI data with its nearest clinical results. #### 2.2.5 Demographic Information A total of 590 subjects are involved in the current study, including 175 subjects with AD dementia and 415 normal controls. As shown in Table 2, there is no significant difference on age (\(t=1.5125\), \(p>0.05\)), gender (\(\chi^{2}=2.1782\), \(p>0.05\)), and handness (\(\chi^{2}=0.3926\), \(p>0.05\)) between the two groups. ### MR Images Processing MRI data were processed using the SPM12 and functional connectivity toolbox (CONN 18.b) [36]. The processing pipeline (Figure 2) used in this study is mainly based on methods from SPM12. However, some other methods are applied in place of the ones from SPM12. For instance, a components-based method with anatomical noise ROI, aCompCor [37], is adopted to reduce the effects of noise from white matter (WM) and cerebrospinal fluid (CSF), instead of the global signal regression used in SPM12. #### 2.3.1 Normalization and Segmentation of T1-weighted MRI Normalization of raw MR images into a common space is necessary for inter-subject comparison, and the MNI-152 space is the most widely used standard space. 
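As a side note on the MR image labeling procedure described above, pairing each MR session with its nearest clinical assessment reduces to parsing the day offsets encoded in the OASIS-3 identifiers and taking the closest entry. A minimal sketch is given below (assuming Python; the function names and the toy dictionary of clinical entries are illustrative and are not part of the OASIS-3 tooling).

```python
# Minimal sketch of the session-to-diagnosis matching described in the MR Image Labeling step:
# parse the "dNNNN" day offsets and pick the clinical entry closest in time to the MR session.
import re

def day_offset(identifier):
    """Extract the day offset from an OASIS-3 style identifier, e.g. 'OAS30001_MR_d0129' -> 129."""
    match = re.search(r"_d(\d+)$", identifier)
    if match is None:
        raise ValueError(f"No day offset found in {identifier!r}")
    return int(match.group(1))

def nearest_diagnosis(mr_session, clinical_entries):
    """Return the diagnosis whose assessment date is closest to the MR session date."""
    mr_day = day_offset(mr_session)
    closest_id = min(clinical_entries, key=lambda cid: abs(day_offset(cid) - mr_day))
    return clinical_entries[closest_id]

# toy example
clinical = {"OAS30001_ClinicalData_d0339": "Cognitively normal",
            "OAS30001_ClinicalData_d0722": "AD Dementia"}
print(nearest_diagnosis("OAS30001_MR_d0129", clinical))  # -> "Cognitively normal"
```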
The MNI-152 space is defined by the prior tissue probability map (TPM) generated from structural MR images of 152 subjects. Specifically, first, all structural images are registered together; \begin{table} \begin{tabular}{c|c c c} \hline \hline **Diagnosis** & **Label** & **Diagnosis** & **Label** \\ \hline \hline “AD Dementia” & AD & “AD dem w/oth unusual features” & AD \\ “AD dem Language dysf after” & AD & “AD dem w/oth unusual features/dent on” & AD \\ “AD dem Language dysf prior” & AD & “AD dem/FLD prior to AD dem” & AD \\ “AD dem Language dysf with” & AD & “Cognitively normal” & NC \\ “AD dem cannot be primary” & AD & “DAT” & - \\ “AD dem disturbed social- after” & AD & “DAT w/depresss not contribu” & - \\ “AD dem distrubed social- prior” & AD & “DLBD, primary” & - \\ “AD dem distrubed social- with” & AD & “DLBD- primary” & - \\ “AD dem visuospatial, after” & AD & “DLBD- secondary” & - \\ “AD dem visuospatial- prior” & AD & “Dementia/PD- primary” & - \\ “AD dem visuospatial- with” & AD & “Frontotemporal demt. prim” & - \\ “AD dem w/CVD contribu” & AD & “Incipient Non-AD dem” & - \\ “AD dem w/CVD not contribu” & AD & “Incipient dent PTP” & - \\ “AD dem w/Frontal lobe/dent at onset” & AD & “No dementia” & - \\ “AD dem w/PDI after AD dem contribu” & AD & “Non AD dem- Other primary” & - \\ “AD dem w/PDI after AD dem not contribu” & AD & “ProAph w/o dement” & - \\ “AD dem w/depresss contribution” & AD & “Unc: impair reversible” & - \\ “AD dem w/depresss not contribu” & AD & “Unc: ques. Impairment” & - \\ “AD dem w/depresss not contribu” & AD & “Vascular Demt primary” & - \\ “AD dem w/depresss- contributifu” & AD & “Vascular Demt- primary” & - \\ “AD dem w/depresss- not contributifu” & AD & “Vascular Demt- secondary” & - \\ “AD dem w/oth (list B) contributifu” & AD & “uncertain possible NON AD dem” & - \\ “AD dem w/oth (list B) not contributifu & AD & “uncertain dementia” & - \\ “AD dem w/oth unusual feat/subs demt” & AD & “uncertain- possible NON AD dem” & - \\ \hline \hline \end{tabular} \end{table} Table 1: Clinical Diagnoses and Labels \begin{table} \begin{tabular}{c c c c} \hline \hline & AD Group & NC Group & Statistics & P-value \\ \hline \hline Gender & \(95/80\) & \(196/219\) & \(2.1782^{a}\) & \(0.1400\) \\ (Male / Female) & & & \\ \hline Age & \(75.11\pm 7.67\) & \(74.12\pm 6.12\) & \(1.5125^{b}\) & \(0.1316\) \\ (Mean \(\pm\) SD) & & & \\ \hline Handness & \(18/154/3\) & \(34/370/11\) & \(0.3926^{a}\) & \(0.5309\) \\ (Left / Right / NA) & & & \\ \hline \hline a: Welch’s two-sample t-test & & & \\ b: Pearson’s chi-squared test with Yates’ continuity correction & & & \\ NA: Not applicable (indicating missing data) & & & \\ \end{tabular} \end{table} Table 2: Demographic Information Figure 2: **Processing Pipeline of Graph-theory-based Analysis.** T1-weighted MR images are normalized into the MNI-152 space, and then segmented into grey matter (GM), white matter (WM), and cerebrospinal fluid (CSF). For BOLD fMRI, realignment (head motion estimation and correction), slice timing correction, outlier scans detection, and normalization are applied. Various regressors are used to remove confounding effects: (1) Principal components (PCs) from WM and CSF, (2) head motion parameters, (3) scrubbing variables, and (4) linear regressor. Finally, band-pass filtering (\(0.01-0.1\)) is applied. 
then, each voxel is assigned to one tissue type, resulting in individual tissue masks for each brain tissue; finally, the binary masks are averaged across all subjects to generate the tissue probability map. Therefore, various prior tissue probability maps (for each brain tissue), as a whole, represent the average shape and position of the brain in an MR image, which is the definition of a "standard space". The tissue probability map used is provided by SPM12 (Figure 3), referred to as the TPM atlas. For T1-weighted MR images, the first step applied is spatial normalization, which wraps the MRI volumes in individual spaces into the standard MNI-152 space. The spatial normalization consists of two steps: (1) estimating a non-linear deformation field that best overlays the TPM on the individual T1-weighted image; (2) wrapping the raw image with the inverse deformation estimated in step (1). For segmentation, individual brain tissue probability maps for grey matter (GM), white matter (WM), and cerebrospinal fluid (CSF) are estimated. Then, a threshold is applied to each tissue probability map to generate a binary mask, which is further refined by morphology erosion. The eroded WM and CSF masks can serve as noise ROI masks, while the eroded GM mask is useful in restricting the subsequent analyses within the grey matter area. #### 2.3.2 Head Motion Estimation and Correlation Head motion can introduce systematic co-variation across voxels, which increases the estimations of functional connections. More importantly, the distance-dependent signal modulation effect is reported [38][39][40] in functional connectivity studies. It indicates that the variance added by motion artefact is similar in nearby voxels, resulting in a stronger short-distance correlation. Additionally, due to the application of head-motion-reduction processing methods, the observed long-distance correlation would often be decreased [41]. Even a small head movement would cause signal disruption in BOLD fMRI [41] and add spurious variance that can increase or decrease the observed functional connections. Thus, the estimation of head motion and the removal of head-motion-induced effects are crucial. Among various artefacts, head motion is unique since it can be estimated from MR images via the realignment process (unlike, for example, the cardiac and respiratory artefacts that requires external recordings). Over the years, effort has been made to measure head motion, and considerable progress has been achieved. The simplest method is rigid body estimation, which measures the translation and rotation in the x, y, and z axes, producing 6 motion parameters. In 1996, an expansion of this motion estimation method was proposed by Karl Friston [42]. The expansion takes the form \([RR^{2}R_{t-1}R_{t-1}^{2}\ldots R_{t-k}R_{t-k}^{2}]\), where R denotes the 6 rigid body estimates, \(R^{2}\) denotes their squares, and \(R_{t-k}\) represents 6 parameters of the \(k^{th}\) exceeding time points. For instance, [\(RR^{2}R_{t-1}R_{t-1}^{2}\)] stands for a 24-parameter estimate, and [\(RR^{2}R_{t-1}R_{t-1}^{2}R_{t-3}R_{t-3}^{2}\)] refers to a 36-parameter estimate. In this study, the 6-parameter rigid body realignment method is adopted, which is the strategy followed in other studies [38][39][40]. 
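As a rough illustration of the motion regressors discussed in this subsection, the sketch below (assuming Python with numpy; the function names and random toy data are ours, not the authors' code) assembles the 12-column \([RR^{\prime}]\) set used here and, for comparison, a 24-column Friston-style expansion \([RR^{2}R_{t-1}R_{t-1}^{2}]\) from a matrix of 6 rigid-body parameters.

```python
# Sketch: building motion-regressor matrices from the 6 rigid-body parameters R
# (one row per volume). Toy data only; not the SPM/CONN implementation.
import numpy as np

def motion_regressors_12(R):
    """[R R']: the 6 parameters and their first-order (backward) differences."""
    dR = np.vstack([np.zeros((1, R.shape[1])), np.diff(R, axis=0)])
    return np.hstack([R, dR])

def motion_regressors_24(R):
    """Friston-style expansion [R R^2 R_{t-1} R_{t-1}^2] (one-volume shift, zero-padded)."""
    R_prev = np.vstack([np.zeros((1, R.shape[1])), R[:-1]])
    return np.hstack([R, R**2, R_prev, R_prev**2])

R = np.random.default_rng(0).normal(scale=0.1, size=(164, 6))  # 164 volumes x 6 parameters
print(motion_regressors_12(R).shape)  # (164, 12)
print(motion_regressors_24(R).shape)  # (164, 24)
```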
These 6 parameters and their first-order derivatives, 12 parameters in total taking the form \([RR^{\prime}]\), are included in a linear regression model to regress out the head-motion-related variance in the subsequent artefacts removal step. Although subsequent studies [43] have pointed out the inadequacy of these parameters, research by Jonathan Power and colleagues [44] shows that more motion parameters cannot capture all head-movement-induced variance, and similar results were observed when 12, 24, and 36 motion parameters were included as regressors.

Figure 3: **Tissue Probability Map.** Defined in the MNI-152 space, the tissue probability map gives the prior probability of a voxel belonging to a specific brain tissue, including (1) grey matter, (2) white matter, (3) cerebrospinal fluid, (4) skull, (5) scalp, and (6) non-brain area. The values of these maps range over \([0,1]\). For one voxel, the probabilities sum up to 1.

In the pre-processing pipeline of BOLD MR images, realignment is always the first step. Realignment is in fact the head motion estimation and correction: it registers all other MR volumes in the time series to a reference volume and exports the head motion parameters. The reference volume can be the volume at a specific time point, or the mean image averaged across time. In this study, the first volume is chosen as the reference, and B-spline interpolation is used in the registration process. Head-motion-related variance can be partially reduced by realignment.

#### 2.3.3 Slice Timing Correction

During the acquisition of a BOLD MR image with an EPI sequence, a 3-D volume is actually a stack of 2-D slices collected one at a time. Therefore, for an fMRI volume at a given time point, the voxel activations of the individual slices are not sampled at the same moment, whereas the ideal situation would be to observe the activation of the whole brain simultaneously. To resolve this conflict, slice timing correction is used to interpolate all slices to a reference slice; it has been shown to be an effective solution that can reliably increase sensitivity and statistical power. The implementation details are as follows: (1) the acquisition time for each slice is read from the BIDS sidecar that comes with each NIfTI file; (2) all slices in a volume are interpolated to the slice acquired in the middle of the acquisition window.

#### 2.3.4 Outlier Scans Detection

The Artifact Detection Tools (ART) are utilized for the automatic detection of volumes with severe head motion and global mean intensity change. The volumes that exceed the specified thresholds are considered outlier scans and labelled by scrubbing variables. The ART-based outlier detection function is conveniently integrated into the CONN toolbox. The "intermediate settings," which treat the \(97^{th}\) percentiles in the normative sample as the threshold, are selected. At the end of this step, a set of scrubbing variables is stored for further use as regressors in the general linear model.

#### 2.3.5 Normalization of BOLD MRI

In the CONN toolbox, spatial normalization of BOLD MR images can be implemented using two methods: direct and indirect normalization.

Direct normalization. It involves normalizing the T1-weighted MR images and BOLD MR images separately. It is implemented in the same way as the spatial normalization of T1w MR images: (1) first, a nonlinear deformation field that warps the TPM atlas to the individual BOLD MR image is estimated; (2) then, the inverse transformation is applied to normalize the BOLD MR image into MNI-152 space.
Indirect normalizationIt involves two steps: (1) first, the BOLD MR image is co-registered to the T1-weighted MR image, generating a transformation matrix \(T_{1}\); (2) then, the obtained transformation matrix \(T_{1}\) is multiplied by the transformation matrix \(T_{2}\), which wraps the T1-weighted image to MNI-152 space. Finally, the BOLD image can be transformed into the standard space with the matrix \(T_{2}T_{1}\). As a step in the default pre-processing pipeline of the CONN toolbox, the direct normalization scheme is applied. #### 2.3.6 Spatial Smoothing of BOLD MRI Spatial smoothing with a Gaussian kernel is typically the final step in a spatial processing pipeline for a BOLD MR image. However, it is not utilized in this study because the current study only involves ROI-based graph analyses. That is, the final processed and de-noised signal is the mean time series of an ROI. Spatial smoothing prior to averaging would not significantly alter the resulting mean signal. We only consider spatial smoothing to be a necessary image processing step when it is followed by voxel-based analyses. This approach is also used by the CONN toolbox. #### 2.3.7 Artefacts Removal The removal of artefacts is a critical step in BOLD signal processing, aimed at mitigating or eliminating the confounding effects of non-neuronal oscillations caused by head movement, cardiac pulsation, respiratory motion, and other systematic noises. Without this step, it is challenging for researchers to determine whether the findings are genuine or simply driven by artefacts. A general linear model (GLM) is used for artefacts removal, with mean BOLD signals extracted from ROIs defined by a prior brain atlas, and a variety of variables defined as regressors of the GLM (as shown in Figure 2). In the previous realignment step, the nuisance head motion effect can be partially mitigated by interpolation. Further, this confounding effect is reduced by regressing out 12 head motion parameters (including 3 translation parameters, 3 rotation parameters, and their first-order derivatives) and scrubbing variables. One effect of head motion is increased short-distance correlations and decreased long-distance correlations. Interestingly, it has been reported that scrubbing high-motion frames from fMRI data can decrease short-term correlations and increase long-term correlations [38], indicating that the variance originating from head motion can be explained by scrubbing variables. To remove other nuisance effects, the aCompCor method [37] is utilized, which is a component-based method with anatomical noise ROIs. Specifically, WM and CSF masks are used to define the WM and CSF areas as noise ROIs. Then, five principal components (PCs) for each noise ROI are calculated via principal component analysis. Lastly, the five PCs from WM and five PCs from CSF are entered into the linear model as regressors. Additionally, the linear trend is removed by adding a linear regressor into the GLM. Finally, the residual time series are band-pass filtered at [\(0.01-0.1\)] Hz to retain neuroactivity-related intrinsic signal fluctuations. ### Functional Brain Network Construction #### 2.4.1 Nodes Definition In functional brain network analyses, nodes are typically defined as brain regions that are structurally or functionally distinct. However, delineating the boundaries of brain regions is a complex and continually evolving research field. 
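Before turning to the network construction, the denoising step summarized above (confound regression followed by band-pass filtering) can be sketched as follows. This is a simplified stand-in for the CONN implementation, assuming Python with numpy and scipy; the TR of 2.2 s, the 0.01-0.1 Hz band, and the 164 x 132 dimensions come from the text, while the toy data and function names are illustrative.

```python
# Simplified denoising sketch: residualize ROI time series against a confound matrix
# (motion parameters, scrubbing regressors, aCompCor components, linear trend),
# then band-pass filter at 0.01-0.1 Hz. Not the CONN toolbox code.
import numpy as np
from scipy.signal import butter, filtfilt

def regress_out(Y, X):
    """Residuals of Y (time x ROIs) after regressing out confounds X (time x k)."""
    X = np.column_stack([np.ones(len(X)), X])          # include an intercept
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return Y - X @ beta

def bandpass(Y, tr=2.2, low=0.01, high=0.1):
    b, a = butter(2, [low, high], btype="bandpass", fs=1.0 / tr)
    return filtfilt(b, a, Y, axis=0)

rng = np.random.default_rng(1)
roi_ts = rng.normal(size=(164, 132))        # 164 time points x 132 ROIs (toy data)
confounds = rng.normal(size=(164, 23))      # e.g. 12 motion + 10 aCompCor + linear trend
clean = bandpass(regress_out(roi_ts, confounds), tr=2.2)
print(clean.shape)                           # (164, 132)
```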
Over the past few decades, numerous atlases have been developed, including the Automated Anatomical Labeling (AAL) atlas [45, 46], the 7- and 17-Network Parcellation [47], the 400-, 600-, 800-, and 1000-area parcellation [48], the Atlas of Intrinsic Connectivity of Homotopic Areas (AICHA) [49], the Human Connectome Project Multi-Modal Parcellation (HCP-MMP) [50], the Cortical Area Parcellation [51] released by a group in Washington University, and the Human Brainnetome Atlas [52]. In the current study, we use a customized atlas named the "Harvard-Oxford-AAL" atlas, which includes 91 cortical ROIs and 15 subcortical ROIs from the Harvard-Oxford Atlas distributed with the FMRIB Software Library (FSL), as well as 26 cerebellum ROIs from the AAL atlas. An illustration of all 132 ROIs can be found in Figure 4, and Table 3 provides additional details. Figure 5 displays the contours of cortical regions. \begin{table} \begin{tabular}{l|l l} \hline \hline **ID** & **Abbrev.** & **Brain Region** \\ \hline \hline 1 & FP r & Frontal Pole Right \\ 2 & FP l & Frontal Pole Left \\ 3 & IC r & Insular Cortex Right \\ 4 & IC l & Insular Cortex Left \\ 5 & SFG r & Superior Frontal Gyrus Right \\ 6 & SFG l & Superior Frontal Gyrus Left \\ 7 & MidFG r & Middle Frontal Gyrus Right \\ 8 & MidFG l & Middle Frontal Gyrus Left \\ 9 & IFG tri r & Inferion Frontal Gyrus, pars triangularis Right \\ 10 & IFG tri l & Inferion Frontal Gyrus, pars triangularis Left \\ 11 & IFG oper r & Inferion Frontal Gyrus, pars opercularis Right \\ 12 & IFG oper l & Inferion Frontal Gyrus, pars opercularis Left \\ 13 & PreCG r & Precentral Gyrus Right \\ 14 & PreCG l & Precentral Gyrus Left \\ 15 & TP r & Temporal Pole Right \\ 16 & TP l & Temporal Pole Left \\ 17 & aSTG r & Superior Temporal Gyrus, anterior division Right \\ 18 & aSTG l & Superior Temporal Gyrus, anterior division Left \\ 19 & pSTG r & Superior Temporal Gyrus, posterior division Right \\ 20 & pSTG l & Superior Temporal Gyrus, posterior division Left \\ 21 & aMTG r & Middle Temporal Gyrus, anterior division Right \\ 22 & aMTG l & Middle Temporal Gyrus, anterior division Left \\ 23 & pMTG r & Middle Temporal Gyrus, posterior division Right \\ 24 & pMTG l & Middle Temporal Gyrus, posterior division Left \\ 25 & toMTG r & Middle Temporal Gyrus, temporoorcipital part Right \\ 26 & toMTG l & Middle Temporal Gyrus, temporoorcipital part Left \\ \hline \hline \end{tabular} \end{table} Table 3: ID, Names, and Abbreviations of Brain Regions \begin{table} \begin{tabular}{l|l l} \hline **ID** & **Abbrev.** & **Brain Region** \\ \hline 27 & aITG r & Inferior Temporal Gyrus, anterior division Right \\ 28 & aITG l & Inferior Temporal Gyrus, anterior division Left \\ 29 & pITG r & Inferior Temporal Gyrus, posterior division Right \\ 30 & pITG l & Inferior Temporal Gyrus, posterior division Left \\ 31 & toITG r & Inferior Temporal Gyrus, temporoco occipital part Right \\ 32 & toITG l & Inferior Temporal Gyrus, temporoco occipital part Left \\ 33 & PostCG r & Postcentral Gyrus Right \\ 34 & PostCG l & Postcentral Gyrus Left \\ 35 & SPL r & Superior Partiel Lobule Right \\ 36 & SPL l & Superior Partiel Lobule Left \\ 37 & aSMG r & Supramarginal Gyrus, anterior division Right \\ 38 & aSMG l & Supramarginal Gyrus, anterior division Left \\ 39 & pSMG r & Supramarginal Gyrus, posterior division Right \\ 40 & pSMG l & Supramarginal Gyrus, posterior division Left \\ 41 & AG r & Angular Gyrus Right \\ 42 & AG l & Angular Gyrus Left \\ 43 & sLOC r & Lateral Occipital Cortex, superior division 
Right \\ 44 & sLOC l & Lateral Occipital Cortex, superior division Left \\ 45 & iLOC r & Lateral Occipital Cortex, inferior division Right \\ 46 & iLOC l & Lateral Occipital Cortex, inferior division Left \\ 47 & ICC r & Intraclcarine Cortex Right \\ 48 & ICC l & Intraclcarine Cortex Left \\ 49 & MedFC & Frontal Medial Cortex \\ 50 & SMA r & Supplementary Motor Cortex Right \\ 51 & SMA L & Supplementary Motor Cortex Left \\ 52 & SubCalC & Subcallosal Cortex \\ 53 & PaGiG r & Paracingulate Gyrus Right \\ 54 & PaGiG l & Paracingulate Gyrus Left \\ 55 & AC & Cingulate Gyrus, anterior division \\ 56 & PC & Cingulate Gyrus, posterior division \\ 57 & Precuneous & Precuneous Cortex \\ 58 & Cuneal r & Cuneal Cortex Right \\ 59 & Cuneal l & Cuneal Cortex Left \\ 60 & FOrb r & Frontal Orbital Cortex Right \\ 61 & FOrb l & Frontal Orbital Cortex Left \\ 62 & aPaHC r & Parabinopocampal Gyrus, anterior division Right \\ 63 & aPaHC r & Parabinipocampal Gyrus, anterior division Left \\ 64 & pPaHC r & Parabinipocampal Gyrus, posterior division Right \\ 65 & pPaHC l & Parabinipocampal Gyrus, posterior division Left \\ 66 & LG r &ingual Gyrus Right \\ 67 & LG l &ingual Gyrus Left \\ 68 & aITeN c & Temporal Fusiform Cortex, anterior division Right \\ 69 & aITeN c & Temporal Fusiform Cortex, anterior division Left \\ 70 & pITeN c & Temporal Fusiform Cortex, posterior division Right \\ 71 & pTFNc l & Temporal Fusiform Cortex, posterior division Left \\ 72 & TOFuFc r & Temporal Occipital Fusiform Cortex Right \\ 73 & TOFuFc l & Temporal Occipital Fusiform Cortex Left \\ 74 & OFuS Gr & Occipital Fusiform Gyrus Right \\ 75 & OFuS G1 & Occipital Fusiform Gyrus Left \\ 76 & FOr r & Frontal Operculum Cortex Right \\ 77 & FO l & Frontal Operculum Cortex Left \\ 78 & CO r & Central Opercular Cortex Right \\ 79 & CO l & Central Opercular Cortex Left \\ 80 & PO r & Partiel Operculum Cortex Right \\ 81 & PO l & Partiel Operculum Cortex Left \\ 82 & PP r & Plannum Polare Right \\ 83 & PP l & Plannum Polare Left \\ 84 & HG r & Heschl’s Gyrus Right \\ 85 & HG l & Heschl’s Gyrus Left \\ 86 & PT r & Plannum Tempraic Right \\ 87 & PT l & Plannum Tempraic Left \\ 88 & SCC r & Supracalcarine Cortex Right \\ \hline \end{tabular} \end{table} Table 3: continued from previous page #### 2.4.2 Edges Definition In weightless bidirectional graph analysis, defining the edges involves two steps: (1) determining the connectivity measure, and (2) applying a threshold to create weightless connections. In functional brain network analysis using BOLD fMRI data, there are two major types of connectivity measures: correlation measures and regression measures. Correlation measures utilize a correlation coefficient to quantify the statistical relationship between two BOLD signals, such as Pearson's linear correlation coefficient, intra-class correlation, or rank-based correlation coefficients like Spearman's rank correlation coefficient and Kendall tau rank correlation coefficient. Various correlation measures have their own characteristics and usability, but they all assume values ranging from \(-1\) to \(+1\), where \(\pm 1\) indicates the strongest relationship, and 0 represents the weakest relationship. Regression measures, on the other hand, use a regression coefficient to measure the relationship between variables. Regression measures offer more flexibility, such as when analyzing the relationship between two variables using bivariate regression measures or controlling for all other factors using multivariate regression measures. 
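To make the distinction between the two families of measures concrete, the sketch below (assuming Python with numpy; this is a generic illustration, not the estimator used in this study) computes a bivariate correlation measure (Pearson's \(r\)) and a multivariate, regression-type measure (partial correlation from the inverse covariance, controlling for all remaining ROIs) on toy time series.

```python
# Correlation vs. regression-type connectivity measures on toy ROI time series.
import numpy as np

rng = np.random.default_rng(2)
ts = rng.normal(size=(164, 5))          # toy time series: 164 time points x 5 ROIs
ts[:, 1] += 0.7 * ts[:, 0]              # induce a dependency between ROI 0 and ROI 1

corr = np.corrcoef(ts, rowvar=False)    # bivariate (full) correlation matrix

precision = np.linalg.inv(np.cov(ts, rowvar=False))
d = np.sqrt(np.diag(precision))
partial = -precision / np.outer(d, d)   # partial correlation, controlling for the other ROIs
np.fill_diagonal(partial, 1.0)

print(np.round(corr[0, 1], 2), np.round(partial[0, 1], 2))
```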
\begin{table} \begin{tabular}{l|l l} \hline **ID** & **Abbrev.** & **Brain Region** \\ \hline 89 & SCC 1 & Supracalcarine Cortex Left \\ 90 & OP r & Occipital Pole Right \\ 91 & OP 1 & Occipital Pole Left \\ 92 & Thalamus r & Thalamus Right \\ 93 & Thalamus l & Thalamus Left \\ 94 & Caudate r & Caudate Right \\ 95 & Caudate l & Caudate Left \\ 96 & Putamen r & Putamen Right \\ 97 & Putamen l & Putamen Left \\ 98 & Pallidum r & Pallidum Right \\ 99 & Pallidum l & Pallidum Left \\ 100 & Hippocamp r & Hippocamp Right \\ 101 & Hippocamp l & Hippocamp Left \\ 102 & Anygdala r & Amygdala Right \\ 103 & Amygdala l & Amygdala Left \\ 104 & Accumbens r & Accumbens Right \\ 105 & Accumbens l & Accumbens Left \\ 106 & Brain-Stem & Brain-Stem \\ 107 & Cereb1 l & Cerebellum Cruss1 Left \\ 108 & Cereb1 r & Cerebellum Cruss1 Right \\ 109 & Cereb2 l & Cerebellum Cruss2 Left \\ 110 & Cereb2 r & Cerebellum Cruss2 Right \\ 111 & Cereb3 l & Cerebellum 3 Left \\ 112 & Cereb3 r & Cerebellum 3 Right \\ 113 & Cereb45 l & Cerebellum 4 5 Left \\ 114 & Cereb45 r & Cerebellum 4 5 Right \\ 115 & Cereb1 l & Cerebellum 6 Left \\ 116 & Cereb6 r & Cerebellum 6 Right \\ 117 & Cereb7 l & Cerebellum 7b Left \\ 118 & Cereb7 r & Cerebellum 7b Right \\ 119 & Cereb8 l & Cerebellum 8 Left \\ 120 & Cereb8 r & Cerebellum 8 Right \\ 121 & Cereb9 l & Cerebellum 9 Left \\ 122 & Cereb9 r & Cerebellum 9 Right \\ 123 & Cereb10 l & Cerebellum 10 Left \\ 124 & Cereb10 r & Cerebellum 10 Right \\ 125 & Ver12 & Vermis 1 \& Vermis 2 \\ 126 & Ver3 & Vermis 3 \\ 127 & Ver45 & Vermis 4 \& Vermis 5 \\ 128 & Ver6 & Vermis 6 \\ 129 & Ver7 & Vermis 7 \\ 130 & Ver8 & Vermis 8 \\ 131 & Ver9 & Vermis 9 \\ 132 & Ver10 & Vermis 10 \\ \hline \hline \end{tabular} \end{table} Table 3: continued from previous page Figure 4: **Brain Atlas.** It is a customized brain regions parcellation scheme, combining (1) 91 cortical regions (from the Harvard-Oxford Atlas), (2) 15 subcortical structures (from the Harvard-Oxford Atlas), and (3) 26 cerebellum regions (from the AAL Atlas). In the 3-D rendering of subcortical and cerebellum regions, spatial smoothing is applied for better visualization. (A: anterior; P: posterior; L: left; R: right; S: superior) In this study, the functional connectivity between two ROIs is defined using Pearson's linear correlation coefficient, which is further subjected to Fisher z-transformation, a standard step in functional connectivity analyses for hypothesis testing of the population correlation coefficient. To construct a weightless graph from these weighted connections, thresholding is necessary. Various thresholding methods exist, categorized into absolute and relative thresholding methods. Absolute thresholding applies a fixed threshold to connectivity measures, while relative thresholding retains a specific proportion of connections within a graph. In this study, relative thresholding is used, retaining only the top \(15\%\) connections. In fMRI connectivity analysis, negative connections (anti-correlations) can be observed using linear correlation coefficients. Whether these negative connections are spurious or physiologically meaningful is an open question. To investigate their effects on the results, three parallel experiments were conducted: exploring the properties of the negative functional network by retaining only anti-correlations, exploring the properties of the positive functional network by eliminating all anti-correlations and exploring the properties of the mixed functional network using absolute correlation values. 
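A minimal sketch of this edge-definition procedure is given below (assuming Python with numpy): Pearson correlations between ROI time series, the Fisher z-transformation, and a proportional threshold that keeps the top 15% of connections as binary, bidirectional edges. The toy data and the helper function are illustrative; the variant shown corresponds to retaining the strongest (most positive) connections.

```python
# Sketch of edge definition: Pearson r -> Fisher z -> proportional threshold -> binary adjacency.
import numpy as np

def binary_adjacency(ts, density=0.15):
    """ts: time x ROIs. Returns a symmetric 0/1 adjacency keeping the strongest edges."""
    r = np.corrcoef(ts, rowvar=False)
    np.fill_diagonal(r, 0.0)
    z = np.arctanh(r)                                   # Fisher z-transform
    iu = np.triu_indices_from(z, k=1)
    cutoff = np.quantile(z[iu], 1.0 - density)          # keep the top `density` fraction
    adj = np.zeros_like(z, dtype=int)
    adj[z >= cutoff] = 1
    adj = np.maximum(adj, adj.T)                        # enforce symmetry (bidirectional edges)
    np.fill_diagonal(adj, 0)
    return adj

ts = np.random.default_rng(3).normal(size=(164, 132))   # toy ROI time series
A = binary_adjacency(ts, density=0.15)
print(A.shape, A.sum() / (132 * 131))                    # resulting density is close to 0.15
```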
### Graph-Theory-Based Network Properties #### 2.5.1 Degree and Cost Nodal Degree and CostTwo ways to quantify the centrality of each node in a graph are measuring the number and proportion of edges directly linked to that node - that is, calculating the degree and cost respectively. Degree and cost are similarly defined and related, both of which are meant to characterize the local connectivity of each node. Here, local connectivity depicts how densely one node is connected to all other nodes in the same graph. If one node is densely connected to others, then its degree or cost will be high. On the contrary, if one node is sparsely connected, its degree or cost will be low. * _Nodal degree_ represents the number of edges directly connected to a node and is defined as \[K_{i}=\sum_{i\neq j\in N}a_{ij},\] (1) Figure 5: **Contours of Cortical Regions. The contours of 91 cortical regions defined in the Harvard-Oxford Atlas. Number is the ID for each region (Table 3).** where \(K_{i}\) is the degree of node \(i\), \(N\) stands for the set of all nodes in a graph, and \(a\) is a binary value (0 for non-connection, 1 for connection) indicating the connectivity, in other words, the connection status. * _Nodal cost_ represents the proportion of direct edges among all possible connections and is defined as \[C_{i}=\frac{1}{n-1}\sum_{i\neq j\in N}a_{ij},\] (2) where \(C_{i}\) is the cost of node \(i\), \(N\) is the set of all nodes, \(a_{ij}\) represents connectivity, and \(n\) is the number of nodes in the graph. Actually, it is obvious that the equation \(C_{i}=\frac{K_{i}}{n-1}\) holds. The major reason why cost is defined in addition to degree is that the variation derived from the size of the graph needs to be corrected when the cross-graph comparison is performed. For example, directly comparing the degree of a node in a 90-nodes-graph and that in a 264-nodes-graph is meaningless since the former is most likely smaller than the latter. Yet, the cost of a node can be used for direct comparison in this scenario. When degree and cost refer to the individual properties of a node, we usually call it nodal degree and nodal cost, or the degree of a node and the cost of a node. Mean Degree and CostFrom the nodal property, we can calculate the global property of the whole graph - the degree or cost of a graph, which is simply the average of nodal degree or nodal cost. The degree and cost of a graph are also called mean degree and mean cost. While nodal degree and cost are metrics for measuring nodal centrality, mean degree and cost are measures of network integration, with larger values denoting a higher level of integration, and vice versa. * _Mean degree_ is defined as \[K=\frac{1}{n}\sum_{i\in N}K_{i}=\frac{1}{n}\sum_{i\in N}\sum_{i\neq j\in N}a_{ ij}.\] (3) * _Mean cost_ is defined as \[C=\frac{1}{n}\sum_{i\in N}C_{i}=\frac{1}{n}\sum_{i\in N}\frac{\sum_{i\neq j\in N }a_{ij}}{n-1}.\] (4) #### 2.5.2 Shortest Path Length Average Shortest Path LengthIn a binary bidirectional graph, all edges (_i.e._, connections or links) are weightless and bidirectional. Two nodes within a network can be directly or indirectly connected, with many possible paths between them. The shortest path length between two nodes is defined as the minimum distance of all possible paths connecting them. If no path exists between the two nodes, the shortest path length is defined as \(+\infty\). 
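Before continuing with the path-based measures, the degree and cost quantities defined in equations (1)-(4) can be computed directly from a binary adjacency matrix, as in the short sketch below (assuming Python with numpy; the 4-node toy graph is illustrative).

```python
# Nodal degree / cost and their graph-level means from a binary adjacency matrix.
import numpy as np

def nodal_degree(A):
    return A.sum(axis=1)                      # K_i: number of edges attached to node i

def nodal_cost(A):
    n = A.shape[0]
    return A.sum(axis=1) / (n - 1)            # C_i = K_i / (n - 1)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
print(nodal_degree(A))          # [2 2 3 1]
print(nodal_degree(A).mean())   # mean degree K = 2.0
print(nodal_cost(A).mean())     # mean cost C = 2/3, approx. 0.667
```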
The average shortest path length of all paths emitted from a node quantitatively describes the centrality of that node and can be used as a measure of integration. It is calculated as \[L_{i}=\frac{1}{n-1}\sum_{i\neq j\in N}d_{ij}, \tag{5}\] where \(L_{i}\) is the average shortest path length of node \(i\), \(d_{ij}\) is the shortest path length between node \(i\) and \(j\), \(N\) is the set of all nodes, and \(n\) is the number of nodes in the graph. Like nodal degree and cost, the average shortest path length is a measure of nodal centrality, with a smaller value indicating a higher level of centrality. However, the definition of infinite distance means that if a node is disconnected from node \(i\), the average shortest path length of node \(i\) becomes infinite. This is referred to as "the infinity property of the average shortest path length" in this article. It should be noted that this is a drawback of the definition of the average shortest path length because any disconnected node will render this metric infinite.

Characteristic Path Length. The average shortest path length is a measure of nodal property in a graph, and its corresponding global property is called the characteristic path length of a graph [53]. It is defined as \[L=\frac{1}{n}\sum_{i\in N}L_{i} \tag{6}\] \[=\frac{1}{n}\sum_{i\in N}\frac{\sum_{i\neq j\in N}d_{ij}}{n-1}, \tag{7}\] which is simply the mean of the average shortest path length across all nodes in a graph. Like mean degree and cost, the characteristic path length is a measure of network integration. The smaller its value, the more integrated the network is. An integrated network implies that, on average, each node has short paths to all other nodes, meaning that the nodes are closely linked to one another. However, due to the infinity property of the average shortest path length, any isolated node in the graph will also make the characteristic path length infinite. Therefore, its application is restricted to connected graphs. To overcome this limitation, researchers have defined metrics not based on the shortest path length, such as global efficiency.

#### 2.5.3 Clustering Coefficient

Nodal Clustering Coefficient. In a graph, the neighbours of node \(i\) are defined as all nodes that directly connect to node \(i\). In the analysis of small-world networks, the clustering coefficient of node \(i\) is defined by Watts and colleagues [53] as the fraction of edges among all the possible edges in the subgraph of its neighbours. It is calculated as \[CC_{i}=\frac{\sum_{p,q\in N_{i}}a_{pq}}{K_{i}\left(K_{i}-1\right)}, \tag{8}\] where \(CC_{i}\) denotes the clustering coefficient of node \(i\), \(N_{i}\) denotes the subgraph of its neighbours, \(a_{pq}\) indicates connectedness, and \(K_{i}\) is the degree of node \(i\). As a measure of nodal property, the clustering coefficient of a node is also referred to as the nodal clustering coefficient. It characterizes the cliquishness or clustering property of a node.

Mean Clustering Coefficient. Similarly, the clustering coefficient of a graph is defined as the average of all nodal clustering coefficients: \[CC=\frac{1}{n}\sum_{i\in N}CC_{i}=\frac{1}{n}\sum_{i\in N}\frac{\sum_{p,q\in N_{i}}a_{pq}}{K_{i}\left(K_{i}-1\right)}. \tag{9}\] It is sometimes referred to as the mean clustering coefficient. Unlike mean degree, mean cost, and characteristic path length, the mean clustering coefficient is used to measure the segregation of a network. It reflects, on average, the extent of local connectivity around a node.
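The path-length and clustering measures of this and the previous subsection can be obtained with standard graph libraries; the sketch below (assuming Python with networkx and numpy, applied to the same 4-node toy graph as above) illustrates this. Note that networkx raises an error for the average shortest path length of a disconnected graph, mirroring the infinity property discussed above.

```python
# Characteristic path length and clustering coefficients on a toy binary graph.
import numpy as np
import networkx as nx

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
G = nx.from_numpy_array(A)

L = nx.average_shortest_path_length(G)   # characteristic path length
CC_nodal = nx.clustering(G)              # nodal clustering coefficients
CC = nx.average_clustering(G)            # mean clustering coefficient

print(L)          # approx. 1.333 for this toy graph
print(CC_nodal)   # per-node clustering coefficients (node 2: 1/3, node 3: 0)
print(CC)         # approx. 0.583
```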
#### 2.5.4 Efficiency The existence of the infinity property means that the rational application of the average shortest path length or characteristic path length is subject to the connectivity of the graph, which requires that all nodes in the graph are connected. To address this limitation, Latora and colleagues [54] proposed the concept of efficiency to describe the behaviour of a network. As its name suggests, efficiency is meant to quantify how efficient the communication or information exchange between nodes is. Different papers use various terminologies to refer to the concept of efficiency in graph-theory-based metrics, which can lead to confusion. In this article, we provide a brief explanation of the terms used. First, there are two classes of metrics - one for nodal property and the other for the overall property of the whole graph. Some may use "local" and "global" to indicate nodal and graph-level properties. However, "local" and "global" have specific meanings in the definitions of efficiency - local efficiency and global efficiency are two different metrics. Therefore, when discussing efficiency, the terms "local" and "global" solely indicate the types of efficiency, but not the scope. Second, to clarify the scope of a metric, we state it clearly in the name. For example, "global efficiency of a node" indicates that it is a nodal property, and the measure here is "global efficiency". Another example is that "local efficiency of a graph" refers to graph-level local efficiency. We avoid using terms like "nodal local efficiency" or "nodal global efficiency" to prevent confusion. Global Efficiency of a NodeEfficiency is closely related to the shortest path length by definition. Let us assume that the transfer of information in a network is parallel, i.e., the signal from a node is sent concurrently through all edges. As a result, the efficiency of communication between node \(i\) and \(j\) is determined by the shortest path between them. The efficiency is then defined as the inverse of the shortest path length: \(\epsilon_{ij}=\frac{1}{d_{ij}}\), where \(\epsilon_{ij}\) is the efficiency of information transfer and \(d_{ij}\) is the shortest path length. The global efficiency of node \(i\) is defined as \[E_{i,glob}=\frac{1}{n-1}\sum_{i\neq j\in N}\epsilon_{ij}. \tag{10}\] With this definition, the global efficiency of a node can be meaningfully calculated on a disconnected graph, where isolated nodes exist. For example, when node \(i\) and \(j\) are disconnected, \(d_{ij}=+\infty\) and \(\epsilon_{ij}=0\), and therefore, \(E_{i,glob}\) or \(E_{j,glob}\) is not affected by the lack of connection. Global Efficiency of a GraphWe can calculate the global efficiency of a graph, which is a measure of the whole network, by averaging the global efficiency of all nodes. The global efficiency of a node is the reciprocal of the average shortest path length between the node and all other nodes in the network. If a graph is disconnected, the global efficiency is still valid as it is not affected by disconnection. Specifically, the global efficiency of a graph is defined as follows: \[E_{glob}=\frac{1}{n}\sum_{i\in N}E_{i,glob}=\frac{1}{n}\sum_{i\in N}\frac{\sum _{i\neq j\in N}\epsilon_{ij}}{n-1}=\frac{1}{n}\sum_{i\in N}\frac{\sum_{i\neq j \in N}d_{ij}^{-1}}{n-1}. \tag{11}\] The global efficiency and characteristic path length are both based on the shortest path length between nodes and serve as evaluation indices of network integration. 
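As a companion to the sketches above, global efficiency can be computed either with the built-in networkx helper or directly from its definition as the mean inverse shortest path length over ordered node pairs; both are shown below on the same toy graph (assuming Python with networkx and numpy; this is an illustration, not the analysis code used in the study).

```python
# Global efficiency of a toy binary graph, via networkx and via the definition.
import numpy as np
import networkx as nx

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
G = nx.from_numpy_array(A)

E_glob = nx.global_efficiency(G)

# direct computation: mean of 1/d_ij over ordered pairs i != j (0 for disconnected pairs)
n = G.number_of_nodes()
inv_d = [1.0 / d for src, dists in nx.shortest_path_length(G)
         for dst, d in dists.items() if dst != src]
print(E_glob, sum(inv_d) / (n * (n - 1)))   # both approx. 0.833
```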
A highly integrated network has a high global efficiency or a short characteristic path length. The difference between global efficiency and characteristic path length can be understood through an analogy provided by the original paper [54]. Global efficiency is like a parallel system where information is sent simultaneously through all paths, while the characteristic path length is like a sequential system where information is transferred through one path at a time. To further illustrate the analogy, we can define efficiency as the number of packets transferred in a time unit. In a parallel system (left column in Figure 6), packets are sent concurrently through all paths, and the shortest path length determines the efficiency. In a sequential system (right column in Figure 6), packets are transferred through one path at a time, and the efficiency is the number of packets successfully transferred through an imaginary "average path" in a time unit. Long paths decrease the efficiency by lengthening the "average path". Local Efficiency of a NodeThe global efficiency of a node measures its communication with all other nodes in the graph, while the local efficiency quantifies the global efficiency of a subgraph consisting only of its neighbours. Specifically, the local efficiency of node \(i\) is defined as the global efficiency of its subgraph of neighbours: \[E_{i,local}=\frac{1}{K_{i}}\sum_{p\in N_{i}}\frac{\sum_{p\neq q\in N_{i}} \epsilon_{pq}}{K_{i}-1}=\frac{\sum_{p\neq q\in N_{i}}d_{pq}^{-1}}{K_{i}\left(K _{i}-1\right)}, \tag{12}\] where \(K_{i}\) is the degree of node \(i\), \(N_{i}\) is the subgraph of its neighbours, \(\epsilon_{pq}\) represents the efficiency of the shortest path between node \(p\) and \(q\), and \(d_{pq}\) is the shortest path length. Local Efficiency of a GraphThe local efficiency of a graph is the average local efficiency of all subgraphs, which is the mean of the local efficiencies of all nodes: \[E_{local}=\frac{1}{n}\sum_{i\in N}E_{i,local}=\frac{1}{n}\sum_{i\in N}\frac{ \sum_{p\neq q\in N_{i}}d_{pq}^{-1}}{K_{i}\left(K_{i}-1\right)}, \tag{13}\] where \(n\) is the number of nodes in the graph, \(N\) is the set of all nodes, \(N_{i}\) is the subgraph of neighbours of node \(i\), \(K_{i}\) is the degree of node \(i\), and \(d_{pq}\) is the shortest path length between nodes \(p\) and \(q\). Similar to the clustering Figure 6: **Parallel and Sequential System. In a parallel system (left column), the delivery of information from node \(i\) to node \(j\) occurs concurrently through every path, making the shortest path length the bottleneck of system efficiency. In a sequential system (right column), the transmission of information can be considered as sending a packet through one random path at a time. Therefore, the system efficiency is determined by the imaginary “average path”. A parallel system is not affected by long paths while a sequential system is.** coefficient of a graph, the local efficiency of a graph measures network segregation. As each node is not included in its own subgraph, the local efficiency of a graph reveals the fault tolerance of the network by showing how efficiently communication within the network is maintained when a node is removed, on average. #### 2.5.5 Betweenness Nodal BetweennessBetweenness is a measure of centrality, similar to nodal degree and cost, that captures the importance of nodes in controlling information flow. 
It quantifies the proportion of all shortest paths in a graph that pass through a given node: \[B_{i}=\frac{1}{\left(n-1\right)\left(n-2\right)}\sum_{i\neq p\neq q\in N} \frac{\rho_{pq}^{(i)}}{\rho_{pq}}, \tag{14}\] where \(B_{i}\) is the betweenness of node \(i\), \(n\) is the number of nodes in the graph, \(\rho_{pq}^{(i)}\) represents the number of shortest paths between nodes \(p\) and \(q\) that pass through node \(i\), and \(\rho_{pq}\) refers to the total number of shortest paths between nodes \(p\) and \(q\). Nodal betweenness is an important metric for identifying connector hubs in a network. Nodes with high betweenness centrality can be thought of as the busiest ports in real-world networks. Removing such nodes can significantly decrease the global efficiency of the network by breaking down many short paths. Mean BetweennessMean betweenness refers to the betweenness of a graph. It is defined as the average nodal betweenness: \[B=\frac{1}{n}\sum_{i\in N}B_{i}=\frac{1}{n\left(n-1\right)\left(n-2\right)} \sum_{i\neq p\neq q\in N}\frac{\rho_{pq}^{(i)}}{\rho_{pq}}. \tag{15}\] ### Statistical Analysis Initially, six nodal metrics, comprising degree (or cost), shortest path length, clustering coefficient, global efficiency, local efficiency, and betweenness, are computed for each region of interest (ROI) delineated in a brain atlas. Subsequently, two-tailed two-sample Student's t-tests are utilized to compare each nodal metric between two groups, namely AD and NC. Finally, the False Discovery Rate (FDR) correction is executed to rectify for numerous comparisons across 132 ROIs. Outcomes with FDR-corrected p-value less than 0.05 are deemed statistically significant. ### Results #### 2.7.1 Quality Assessment for Image Processing The initial phase in deriving dependable conclusions is rational and high-quality image processing. Quality evaluation and control of image processing are therefore essential. The quality assessment encompasses the manual, semi-automatic, or automatic inspection of the quality of preprocessed images. Quality control, on the other hand, refers to the actions taken in response to quality assessment. In certain situations, researchers may need to modify parameters and rerun specific experiments to obtain better image processing outcomes. In the most extreme cases, the exclusion of subjects from subsequent analyses may be necessary. SegmentationFigure 7 presents a T1-weighted MR image segmentation illustration. Precise brain tissue probability maps (sub-figures 2-4 and 6-8 in Figure 7) can be produced through this segmentation process. The combination of segmentation and normalization also generates tissue masks in standard space (sub-figures 10-12 in Figure 7). NormalizationThe assessment of normalization quality can be accomplished through visual inspection, wherein the contour of the template is superimposed on the normalized image. Figure 8 displays the normalized T1-weighted image, CSF probability map, GM probability map, and WM probability map, which closely align with the template contour. Figure 9 presents the BOLD fMRI in both standard space (Figure 9 (1)) and native space (Figure 9 (2)). It demonstrates that direct normalization of the BOLD MR image, which has relatively low spatial resolution, is satisfactory. Figure 8: **Normalization of T1-weighted MRI.** (1) The normalized T1-weighted MR image. (2) The normalized CSF probability map. (3) The normalized GM probability map. (4) The normalized WM probability map. 
All are overlapped with the contour (in yellow) of template. Figure 7: **Segmentation of T1-weighted MRI.** (1) Raw T1-weighted MR image in native space. (2-4) CSF, GM, WM probability maps in native space. (5) Normalized T1-weighted image in MNI-152 space. (6-8) Normalized CSF, GM, WM probability maps in MNI-152 space. (9) Skull-stripped normalized T1 image. (10-12) Normalized and eroded CSF, GM, WM masks. 10). Alongside the BOLD signal visualization, three variables are plotted: (1) the global BOLD signal changes in z-values, (2) the integrated head motion estimator, calculated from 6 movement parameters, and (3) the scrubbing variable exported from the Artifact Detection Toolbox (ART). For the global signal changes, the mean BOLD signal (the global mean) across all GM voxels is first computed and then converted to z-values. The integrated head motion estimator from ART is a composite motion measure that estimates the maximum voxel displacement resulting from the combined effect of translation and rotation displacement measures. The scrubbing variable is a frame identifier pointing out the outlier scans. Note that two types of global signal variances are highlighted in Figure 10 and Figure 11. In each red box, we can see that sudden global signal changes always occur with severe head motion, particularly in box 6 and 7 (Figure 11), where MR scans are identified as outliers at the same time point. Therefore, we can conclude that the red boxes (boxes 4 and 5 in Figure 10, boxes 6-9 in Figure 11) indicate global signal changes potentially related to subject motion. The purple boxes (boxes 1 and 2 in Figure 10, box 3 in Figure 11) highlight global signal changes that may originate from sources other than head movement. The impact of artefacts removal on the distribution of functional connections (FCs) can be observed from Figure 12. First, GM voxels are segmented into 1000 clusters, defining 1000 nodes. Second, mean BOLD signals are computed from each cluster. Third, Pearson's linear correlation coefficient (\(r\)) is used to measure FC. Finally, the distribution of FCs is plotted. Similar effects can be observed in four subjects. The FCs distribution before artefacts removal exhibits varying degrees of positive skewness, indicating the presence of a systematic bias towards positive FCs. However, after removing artefacts, it becomes approximately normally distributed with a mean close to zero. Furthermore, Figure 13 provides additional information on FCs by incorporating distance-related data. The definition of FCs in Figure 13 is the same as in Figure 12. To draw the distance-FCs maps in Figure 13, an extra step is required: calculating the distance (in \(mm\)) between two nodes. As shown in the first row of Figure 13, each scattered dot represents the mean FC at a specific distance, and the shaded area indicates the standard deviation. Compared to the FCs distributions in Figure 12, the FCs are shifted towards positive values before removing artefacts. After artefacts removal, most of the mean FCs are near zero, except for short-distance FCs. The distance-FCs maps for four subjects (Figure 13) demonstrate similar distance-related FCs properties before and after artefacts removal. They also indicate that removing artefacts can reduce the bias towards positive values in mediate and long distance FCs. This effect is confirmed in Figure 14, where the distance-FCs maps for all subjects are superimposed on top of each other. 
It shows that, after removing artefacts, FCs are less divergent and more centred around zero at intermediate to long distances, while short-distance FCs are stronger than their intermediate and long-distance counterparts.

#### 2.8.1 Abnormalities in Positive Network

Table 4 illustrates significant between-group differences (FDR-p \(<0.05\)) found in six nodal metrics, namely local efficiency, clustering coefficient, degree, cost, average shortest path length, and global efficiency. The results consistently indicate a single direction of abnormality. A summary of the statistically significant findings is presented below.

* Compared with the NC group, decreased local efficiencies are discovered in the AD group in the bilateral Central Opercular Cortex (CO), the bilateral Planum Temporale (PT), the left Cuneal, the right Heschl's Gyrus (HG), the right posterior Middle Temporal Gyrus (pMTG), and the right anterior Superior Temporal Gyrus (aSTG) (Figure 15).
* Compared with the NC group, a decreased clustering coefficient is observed in the left Planum Temporale (PT) (Figure 15) in the AD group.
* Compared with the NC group, increased degrees and costs are only found in the cerebellum, including areas 3 and 6 of the right Cerebellum (Cereb3 & Cereb6) (Figure 16), in the AD group.
* Compared with the NC group, decreased average shortest path lengths are observed in area 3 of the bilateral Cerebellum (Cereb3) and area 8 of the Vermis (Ver8) (Figure 16) in the AD group.
* Compared with the NC group, increased global efficiencies are solely found in the cerebellum, including area 3 of the bilateral Cerebellum (Cereb3), areas 4 & 5, 6, and 8 of the right Cerebellum (Cereb45, Cereb6, and Cereb8), and areas 3, 4 & 5, and 8 of the Vermis (Ver3, Ver45, and Ver8) (Figure 16), in the AD group.

Figure 10: **BOLD Signal Before and After Artefacts Removal (1).** The BOLD signals (before and after artefacts removal) of 2 subjects are shown in time series boxes, along with the BOLD global signal (GS) changes in z-values, the head motion estimator, and the scrubbing variable ("outlier"). Blue boxes (boxes A & B) highlight the removal of global signal changes shown as a stripe pattern. Purple boxes (boxes 1 & 2) point out the appearances of global signal changes that may originate from sources other than head movement. Red boxes (boxes 4 & 5) indicate the occurrences of global signal changes that are potentially related to subject motion.

#### 2.8.2 Abnormalities in Mixed Network

As shown in Table 5, similar but fewer group differences (FDR-p \(<0.05\)) are observed. Below is a summary of the results.

* Compared with the NC group, decreased local efficiencies are discovered in the AD group in the bilateral posterior Middle Temporal Gyrus (pMTG), the left Planum Temporale (PT), the left anterior Superior Temporal Gyrus (aSTG), and the right Central Opercular Cortex (CO) (Figure 17).
* Compared with the NC group, decreased clustering coefficients are observed in the AD group in the left Planum Temporale (PT), the right Central Opercular Cortex (CO), and the right posterior Middle Temporal Gyrus (pMTG) (Figure 17).
* Compared with the NC group, increased global efficiency is only found in the AD group in area 3 of the left Cerebellum (Cereb3) (Figure 18).
Figure 11: **BOLD Signal Before and After Artefacts Removal (2).** The BOLD signals (before and after artefacts removal) of another 2 subjects are shown in time series boxes, along with which are the BOLD global signal (GS) changes in z-values, the head motion estimator, and the scrubbing variable (“outlier”). Purple boxes (box 3) point out the appearances of global signal changes that maybe originate from sources other than head movement. Red boxes (box 6-9) indicate the occurrences of global signal changes that are potentially related to the subject motion. ### Discussion #### 2.9.1 Removal of Global Effects Global SignalIn functional neuroimaging, the global signal is defined as the mean time series averaged across all brain voxels, including those in gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). GM is typically the region of interest (ROI) in brain fMRI studies because the dense neurons allow for indirect reflection of neural activity through fluctuations in the BOLD signal. However, the BOLD contrast is an indirect measure resulting from complex interactions between mainly three factors: cerebral metabolic rate of oxygen (CMRO2), blood volume, and blood flow. Thus, any factor that disrupts the balance between these three parameters can alter the BOLD signal. For example, head motion [44], cardiac and respiratory cycles [55], arterial CO2 concentration, cerebral autoregulation, blood pressure, and vasomotion [56] can introduce non-neuronal variances to all BOLD time series. Consequently, the global effects of artefacts can cause two BOLD signals to be more statistically correlated, and the distribution of correlations between BOLD signals is heavily skewed towards positive values. Many non-neuronal confounds can contribute to the global signal. However, it has also been demonstrated that the global signal can be tightly linked to neural activity [57]. Global Signal RegressionGlobal Signal Regression (GSR) is a method proposed to remove the confounding effects of non-neuronal sources [58, 59]. GSR employs linear regression for each voxel to remove variances explained by the global signal, and the resulting residuals can be considered as the de-noised signal. One of the most significant and widely discussed effects of GSR is the emergence of more anti-correlations. The mathematical details of the algebraic operation of GSR and the impact of mandating anti-correlations have been published [60, 61]. It is explained that mathematically, the seed-to-voxel regression coefficients (\(\beta\)) have a mean of exactly zero, and the distribution of Pearson's linear correlation coefficients is approximately zero-centred with GSR. Murphy and colleagues [60] were the first to mathematically demonstrate that GSR forces approximately half of the correlations to become anti-correlations and concluded that the anti-correlations between task-negative regions (i.e., DMN regions) and task-positive regions are most likely an outcome of GSR. Conversely, other researchers [61, 62] support the idea that the Figure 12: **Distribution of FCs Before and After Artefacts Removal.** The distribution of functional connections (FCs) of 4 subjects are shown. Before artefacts removal, the distributions are right-skewed and sometimes non-bell-shaped. After artefacts removal, the FCs distributions become normal with approximate zero-mean and bell-shape. 
Figure 14: **Distance-FCs Map Before and After Artefacts Removal (All Subjects).** The distributions and distance-related properties of FCs before and after artefacts removal are shown. It suggests that FCs are less divergent and more zero-centred in mediate to long distances after artefacts removal. Short-distance FCs are stronger compared with their long-distance counterparts. Figure 13: **Distance-FCs Map Before and After Artefacts Removal.** The figure shows the functional connectivity plotted against distance (in \(mm\)). Each scattering dot represents the mean functional connection (FC) at a specific distance, and the shadow area indicates the standard deviation. Similar to the findings in Figure 12, FCs are shifted to the positive side before artefacts removal. After artefacts removal, most of the mean FCs are near zero except the short-distance ones. anti-correlations after GSR have biological origins. To date, whether GSR should be applied remains an open question, and contradictory recommendations have been made [63]. Without a "gold standard," it is difficult to determine whether anti-correlation is merely an artefact of the pre-processing strategy or physiologically meaningful. Drawbacks of GSROn the one hand, it has been reported that GSR can not only eliminate non-neuronal artefacts but also enhance the specificity of positive correlations and their correspondence to structural connectivity [61]. On the other hand, the drawbacks of GSR have been discussed. GSR can introduce spurious negative correlations between brain regions [60, 64, 65] and alter the group comparison of inter-regional correlations [65]. Murphy and colleagues [60] used simulated data to demonstrate that GSR is not effective in removing nuisance confounds, and the locations of anti-correlated areas depend on the relative phases of the global signal and seed voxel time series. Saad and colleagues [65] used an illustrative model to show that GSR distorts the inter-regional correlations in a way that depends on the true correlation structure of the entire network and regional size distribution. Anderson and colleagues [64] also demonstrated that anti-correlations after GSR can be introduced even in completely uncorrelated networks. 
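For reference, the core GSR operation discussed in this section (regressing the global mean signal out of every voxel time series and keeping the residuals) can be sketched in a few lines of Python on synthetic data; this is a schematic illustration, not the preprocessing code used in this study.

```python
import numpy as np

def global_signal_regression(X):
    """GSR sketch. X: (n_timepoints, n_voxels) BOLD time series.
    Regress each voxel on an intercept plus the global signal and
    return the residuals as the 'de-noised' data."""
    gs = X.mean(axis=1, keepdims=True)            # global signal, shape (T, 1)
    design = np.hstack([np.ones_like(gs), gs])    # intercept + global signal
    beta, *_ = np.linalg.lstsq(design, X, rcond=None)
    return X - design @ beta

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 500)) + rng.standard_normal((200, 1))  # shared confound
X_clean = global_signal_regression(X)
r = np.corrcoef(X_clean, rowvar=False)
print(r[np.triu_indices_from(r, k=1)].mean())     # roughly zero-centred after GSR
```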
\begin{table} \begin{tabular}{l|l c c c} \hline \hline \multicolumn{1}{c}{**Metrics**} & \multicolumn{1}{c}{**Region**} & \multicolumn{1}{c}{**T Statistics**} & \multicolumn{1}{c}{**p-value**} & \multicolumn{1}{c}{**FDR-p**} \\ \hline \hline \multirow{6}{*}{Local Efficiency} & PT 1 & -4.59 & 0.000005 & 0.000701 \\ & pMTG r & -3.70 & 0.000234 & 0.014016 \\ & PT r & -3.62 & 0.000319 & 0.014016 \\ & CO r & -3.28 & 0.001113 & 0.036481 \\ & aSTG r & -3.21 & 0.001382 & 0.036481 \\ & HG r & -3.12 & 0.001882 & 0.041105 \\ & Cuneal l & -3.08 & 0.002180 & 0.041105 \\ & CO 1 & -2.99 & 0.002913 & 0.048057 \\ \hline Clustering Coefficient & PT 1 & -4.26 & 0.000024 & 0.003168 \\ \hline \multirow{2}{*}{Average Shortest Path Length} & Cereb3 r & -3.96 & 0.000083 & 0.010937 \\ & Ver8 & -3.69 & 0.000246 & 0.012355 \\ & Cereb3 l & -3.65 & 0.000281 & 0.012355 \\ \hline \multirow{6}{*}{Global Efficiency} & Cereb3 r & 4.53 & 0.00007 & 0.000948 \\ & Cereb3 l & 3.64 & 0.000293 & 0.018702 \\ & Cereb6 r & 3.54 & 0.000425 & 0.018702 \\ & Cereb45 r & 3.32 & 0.000965 & 0.026026 \\ & Ver45 & 3.31 & 0.000986 & 0.026026 \\ & Ver3 & 3.24 & 0.001274 & 0.028018 \\ & Ver8 & 3.12 & 0.001900 & 0.035825 \\ & Cereb8 r & 3.06 & 0.002285 & 0.037695 \\ \hline \multirow{2}{*}{Degree / Cost} & Cereb3 r & 4.39 & 0.000013 & 0.001744 \\ & Cereb6 r & 3.47 & 0.000550 & 0.036294 \\ \hline \hline \end{tabular} \end{table} Table 4: Abnormal Graph Metrics in AD (Results from Positive Network) \begin{table} \begin{tabular}{l|l c c c} \hline \hline \multicolumn{1}{c}{**Metrics**} & \multicolumn{1}{c}{**Region**} & \multicolumn{1}{c}{**T Statistics**} & \multicolumn{1}{c}{**p-value**} & \multicolumn{1}{c}{**FDR-p**} \\ \hline \hline \multirow{6}{*}{Local Efficiency} & pMTG r & -4.88 & 0.000001 & 0.000179 \\ & CO r & -4.32 & 0.000018 & 0.001188 \\ & PT l & -3.68 & 0.000250 & 0.010996 \\ & aSTG l & -3.49 & 0.000519 & 0.017134 \\ & pMTG l & -3.15 & 0.001714 & 0.045251 \\ \hline \multirow{6}{*}{Clustering Coefficient} & pMTG r & -3.97 & 0.000081 & 0.010721 \\ & CO r & -3.67 & 0.000265 & 0.017475 \\ & PT l & -3.45 & 0.000601 & 0.026451 \\ \hline Global Efficiency & Cereb3 l & 3.69 & 0.000242 & 0.031955 \\ \hline \hline \end{tabular} \end{table} Table 5: Abnormal Graph Metrics in AD (Results from Mixed Network) Alternatives to GSRince GSR adds uncertainty and interpretive complexity to scientific findings, various alternatives to GSR have been explored and proposed. All of these alternatives share the same motivation as GSR, which is to remove the common non-neuronal BOLD signal fluctuations and reveal the true functional connectivity. One type of technique involves the inclusion of physiological data. Physiological signals must be recorded simultaneously with fMRI data acquisition. When these physiological data are not available, there are still methods to remove global effects. For example, the aCompCor technique [37] uses anatomical masks to increase spatial specificity and principal components instead of mean signal to eliminate confounding artefacts. More complex methods, such as the Random Subspace Method for Functional Connectivity (RSMFC) and the Affine Parameterization of Physiological Large-Scale Error Correction (APPLECOOR), are summarized in the review paper by Kevin Murphy and Michael Fox [63]. Reasons for Using aCompCorThe current study is a functional network study based on graph theory that aims to investigate topological reconfiguration. 
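As a schematic illustration of the aCompCor idea mentioned above, the sketch below extracts principal components from WM/CSF voxel time series and regresses them out of grey-matter signals; the array shapes, component count, and helper names are assumptions for illustration and do not reproduce the exact implementation of [37] or of the pipeline used here.

```python
import numpy as np

def acompcor_regressors(noise_ts, n_comp=5):
    """Top principal components of WM/CSF voxel time series (T x V_noise),
    used as nuisance regressors instead of the global mean signal."""
    Z = noise_ts - noise_ts.mean(axis=0)           # de-mean each voxel
    U, _, _ = np.linalg.svd(Z, full_matrices=False)
    return U[:, :n_comp]                           # component time courses (T x n_comp)

def regress_out(Y, confounds):
    """Remove confound-explained variance from each column of Y (T x V_gm)."""
    design = np.column_stack([np.ones(len(Y)), confounds])
    beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
    return Y - design @ beta

rng = np.random.default_rng(1)
wm_csf = rng.standard_normal((200, 300))   # signals from eroded WM/CSF masks
gm = rng.standard_normal((200, 800))       # grey-matter voxel time series
gm_clean = regress_out(gm, acompcor_regressors(wm_csf, n_comp=5))
```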
Anti-correlations are treated differently in three network construction strategies: (1) only positive correlations are used for network construction in the analysis of positive networks; (2) similarly, only anti-correlations are used for negative networks; and (3) the absolute values of positive and negative correlations are used for mixed networks. By comparing the results of these three experiments, the effects of anti-correlations can be investigated. Furthermore, it has been reported that both local and global topological properties, quantified by local and global graph metrics, have higher reliability when GSR is not applied than when it is applied [66].

#### 2.9.2 Comparison of Findings in Positive and Mixed Network

For each network metric, consistent results are observed for both positive and mixed networks. Here, the consistency is reflected not only in the same direction of alteration, but also in the rationality of the network metric alterations. A summary of the consistency of findings from the positive and mixed networks is as follows.

**The findings do not contradict each other.** Results (Table 4 and Table 5) show that only decreased local efficiency, decreased clustering coefficient, decreased average shortest path length, increased global efficiency, and increased degree (or cost) are found.

Figure 15: **Abnormal Graph Metrics in Cortical Cortices and Subcortical Regions (Results from Positive Network).** l: left; r: right; aSTG: anterior Superior Temporal Gyrus; PT: Planum Temporale; pMTG: posterior Middle Temporal Gyrus; CO: Central Opercular Cortex; Cuneal: Cuneal Cortex; HG: Heschl's Gyrus.

* Decreased local efficiency and decreased clustering coefficient are consistent. By definition, the local efficiency of node \(i\) is the global efficiency of the subgraph (\(N_{i}\)) of its neighbouring nodes. The clustering coefficient of node \(i\) is defined as the proportion of edges (connections) among all the possible edges in the subgraph (\(N_{i}\)). Importantly, the global efficiency is directly and inversely related to the shortest path length: it is the average of the inverse shortest path lengths. The logical connection between local efficiency and the clustering coefficient can be seen in the following example. If the neighbours of node \(i\) are densely interconnected, which is equivalent to a high clustering coefficient for node \(i\), the shortest path length for each pair of nodes in \(N_{i}\) will be low, which leads to high global efficiency for the subgraph \(N_{i}\); and high global efficiency for subgraph \(N_{i}\) is, by definition, high local efficiency for node \(i\). Therefore, the clustering coefficient and local efficiency are positively correlated. For example, decreased local efficiency and decreased clustering coefficient are found in the left Planum Temporale (PT) (Table 4 & Table 5).
* Decreased average shortest path length and increased global efficiency are consistent. As explained above, the global efficiency of node \(i\) is defined as the average of the inverse shortest path lengths between node \(i\) and all other nodes in the same graph. Thus, the shortest path length and global efficiency are negatively correlated.

Figure 16: **Abnormal Graph Metrics in Cerebellum (Results from Positive Network).** l: left; r: right; Cereb: Cerebellum; Ver: Vermis.
Figure 17: **Abnormal Graph Metrics in Cortical Cortices and Subcortical Regions (Results from Mixed Network).** l: left; r: right; aSTG: anterior Superior Temporal Gyrus; PT: Planum Temporale; pMTG: posterior Middle Temporal Gyrus; CO: Central Opercular Cortex.

Figure 18: **Abnormal Graph Metrics in Cerebellum (Results from Mixed Network).** l: left; r: right; Cereb: Cerebellum.

* Increased degree (or cost), increased global efficiency, and decreased average shortest path length are consistent. Although degree and cost are not directly related to global efficiency in their definitions, they tend to be negatively correlated with the average shortest path length and positively correlated with global efficiency. Suppose node \(i\) has a high degree, meaning that node \(i\) is directly connected to many nodes; then a large number of the shortest paths emanating from node \(i\) will have length 1, decreasing the average shortest path length for node \(i\). For example, decreased average shortest path length, increased global efficiency, and increased degree and cost are found in area 3 of the right Cerebellum (Cereb3) (Table 4).
* Global efficiency quantifies network integration, while local efficiency quantifies network segregation. Although the altered global efficiencies and local efficiencies found in the AD group are in opposite directions, the author holds the opinion that they depict distinct aspects of network properties and are not strongly correlated.

**The altered network properties found in the mixed network are consistent with those of the positive network, but fewer in number.**

* Decreased local efficiencies are found in the left Planum Temporale (PT), right posterior Middle Temporal Gyrus (pMTG), and right Central Opercular Cortex (CO) in both the positive and mixed networks (Table 4 & Table 5).
* A decreased clustering coefficient is found in the left Planum Temporale (PT) in both the positive and mixed networks (Table 4 & Table 5).
* Increased global efficiency is found in area 3 of the left Cerebellum (Cereb3) in both the positive and mixed networks (Table 4 & Table 5).
* Altered metrics found only in the positive network include: decreased local efficiency (in right PT, right aSTG, right HG, left Cuneal, and left CO), decreased average shortest path length (in bilateral Cereb3 and Ver8), increased global efficiency (in right Cereb3, right Cereb6, right Cereb45, Ver45, Ver3, Ver8, and right Cereb8), and increased degree and cost (in right Cereb3 and right Cereb6) (Table 4).
* Altered metrics found only in the mixed network include decreased local efficiency (in left aSTG and left pMTG) and decreased clustering coefficient (in right pMTG and right CO) (Table 5).

**Compared with NC, weaker functional segregation is discovered in the AD group only in cortical regions.** Decreased local efficiencies and clustering coefficients, which are two measures of network segregation, are only observed in cortical regions, and not in any region of the subcortical structures or cerebellum.

**Compared with NC, stronger functional integration is discovered in the AD group only in regions of the subcortical structures and cerebellum.** Decreased average shortest path length, increased global efficiency, and increased degree, which are measures of network integration, are only observed in subcortical structures and the cerebellum.
2304.07143
Car-Following Models: A Multidisciplinary Review
Car-following (CF) algorithms are crucial components of traffic simulations and have been integrated into many production vehicles equipped with Advanced Driving Assistance Systems (ADAS). Insights from models of car-following behavior help us understand the causes of various macro phenomena that arise from interactions between pairs of vehicles. Car-following models encompass multiple disciplines, including traffic engineering, physics, dynamic system control, cognitive science, machine learning, and reinforcement learning. This paper presents an extensive survey that highlights the differences, complementarities, and overlaps among microscopic traffic flow and control models based on their underlying principles and design logic. It reviews representative algorithms, ranging from theory-based kinematic models, psycho-physical models, and adaptive cruise control models to data-driven algorithms such as Reinforcement Learning (RL) and Imitation Learning (IL). The manuscript discusses the strengths and limitations of these models and explores their applications in different contexts. This review synthesizes existing research across different domains to fill knowledge gaps and offer guidance for future research by identifying the latest trends in car-following models and their applications.
Tianya Terry Zhang, Ph. D., Peter J. Jin, Ph. D., Sean T. McQuade, Ph. D., Alexandre Bayen, Ph. D., Benedetto Piccoli
2023-04-14T14:06:33Z
http://arxiv.org/abs/2304.07143v4
# A Review on Longitudinal Car-Following Model ###### Abstract The car-following (CF) model is the core component for traffic simulations and has been built-in in many production vehicles with Advanced Driving Assistance Systems (ADAS). Research of CF behavior allows us to identify the sources of different macro phenomena induced by the basic process of pairwise vehicle interaction. The CF behavior and control model encompasses various fields, such as traffic engineering, physics, cognitive science, machine learning, and reinforcement learning. This paper provides a comprehensive survey highlighting differences, complementarities, and overlaps among various CF models according to their underlying logic and principles. We reviewed representative algorithms, ranging from the theory-based kinematic models, stimulus-response models, and cruise control models to data-driven Behavior Cloning (BC) and Imitation Learning (IL) and outlined their strengths and limitations. This review categorizes CF models that are conceptualized in varying principles and summarize the vast literature with a holistic framework. Car-Following Behavior, Cruise Control ## I Introduction To furnish a complete explanation of traffic flow dynamics, vehicles on the road are studied as a whole - not merely as isolated particles. Car-following (CF) is perhaps the most fundamental driving behavior that forcefully constrains the ego-vehicle from adopting a specific pattern in the presence of other vehicles. Since the 1950s or even earlier, many researchers have worked on CF modeling and applied such models to the simulation of Human Driver Vehicles (HDVs) to investigate micro- and macro-traffic phenomena [1-2]. Due to the expansion of autonomy and connectivity in the transportation system, connected and automated vehicle (CAV) technology has escalated the hype for car-following research. Autonomous Vehicles (AVs) with different parameter settings or objective functions become a new source of variety and will add more complexities to the existing roadway systems. Vehicle-to-Everything (V2X) is a critical component of collaborative intelligent transportation systems (ITS) and can significantly improve traffic flow, safety, and equity. With shared information, it's of great interest and importance to study how Connected Vehicles (CV) could change the throughput and reduce bottlenecks. The heterogeneities of vehicle types and driver behaviors give rise to various challenges, which need to be comprehensively analyzed before we upgrade infrastructure or adopt a new traffic operation strategy. Car following models become a powerful tool to address the challenges mentioned above and answer those critical questions in the upcoming decades of the connected roadway system and self-driving technology. Car following model has been re-discovered by various researchers from different disciplines, including Traffic Engineering, Applied Math, Physics, and Computer & Electronic Engineering, thus leading to perplexities in terms of categorization. Furthermore, researchers from different fields often focus on different types of outcomes with different conceptualizations. Existing literature reviews [3-10] have significant limitations. For example, a frequently cited paper [3] did not consider Intelligent Driver Model (IDM) and Cellular Automaton (CA) models. Other literature reviews [8,9] are limited to theory-based models, inadequate to reflect the state-of-the-art. 
The unsatisfactory categorization and outdated scope of survey necessitate a more effective catalog for model classification. This paper classifies existing car-following models by proposing a new taxonomy according to the underlying logic and theoretical principle (Figure 1). ## II Theory-based Models ### _Kinematic Models_ The kinematic model is one of the oldest and most important car-following models that analyze traffic as a continuous stream with changing density. Unlike the Stimulus-Response models, Kinematic models use the basic dynamic wave theory to reason the relationship between the leading vehicle and the following vehicle and assume each driver-vehicle like an automated particle or automaton. In 1945, Herrey and Herrey [11] proposed the concept of influence space considering the minimum safety distance and used the time-space diagram from field data to verify their hypothesis. In 1953, following rules suggested in California Motor Vehicle Code, Pipes [12] proposed a minimum safety distance headway model by analyzing the longitudinal movements of vehicles. In 1959, Herman et al. [13] relate ac/deceleration, speed, and spacing to the law of motion. The advantage of these models is that parameters usually have a descriptive physical meaning. The parameters used in point-mass Kinematic models have no apparent connection with identifiable characteristics of the driver-vehicle-unit. #### Ii-1 Gipps' Model Figure 1: Proposed Taxonomy of Car Following Models Kometani and Sasaki [14] attempted to seek a safe avoidance model for car-following behavior when the leading vehicle acts unpredictably. This model only use the basic Newtonian equations of motion to pursue a safe distance between the leading and following vehicles. Gipps [15] developed a more advanced version of these explicit physics-based car-following models. Gipps put two constraints on the CF behavior: the vehicle will not exceed the driver's desired speed or a vehicle-specific free-flow speed, and the following driver selects a speed that can bring his vehicle to a safe stop. In addition, while the follower vehicle decelerates, a safety margin should be added to the driver's reaction time. The formulation is shown as follows: \[\begin{split}& v_{n}(t+\tau)=\\ & min\left\{\begin{matrix}v_{n}(t)+2.5a_{n}\tau(1-v_{n}(t)/v_{n})( 0.025+v_{n}(t)/v_{n})^{1/2},\\ b_{n}\tau+\sqrt{\begin{matrix}(b_{n}^{2}\tau^{2}-b_{n}[2[x_{n-1}(t)-s_{n-1}-x_ {n}(t)]\\ -v_{n}(t)\tau-v_{n-1}(t)^{2}/\hat{b}]\end{matrix})\end{matrix}\right\}\end{split} \tag{1}\] In the formulation, \(a_{n}\) is the maximum acceleration, \(b_{n}\)is the most severe braking, \(V_{n}\) is the speed at which the driver of vehicle n wishes to travel, \(\hat{b}\) is the estimate of \(b_{n-1}\). Gipps did not offer the calibration of this model. However, much traffic simulation software used this model to mimic microscopic traffic flow because of its explicit physic meaning. #### Iii-A2 Newell's Model Newell [16] proposed a simple CF model for a homogeneous highway traffic flow. In this model, the time-space trajectory of the \(n\)\(th\) vehicle is essentially the same as the \((n-1)\)\(th\) vehicle except for a translation in space and in time. This was a very simple rule for car following behavior compared with other models; however, it is much more accurate than many of them. 
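Before turning to Newell's formulation, the Gipps speed update of Eq. (1) can be sketched numerically as follows; the parameter values are purely illustrative and are not taken from [15] or from any calibration.

```python
import math

def gipps_speed(v_n, v_lead, gap, a_n=1.7, b_n=-3.0, b_hat=-3.0,
                V_n=30.0, tau=0.66):
    """One Gipps update (Eq. 1): the follower's speed at t + tau.
    v_n, v_lead: follower / leader speeds (m/s);
    gap: x_lead - s_lead - x_n, i.e. spacing minus the leader's effective size (m);
    a_n: max acceleration; b_n, b_hat: (negative) braking rates; V_n: desired speed."""
    v_acc = v_n + 2.5 * a_n * tau * (1 - v_n / V_n) * math.sqrt(0.025 + v_n / V_n)
    v_brk = b_n * tau + math.sqrt(
        b_n ** 2 * tau ** 2 - b_n * (2 * gap - v_n * tau - v_lead ** 2 / b_hat))
    return max(0.0, min(v_acc, v_brk))   # floor at zero to keep the sketch simple

# closing in on a slower leader: the braking branch governs the new speed
print(gipps_speed(v_n=25.0, v_lead=20.0, gap=30.0))
```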
Newell's Model is defined as:

\[x_{n}(t+\tau_{n})=x_{n-1}(t)-d_{n}, \tag{2}\]

where \(x_{n}(t)\) is a piecewise linear trajectory of vehicle \(n\), which is a simple translation of the piecewise linear \(x_{n-1}(t)\) by a distance \(d_{n}\) and a time \(\tau_{n}\). This model implies that drivers manage to adjust the distance to the preceding vehicle while travelling at the same velocity, which can meet both safety and comfort requirements.

#### Iii-A3 Cellular Automata

Cellular automata models were first introduced in the study [17], which is defined by a one-dimensional string of cells that have two states (occupied and empty). The cell updating policy is known as Rule 184. Rule 184 can also be expressed as a particle-based updating rule, described as:

\[v_{\alpha}(t+1)=\begin{cases}1,&\text{if }x_{\alpha-1}(t)-x_{\alpha}(t)>1\\ 0,&\text{otherwise}\end{cases} \tag{3}\]

\[x_{\alpha}(t+1)=x_{\alpha}(t)+v_{\alpha}(t+1) \tag{4}\]

where \(\alpha\) is the vehicle index, \(x_{\alpha}\) is the location of vehicle \(\alpha\), and \(v_{\alpha}\) is the speed of vehicle \(\alpha\). The most popular and simplest Cellular Automata (CA) model is the Nagel-Schreckenberg Model (NSM). The NSM can only update speed and acceleration in multiples of \(7.5\,m/s\) and \(7.5\,m/s^{2}\) due to the coarse discretization of space and time, in which one cell corresponds to the effective vehicle length and one time step is used under car-following mode. Some refined CA models include the Barlovic Model, which considers the slow-to-start rule, and the KKW model proposed by Kerner, Klenov, and Wolf, which allows for smaller interval states. However, the stochastic nature of the KKW model could be unstable and lead to collisions. Daganzo [18] proved that the CA model matches a triangular fundamental diagram kinematic wave model and that the CA model agrees with Newell's "lower order" car-following model. This material is based upon work supported in part by the National Science Foundation under Grant 1952096.

### _Stimulus-Response Models_

The Stimulus-Response models are based on the concept that the following vehicle adjusts acceleration (or deceleration) in response to the observed motion of the leading vehicle in order to maintain a desired speed and safety distance. Stimulus-response models are widely used for analyzing factors that might affect the car-following process by formulating it in different ways, such as adding stochastic factors from additional stimuli, variations in sensitivity parameters, and drivers' desired time gaps.

#### Iii-B1 GM Model

The GM model family is also known as the GHR (Gazis-Herman-Rothery) model, which was first developed by Chandler et al. [19] and Herman et al. [20] at the General Motors research laboratory. The GM model family is based on the stimulus-response process of humans. The model formulation is shown as follows:

\[M\frac{du_{k}(t+\Delta)}{dt}=\lambda[u_{k-1}(t)-u_{k}(t)] \tag{5}\]

where \(M\) denotes the mass of each identical vehicle, \(\lambda\) denotes the sensitivity of the driver, and \(\Delta\) denotes the response time of the driver. The basic theory of this model is that the driver of the following vehicle responds to the velocity change of the leading vehicle by accelerating or decelerating his vehicle. Many researchers contributed to the development of the GM model, especially to the form of the sensitivity \(\lambda\). Gazis et al. [21] claimed the sensitivity \(\lambda\) was inversely proportional to the space headway. Gazis et al.
[22] summarized a general form of the sensitivity \(\lambda\):

\[\lambda=\frac{\alpha\,\dot{x}_{n+1}^{m}(t+\tau)}{[x_{n}(t)-x_{n+1}(t)]^{l}} \tag{6}\]

In the following years, researchers focused on calibrating the model to obtain the values of \(m\) and \(l\) using traffic data. It was found that different data and different experiments offered different combinations of \(m\) and \(l\) with the same car-following model [23-25]. The various analyses contradict each other, which greatly hinders the application of the GM model. Some researchers expanded the GM model in other directions. Bexelius [26] assumed that drivers traveling on the route focus on several vehicles ahead. Based on this view, he presented a "multi-following model." Other researchers focused their work on the asymmetry characteristics of CF behavior. Treiterer and Myers [27] suggested that the car-following model should distinguish between acceleration and deceleration.

#### Iii-B2 Optimal Velocity Model

Optimal Velocity (OV) Models are a set of models based on the assumption that each vehicle pursues a legal velocity, which depends on the following distance to the preceding vehicle. OV models are also governed by the stimulus-response theory. The stimulus is the difference between the current and optimal velocity, and the response is again acceleration/braking, just as in GM models. Bando et al. [28] proposed the first Optimal Velocity Model as a dynamic model of traffic flow that induces traffic congestion. Each driver controls acceleration or deceleration to achieve an optimal velocity \(V(\Delta x_{n})\) according to the headway distance \(\Delta x\). The original formulation is expressed as follows:

\[\frac{dv_{n}}{dt}=a[V(\Delta x_{n})-v_{n}] \tag{7}\]

In this model, \(\Delta x_{n}=x_{n+1}-x_{n}\) denotes the distance headway between car \(n+1\) and car \(n\) at time \(t\), and \(a\) denotes the sensitivity constant of a driver. The authors chose this function as a realistic form of the legal velocity:

\[V(\Delta x)=\tanh(\Delta x-2)+\tanh 2 \tag{8}\]

In this case, the following vehicle accelerates gradually. It never passes the preceding vehicle, which can reflect the expected behavior, reproduce the congestion phenomena spontaneously, and remain stable [29]. To address the case in which the leading car is much faster, so that the following car will not brake even if its headway is smaller than the safe distance, the optimal velocity function is revised as below:

\[V\left(\Delta x_{n}(t)\right)=\frac{v_{max}\left[\tanh\left(\Delta x_{n}(t)-h_{0}\right)+\tanh\left(h_{k}\right)\right]}{2} \tag{9}\]

However, the comparison with field data suggests that high accelerations and unrealistic decelerations appear in the OVM. Helbing and Tilch [30] argue that the driver's behavior is determined not only by the motivation to reach a certain desired velocity \(v_{a}^{0}\), but also by the motivation to keep a safe distance from other cars. Their model therefore determines the total acceleration of the following vehicle from both an acceleration force and repulsive interaction forces. The resulting generalized force model is found to show some improvements when evaluated on field data. Lenz et al. [31] considered two or more nearest vehicles ahead in CF behavior, which led to a multi-vehicle CF model. However, Jiang et al. [32] found that the model built by [30] cannot describe the delay time and demonstrates unrealistic results for the kinematic wave speed at jam density.
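A minimal numerical sketch of the OVM dynamics of Eq. (7) with the revised optimal velocity function of Eq. (9) is given below; the ring-road setup, the explicit Euler integration, and all parameter values (a, v_max, h_0, h_k) are illustrative assumptions rather than values used in the cited papers.

```python
import numpy as np

def ovm_accel(v, dx, a=1.0, v_max=30.0, h0=25.0, hk=25.0):
    """Optimal Velocity Model acceleration (Eq. 7) with the revised
    optimal velocity function of Eq. (9); v: speed, dx: headway."""
    V = 0.5 * v_max * (np.tanh(dx - h0) + np.tanh(hk))
    return a * (V - v)

# small platoon on a ring road, integrated with explicit Euler steps
n, L, dt = 10, 400.0, 0.1
x = np.linspace(0.0, L, n, endpoint=False)   # evenly spaced vehicles
v = np.full(n, 15.0)
for _ in range(1000):
    dx = (np.roll(x, -1) - x) % L            # headway to the vehicle ahead
    v += ovm_accel(v, dx) * dt
    x = (x + v * dt) % L
print(v.round(2))   # uniform flow relaxes towards the optimal velocity
```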
Therefore, they took both positive and negative velocity differences into account, which is called a full velocity difference model (FVDM). Nakayama et al. [33] assume the driver should look back on travel and incorporate this effect into the Optimal Velocity Model. Coupled Map car-following model [34] was proposed by discretizing the OV model. Hasebe et al. [35] expanded their model to consider an arbitrary number of vehicles that precede or follow. Gong et al. [36] developed an asymmetric full-velocity difference, car-following model. Jin et al. [37] took into account the lateral separation characteristics between the followers and leaders on a single-lane highway based on the full optimal velocity model. Yu et al. [38] proposed a confined Full Velocity Difference (c-FVD) model that limits the accelerations or decelerations generated by the existing FVD models to a reasonable level. Chiarello et al. [39] superimposed the Follow-The-Lead (FTL) interactions and relaxation to build a traffic-dependent OVM. Their results show that OV interactions occur more frequently than FTL interactions when macroscopic description switches from the inhomogeneous Aw-Rascle-Zhang (ARZ) model to the Lighthill-Whitham-Richards (LWR) model. #### Iii-B3 Intelligent Driver Model (IDM) The Intelligent Driver Model (IDM) was proposed by Treiber et al. [40], which considers a complete and accident-free model derived from the first principles with a list of basic assumptions. IDM seems to be a reasonable basis for the Adaptive Cruise Control (ACC) system with a smooth transition between acceleration and deceleration behavior. For IDM models, the acceleration of driver \(i\) is described as a function of the distance \(s_{i}(t)\), speed \(v_{i}(t)\), and the relative speed \(\Delta v_{i}(t)\) as below: \[\frac{d}{dt}v_{i}=a\left(1-\left(\frac{v_{i}(t)}{v^{0}}\right)^{4}-\left( \frac{s^{*}(v_{i}\Delta v_{i})}{s_{i}}\right)^{2}\right) \tag{10}\] The acceleration term consists of a free acceleration term \(a^{free}=a[1-\left(\frac{v}{v_{0}}\right)^{4}]\) and a breaking interaction term \(a^{int}=-a\left(\frac{s^{*}}{s}\right)^{2}\). The desired minimum gap \(s^{*}\) is given by: \[s^{*}(v_{i},\Delta v_{i})=s_{0}+Tv+\frac{v_{i}\Delta v}{2\sqrt{ab}} \tag{11}\] Which is a summation of the minimum distance \(s_{0}\), safety distance depending on velocity \(v\) and time headway \(T\). The last term \(\frac{v_{i}\Delta v}{2\sqrt{ab}}\) is only active in non-stationary traffic flow conditions. When the following vehicle's speed is greater than the preceding vehicle, \(\Delta v\geq 0\), this term is devised to stabilize the platoon vehicle in terms of velocity. Parameters from IDM have intuitive physical interpretation, which can be tuned to simulate gentle, average, or aggressive driving styles. The Multi-anticipative IDM models [41] consider essential aspects of driver behaviors, including finite reaction times, estimation errors, and spatial and temporal anticipation. Ksting et al. [42] enhanced the IDM model to assess the impact of adaptive cruise control vehicles on traffic capacity. Liebner et al. [43] applied IDM for ADAS to incorporate the spatially varying velocity profile to represent both car-following and turning behavior. Derbel et al. [44] modified the IDM to improve driver safety and respect vehicle capability. Their results show good performance in traffic safety and string stability. Li et al. 
[45] modified IDM with power cooperation to strengthen the power of each vehicle in proportion to the immediately preceding vehicle. To build a realistic agent model, Eggert et al. [46] proposed the Foresighted Driver Model (FDM), which assumes that a driver acts in a way that balances predictive risk with utility. Hoermann et al. [47] extended the IDM for autonomous vehicles to describe a probabilistic motion prediction applicable to long-term trajectory planning. Treiber and Kesting [48] added external noise and action points to the IDM to study mechanisms behind traffic flow instabilities, indifference regions of finite human perception thresholds, and external noise. Hubmann et al. [49] extend the IDM model for complex urban scenarios by incorporating a behavior prediction framework with context-dependent upper and lower bounds on acceleration. A recent paper [50] analyzed the limitations of the IDM model and provided suggestions for improvements. #### Iii-B4 Psycho-physical Models Psycho-physical models are developed to approximate drivers' reasoning and decision process, considering the uncertainty and boundary effects in recognizing relative speeds and distances influencing their behavior. Forbes [51] introduced the human factor in car-following behavior and found that time headway should always be equal to or greater than perception-reaction time. Action Point Model [52] analyzed the perceiving threshold that distinguished the boundary of whether the driver could perceive the velocity changing relative to the vehicle ahead was proposed. This threshold was given as the visual angle to the rear width of the vehicle ahead. The visual angel (\(\theta\)) and its rate of change or angular velocity are calculated with the equations: \[\theta=2\tan^{-1}(\frac{w}{2\mu}) \tag{12}\] \[\frac{d\theta}{dt}=-w*\frac{v_{r}}{\mu^{2}} \tag{13}\] Where H is the gap between leading and following vehicles. \(Vr\) is the relative speed. \(w\) is the width of the leading vehicle. Todosiev [53] adopted a fundamental psycho-physical approach and investigated the Action point (AP) thresholds where drivers change their behavior at an action point. Evans and Rothery [54] conducted experiments to quantify the thresholds between different phases. Wiedemann and Reiter [55] developed the well-known psycho-physical car-following model found in VISSIM. Another popular simulation program, PARAMICS, has incorporated the Fritzsche car-following model [56], an acceleration-based model from the psycho-physical car-following framework. As described in [57], once the absolute value of angular velocity exceeds its threshold, a driver notices that ego-vehicle speed is different from that of the leading vehicle and reacts with an ac/deceleration opposite in sign to that of the rate of change of visual angle. Winsum [58] integrated preferred time headway and Time-to-Collision (TTC) in a mathematical car-following model based on psychological evidence that human drivers regulate available time as a control mechanism. Wang et al. [59] devised ac/deceleration model based on the driver's cognitive mechanism using the Just-Noticeable Distance (JND) concept. Lochrane et al. [60] developed a multidimensional framework for modeling thresholds to account for the different behavior in the work and nonwork zones. Durrani and Lee [61] calibrated the eight Calibration Constants (CCs) built into Wiedemann model used by VISSIM to quantify the thresholding values for drivers' APs. Wagner et al. 
[62] found that the APs are not induced around certain thresholds, as is claimed by psycho-physical car-following models. Instead, small distances indicate a slightly higher probability of finding an AP. To model vagueness, uncertainty, and subjectivity of human decision-making process, fuzzy logic was added to the stimulus-response framework [63-67]. Hao et al. [68] incorporated fuzzy logic into the five-layer structure: Perception-Anticipation-Inference-Strategy-Action. Bennajieh et al. [69] employed four strategic variables, including gap distances, the velocity of followers and leaders, and the action of followers, to build the fuzzified anticipation car-following model. However, construct of fuzzy sets and sensitivity of parameters for membership functions greatly impact the model accuracy. ### _Cruise Control Model_ Unlike the previous models designed to simulate human driver behavior and reveal the disturbances of natural traffic, cruise control models are developed to dampen the instability from the mechanistic view. Cruise control has been investigated since the 1970s [70-74]. Adaptive Cruise Control (ACC) was on the market when Mitsubishi launched the "Preview Distance Control" driver assistance system in 1995. ACC and CACC (Cooperative Adaptive Cruise Control) are heavily studied CAV-based applications. Cruise control models are designed for collaborative and autonomous car following, which significantly reduces the feasible time headway among vehicles [75-76]. Shladower et al. [77] reviewed different gap regulation strategies of ACC/CACC systems, including Constant Distance Gap, Constant Time Gap, and Constant-Safety-Factor Criterion. Some studies suggest that the Constant Time Gap policy is more robust against error propagation through traffic than the Constant Distance Gap policy [78-79]. #### Ii-C1 Linear Model Linear feedback and feedforward controller assume acceleration is proportional to the deviation from target spacing and relative speed. The first linear controller model is generally attributed to Helly [80]. He proposed a model that included the adaptation of the acceleration according to whether the vehicle in front was braking. A simple linear ACC/CACC controller with gains of speed error and spacing error is proposed by [81]. Milanes et al. [82] compared four different control techniques: PI, intelligent PI, fuzzy controller, and adaptive-network-based fuzzy control, showing that the intelligent PI has the best performance. Dias et al. [83] used a linear longitudinal force model as a feedforward controller or PID controller to compensate for the powertrain non-linearity. The LQ-based controller has been applied on ACC and CACC systems and tested under rural and urban roads in a CV environment, considering both communication delay and driver reaction time [84-85]. #### Ii-C2 Nonlinear Model Fuzzy logic-based ACC/CACC cruise control was applied to address nonlinear issues in reference [86]. As part of the AUTOPIA Program, Naranjo et al. [87] developed a fuzzy logic controller for speed and distance vehicle control, offering the capability of performing Adaptive cruise control to permit it to follow a leader. Perez et al. [88] developed cooperative adaptive cruise control (CACC) through fuzzy controllers and human driver experience. Zhang and Orosz [89] developed a distributed nonlinear controller for cooperation in multi-agent networks considering consensus and disturbance attenuation. 
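To make the linear gap-regulation idea described above concrete, here is a minimal sketch of a constant-time-gap feedback law of the kind used in simple ACC/CACC controllers; the gains and spacing parameters are illustrative assumptions and are not taken from any of the cited controllers.

```python
def acc_linear_accel(gap, v, v_lead, t_gap=1.5, s0=2.0, k_s=0.23, k_v=0.74):
    """Constant-time-gap linear ACC law (sketch).
    Commanded acceleration is a weighted sum of the spacing error
    (gap minus the desired gap s0 + t_gap * v) and the speed error."""
    spacing_error = gap - (s0 + t_gap * v)
    speed_error = v_lead - v
    return k_s * spacing_error + k_v * speed_error

# follower closing in on a slower leader: the command is to brake
# (a real controller would also saturate the command to comfort limits)
print(acc_linear_accel(gap=20.0, v=25.0, v_lead=20.0))
```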
To address the difficulties of obtaining parameters of the detailed longitudinal car-following model and account for nonlinear effects, Azziaghdam and Alankus [90] designed a two-loop non-linear controller that only uses parameters known to OEMs (Original Equipment Manufacturer). The outputs of the Inner loop are the accelerator pedal position sent to the vehicle CAN-Bus. The outer loop controls the relative distance and the relative velocity. #### Ii-C3 Model Predictive Control (MPC) This type of control algorithm is flexible and usually defined by multiple objective functions or cost functions that can directly incorporate key constraints. Bageshwar et al. [91] applied the MPC in a vehicle cut-in scenario for transitional maneuvers of ACC vehicles according to the spacing-control laws. Based on radar measurements and V2X communication, Moser et al. [92] developed a stochastic MPC with a linear cost function using a piecewise linear approximation of the fuel consumption map. Cheng et al. [93] developed a multiple-objective MPC model integrated with direct yaw moment control (DYC) to ensure vehicle dynamic stability and improve driving comfort. A generic control framework proposed in studies [94-95] assumes that accelerations of ACC/CACC vehicles are controlled to optimize a cost function reflecting different objectives, i.e., safety, comfort, efficiency, and sustainability. ## III Data-driven Models Instead of an analytic modeling framework, more recently, data-driven car-following models have achieved human-level or beyond human-level performances under complex driving environments. Unfortunately, for theory-based car-following models, it is unattainable to devise a parametric model considering all heterogeneous traffic compositions, infrastructure conditions, and varying data inputs. However, Artificial Intelligence (AI) models are adaptable to general settings. ### _Behavior Cloning Models_ Behavior Cloning (BC) models try to learn the maneuver of the human driver by extracting patterns from large amounts of vehicle driving data. The data-driven model can discover complex car-following strategies for nonlinear and high-dimensional systems. #### Iii-A1 Machine Learning Model With the availability of high-resolution vehicle trajectory data and the rampant development of machine learning (ML) algorithms and tools, novel machine learning and deep learning algorithms have been proposed. Kehtarnavaz et al. [96] developed a time-delay neural network (TDNN) autonomous vehicle following module to replace a conventional proportional and derivative/proportional and integral (PD/PI) controller and implemented it on actual vehicles. Panwai and Dia [97] developed a neural agent car-following model for mapping perceptions to actions. They compared the speed and position of individual vehicles from data to model output trajectory profiles. Khodayari et al. [98] also developed an Artificial Neural Network (ANN) to simulate and predict the car-following behavior based on the driver-vehicle units reaction delay, and input parameters include relative speed, relative distance, and follower's speed. Zheng et al. [99] developed ANN to output the following vehicle's speed by considering the driver-vehicle reaction delay in relative speed and acceleration, the gap, and the vehicle's speed. Wei and Liu [100] proposed a Support Vector Machine (SVM) approach to investigate the asymmetric characteristics of car-following behavior and its impact on traffic flow evolution. 
The Support Vector Machine model takes Space headway, velocity, and relative speed as the inputs and output the follower's speed. He et al. [101] developed a simplified k-nearest mean model that only takes the position as input and reproduces traffic dynamics without premises and calibration. This approach selects the most similar historical records and outputs the average of the k nearest neighbors. Another study [102] proposed a data-driven approach for optimizing car-following model estimation and demonstrated model performance with four vehicle platoon experiment in real world traffic condition. Yang et al. [103] developed a hybrid method by integrating theoretical-based and machine-learning models to obtain better accuracy while maintaining the physical meaning. Kamjoo et al. [104] implemented Support Vector Machine and Long-Short-Term-Memory (LSTM) models to investigate the winter operation's impact on car-following behavior, showing that the presence of snowplows leads to significantly different car-following behaviors. 2 Deep Learning Model Driving behaviors can be treated as sequential data since the driver's following action is conditioned on their previous action, often referred to as memory effects in the car-following model. The Deep Learning (DL) model, particularly Recurrent Neural Networks (RNN), is similar to humans in memory-based decision-making. Zhou et al. [105] developed an RNN with inputs of the gap, relative speed, and follower's speed to predict acceleration, speed, and gap that can be used for car-following control. Wang et al. [106] developed deep learning-based car-following with Gated Recurrent Unit (GRU) that embeds model outputs' prediction and memory effects. Wu et al. [107] built a deep learning-based model to mimic human drivers' memory, attention, and prediction (MAP) mechanisms. Wang et al. [108] analyzed the long-term memory effect in the car-following model using a deep learning model by taking various time-horizon historical information as inputs. Zhang et al. [109] built an LSTM model with hybrid retraining constrained (HRC) training method to model both car-following and lane-changing behaviors. Given that CF behaviors (acceleration, deceleration, and cruising) are made for continuous time steps rather than step-by-step with memory effects and delayed reaction time, Ma and Qu [110] developed a seq2seq LSTM model for predicting multistep car-following behaviors. Their model inputs include gap distance, relative speed difference, and the speed of the subject vehicle, and the model output is the multiple-step accelerations/decelerations. Mo et al. [111] designed Physic Informed Deep Learning architectures encoded with IDM and OVM to predict accelerations in four traffic regimes: acceleration, deceleration, cruising, and emergency braking. Zhou et al. [112] applied a transfer learning-based LSTM car-following model for adaptive cruise control to address the ACC data scarcity problem by transferring useful features from human-driven data. ### _Imitation Learning Models_ BC models are prone to cascade errors, which is a well-known problem in the literature. Minor inaccuracies in model predictions could accumulate, leading to even poorer decisions downstream. Reinforcement Learning (RL) and Inverse Reinforcement Learning (IRL) were considered efficient tools to overcome these drawbacks. Human driving practice is a continuous learning process; therefore, reinforcement learning can naturally support the human learning pattern with continual improvement. 
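Before turning to imitation learning, the behavior-cloning setup that these methods aim to improve upon can be sketched as a small PyTorch model mapping a short history of (gap, relative speed, follower speed) to the next-step acceleration; the architecture, feature choice, and training snippet are illustrative assumptions rather than the configuration of any cited paper.

```python
import torch
import torch.nn as nn

class CarFollowingLSTM(nn.Module):
    """Behavior cloning: a history of (gap, relative speed, follower speed)
    is mapped to the follower's acceleration at the next time step."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, 3)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # prediction at the last time step

model = CarFollowingLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 50, 3)                # placeholder trajectory histories
y = torch.randn(32, 1)                    # observed accelerations (labels)
opt.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
opt.step()
```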
Imitation learning (IL) has several advantages: 1. it can accommodate different driving-style preferences; 2. automated CF maneuvers would be easily understood by other human drivers.

#### III-B1 Reinforcement Learning Model

In 1993, Ioannou and Chien [113] applied RL to develop an autonomous intelligent adaptive cruise control system. Desjardins and Chaib-draa [114] introduced a policy gradient estimation algorithm and used a BPNN for longitudinal vehicle control. Their RL-based car-following control was implemented in a simulator to realize efficient CACC behavior. Kuefler et al. [115] proposed a Generative Adversarial Imitation Learning (GAIL) framework to imitate human car-following behavior, addressing three challenges: non-linearity in the mapping from states to actions, high-dimensional state representations, and stochasticity. Zhu et al. [116] proposed a framework for developing human-like car-following based on deep reinforcement learning using speed deviations as a reward function. Zhang et al. [117] developed a deterministic promotion RL algorithm (DPRL) based on an actor-critic framework and the policy gradient method for longitudinal velocity control, which was implemented in a CarSim simulated environment and a real-world experiment. Li et al. [118] proposed a deep deterministic policy gradient (DDPG)-based driving strategy taking information from in-vehicle sensors for system performance optimization. Yen et al. [119] developed a proactive car-following model using deep RL considering road efficiency, safety, and comfort, which shows better performance in terms of time headway, time to collision (TTC), and jerk (the rate of change of acceleration). Yavas et al. [120] developed a Model-Based Reinforcement Learning (MBRL) approach for ACC control with a hybrid policy that combines the Intelligent Driver Model following policy with the Deep Reinforcement Learning policy. Wang et al. [121] devised an RL model for longitudinal velocity control in a three-vehicle mode based on the safety level, consisting of the collision avoidance priority of the leading and following vehicles and the expected acceleration/deceleration decision. Most CF models take relative speed, acceleration, and gap as inputs, while some studies [122-123] directly use video object detection to learn the driving strategy. Such models demonstrate promising results and can efficiently be deployed on embedded devices for advanced driver assistance systems.

#### III-B2 Inverse Reinforcement Learning Model

Instead of learning a policy from a reward function, IRL tries to learn a reward function from a policy or demonstration of a task [124]. With naturalistic driving behavior data, IRL extracts an approximation of the human driver's reward function for the CF task. Shimosaka et al. [125] proposed an IRL framework to model risk anticipation and defensive driving, which yields better long-term prediction of driver maneuvers. Shimosaka et al. [126] developed multiple reward functions with clustering of environments based on the driving behavior of professional drivers. Gao et al. [127] proposed a car-following algorithm with a reward function based on IRL under different conditions. The experimental verification is conducted on a dynamic driving simulation test bench. Zhou [128] developed a Maximum Entropy Deep IRL approach using a neural network to estimate the rewards. Model input features include time headway, relative speed, and maximum speed, and the model output is speed control. Zhao et al.
[129] developed a Personalized-ACC framework using IRL, which classified the driver and weather conditions for the pretrained off-line IRL model and compared it with the IDM model in terms of safety and comfort measured by takeover percentage.

## IV Summary and Outlook

Analyzing car-following behavior and developing intelligent vehicle control algorithms have great value for road safety and energy efficiency. Such analysis can be used to reveal the growth pattern of traffic oscillations and to perform local and string stability analyses. The profound impacts of the different car-following strategies (e.g., IDM, OVM, model-predictive control, and data-driven control) and various connected environments have set the stage for a new era of traffic flow dynamics. The main challenges and opportunities for future longitudinal car-following research lie in data and models. A great deal of theory-based driving behavior research is now merging with data-driven opportunities. Figure 2 shows a storyline of the surveyed car-following models. Although data-driven models will undoubtedly become critically important, traditional driver-behavior expertise remains relevant. It is desirable to integrate theory-based models with the fast-evolving AI models, which can reason, learn, and fill in missing information to achieve better interpretability, generalizability, and reliability.

Fig. 2: Storyline of Surveyed Car-Following Models
2306.11158
Mind the Cap! -- Constrained Portfolio Optimisation in Heston's Stochastic Volatility Model
We consider a portfolio optimisation problem for a utility-maximising investor who faces convex constraints on his portfolio allocation in Heston's stochastic volatility model. We apply the duality methods developed in previous work to obtain a closed-form expression for the optimal portfolio allocation. In doing so, we observe that allocation constraints impact the optimal constrained portfolio allocation in a fundamentally different way in Heston's stochastic volatility model than in the Black Scholes model. In particular, the optimal constrained portfolio may be different from the naive capped portfolio, which caps off the optimal unconstrained portfolio at the boundaries of the constraints. Despite this difference, we illustrate by way of a numerical analysis that in most realistic scenarios the capped portfolio leads to slim annual wealth equivalent losses compared to the optimal constrained portfolio. During a financial crisis, however, a capped solution might lead to compelling annual wealth equivalent losses.
Marcos Escobar-Anel, Michel Kschonnek, Rudi Zagst
2023-06-19T20:54:27Z
http://arxiv.org/abs/2306.11158v1
# Mind the Cap! - Constrained Portfolio Optimisation in Heston's Stochastic Volatility Model

###### Abstract

We consider a portfolio optimisation problem for a utility-maximising investor who faces convex constraints on his portfolio allocation in Heston's stochastic volatility model. We apply the duality methods developed in Escobar-Anel _et al._ (2023) to obtain a closed-form expression for the optimal portfolio allocation. In doing so, we observe that allocation constraints impact the optimal constrained portfolio allocation in a fundamentally different way in Heston's stochastic volatility model than in the Black Scholes model. In particular, the optimal constrained portfolio may be different from the naive 'capped' portfolio, which caps off the optimal unconstrained portfolio at the boundaries of the constraints. Despite this difference, we illustrate by way of a numerical analysis that in most realistic scenarios the capped portfolio leads to slim annual wealth equivalent losses compared to the optimal constrained portfolio. During a financial crisis, however, a capped solution might lead to compelling annual wealth equivalent losses.

_Keywords:_ Portfolio Optimisation, Allocation Constraints, Dynamic Programming, Heston's Stochastic Volatility Model, Incomplete Markets. _JEL Classification:_ G11, C61

## 1 Introduction

Despite its widespread use in both academia and the financial industry, it has been well-documented in the mathematical finance literature that a variety of properties of financial time series, so-called stylised facts, are not captured by the Black-Scholes model (see e.g. Cont (2001), Lux (2009), Weatherall (2018)). A major point of criticism is the constant volatility of modelled log returns, whereas the volatility of empirical returns appears to be time-dependent and random (Taylor (1994)). The stochastic volatility model proposed by Heston (1993) aims to close this gap by modelling the volatility of log returns as a Cox-Ingersoll-Ross process (CIR process). While the Heston model was originally proposed in the context of option pricing, its analytical tractability has led to insightful applications in continuous-time portfolio optimisation (see e.g. Kraft (2005)), which is the subject of this article. Specifically, we consider the portfolio optimisation problem of an investor trading continuously in a financial market consisting of one risk-free asset and one risky asset with stochastic Heston volatility. The investor seeks to maximise his expected utility derived from terminal wealth at a finite time point \(T>0\) under the condition that his portfolio allocation \(\pi\) abides by given convex allocation constraints \(K\). The considered optimisation problem involves two major facets, which differ from the original portfolio optimisation setting of Merton (1971): stochasticity of the volatility of risky asset log returns ('stochastic volatility') and the presence of convex allocation constraints. The following paragraphs present a brief overview of the relevant literature with respect to both of these facets. _(i) Portfolio Optimisation in Financial Markets with Stochastic Volatility._ Our continuous time set-up can be traced back to the seminal work of Merton (1971), who used dynamic programming methods to derive explicit solutions to the unconstrained dynamic portfolio optimisation problem for an investor with HARA utility function in a Black-Scholes model with constant volatility.
Generalisations for different utility functions and more complex financial markets were achieved by employing martingale methods (Pliska (1986) and Karatzas _et al._ (1987)), which rely heavily on the assumption that all contingent claims in the financial market are replicable. However, this assumption is not generally satisfied for financial markets with stochastic volatility, which means that martingale methods are not directly applicable, unless the financial market is artificially completed by the addition of fictitious, volatility-dependent assets (see e.g. Liu and Pan (2003), Branger _et al._ (2008), Egloff _et al._ (2010), Escobar-Anel _et al._ (2017), and Chen _et al._ (2021)). Without such completion, the solvability of the portfolio optimisation problem in financial markets with stochastic volatility is often directly linked to the solvability of the associated HJB PDE.1 Solutions to such portfolio optimisation problems were first characterised in terms of viscosity solutions to the associated HJB PDE for multi-factor models in Zariphopoulou (2001), and explicit closed-form solutions for Heston's stochastic volatility model were first derived and formally verified in Kraft (2005). Subsequently, Liu (2006) derived explicit solution formulae for an optimal consumption and portfolio allocation problem in a general multi-factor model, where the factor is a quadratic diffusion. Kallsen and Muhle-Karbe (2010) used the notion of an opportunity process and semi-martingale characteristics to develop an approach which leads to closed-form solutions for the optimal portfolio process in a range of exponentially affine stochastic volatility models, including the Heston model and the jump model of Geman _et al._ (2003). The PCSV model and the Wishart process were considered as multi-dimensional extensions of the Heston model for multidimensional asset universes with stochastic correlation in Escobar-Anel _et al._ (2017) and Bauerle and Li (2013). An extensive overview of related papers in the field of dynamic portfolio optimisation in stochastic factor models can be found in Zariphopoulou (2009). Footnote 1: Note that obtaining and formally verifying the optimality of a candidate portfolio process requires more than just a solution to the associated HJB PDE, as pointed out by Korn and Kraft (2004). _(ii) Portfolio Optimisation in Financial Markets with Convex Allocation Constraints._ Convex allocation constraints were first considered in the context of continuous-time dynamic portfolio optimisation in Karatzas _et al._ (1991) and Cvitanic and Karatzas (1992). The authors showed that a solution to the primal constrained portfolio optimisation problem could be derived by considering a family of 'auxiliary' unconstrained portfolio optimisation problems, determining the optimal portfolios for these problems, and selecting the least favourable portfolio among them. For CRRA-utility functions, Cvitanic and Karatzas (1992) determined the optimal constrained portfolio process in closed-form up to a deterministic minimiser of a real convex optimisation problem. Zariphopoulou (1994) considered a similar setting in a one-dimensional Black Scholes market, but did not employ any duality techniques. Rather, the author characterised the value function as the unique viscosity solution of the associated HJB PDE and gave a semi-explicit expression of the optimal portfolio allocation in terms of the value function. 
From both Cvitanic and Karatzas (1992) and Zariphopoulou (1994), one can easily see that the optimal constrained portfolio allocation for an investor with a CRRA utility function in the Black-Scholes model is equal to the unconstrained optimal solution if the constraints are satisfied, and is otherwise capped at the boundary of the constraint. Ever since, allocation constraints have been integrated into portfolio optimisation problems in a myriad of ways (see e.g. Cuoco (1997), Pham (2002), Lindberg (2006), Mnif (2007), Bian _et al._ (2011), Nutz (2012) and Dong and Zheng (2020)). Regardless, explicit solutions for the optimal portfolio allocation were obtained only on rare occasions. The availability of explicit expressions appears to strongly depend on the interplay between the chosen financial model, utility function and constraint. In this spirit, Escobar-Anel _et al._ (2023) recently developed a duality approach, which enabled them to state a condition under which the exponentially affine separability structure of the value function (as discussed in Liu (2006)) is retained under the addition of convex allocation constraints. If this condition is satisfied, then a candidate for the optimal constrained portfolio allocation is known up to the solution of a Riccati ODE and a deterministic minimiser of a convex optimisation problem. Further, the authors showed that this condition is satisfied for Heston's stochastic volatility model, but failed to determine explicit solutions to the associated ODEs and did not formally verify the optimality of the candidate portfolio allocation. In this paper, we make a threefold contribution to this literature: * We complement the work of Escobar-Anel _et al._ (2023) by deriving the first explicit, closed-form expression for the optimal portfolio allocation \(\pi^{*}\) in Heston's stochastic volatility model under the presence of convex allocation constraints. This optimality is verified formally in a verification theorem. * We show that the optimal portfolio allocation \(\pi^{*}\) may be different from the capped optimal unconstrained portfolio allocation \(\pi_{u}\) for Heston's stochastic volatility model. In particular, we prove an equivalent characterisation which describes when these two portfolio are different. We conduct a numerical study to show that this difference is slim for calm market scenarios, but can lead to significant annual wealth equivalent losses during turbulent market scenarios. * We extend these results from the one-dimensional Heston's stochastic volatility model to a multi-dimensional PCSV model with constraints on the exposures to individual stochastic market factors and to generalised financial markets with inverse volatility constraints. The remainder of this paper is structured as follows: In Section 2, we introduce and solve the constrained portfolio optimisation problem (**P**) in Heston's stochastic volatility model. Specifically, we derive a solution to the HJB PDE associated with (**P**) and the candidate optimal portfolio \(\pi^{*}\) in Section 2.1, verify its optimality formally by proving a verification theorem in Section 2.2 and discuss its relation to the optimal unconstrained portfolio \(\pi_{u}\) in Section 2.3. In Section 3, we consider a generalised financial market model which depends on a multi-dimensional CIR process and derive the optimal portfolio in the PCSV model under exposure constraints (Section 3.1) and in general financial markets with inverse volatility constraints (Section 3.2). 
In Section 4, we illustrate our theoretical results for Heston's stochastic volatility model in a numerical analysis, where we analyse the wealth equivalent loss of the optimal constrained portfolio for a Black Scholes model (Section 4.1) and the capped optimal unconstrained portfolio for Heston's stochastic volatility model (Section 4.2). Section 5 concludes the paper. All proofs relating to this paper can be found in Appendix A.

## 2 Heston's Stochastic Volatility Model

We consider a finite time horizon \(T>0\) and a complete, filtered probability space \((\Omega,\mathcal{F}_{T},\mathbb{F}=(\mathcal{F}_{t})_{t\in[0,T]},Q)\), in which the filtration \(\mathbb{F}\) is generated by two independent Wiener processes \(\big{(}\tilde{W},W^{z}\big{)}=\big{(}\tilde{W}(t),W^{z}(t)\big{)}_{t\in[0,T]}\). We define a financial market \(\mathcal{M}_{H}\), consisting of one risk-free asset \(P_{0}\) and one risky asset \(P_{1}\), where the risky asset's instantaneous variance is driven by a CIR process \(z\). Specifically, the assets satisfy \(P_{0}(0)=P_{1}(0)=1\) and follow the dynamics \[dP_{0}(t) =P_{0}(t)rdt,\] \[dP_{1}(t) =P_{1}(t)\Big{(}\big{(}r+\eta\cdot z(t)\big{)}dt+\sqrt{z(t)}\underbrace{\big{(}\rho dW^{z}(t)+\sqrt{1-\rho^{2}}d\tilde{W}(t)\big{)}}_{=:dW(t)}\Big{)},\] where \(z(0)=z_{0}>0\) and \[dz(t)=\kappa(\theta-z(t))dt+\sigma\sqrt{z(t)}dW^{z}(t).\] The market coefficients \(r,\eta,\kappa,\theta,\sigma,z_{0}\) are assumed to be positive constants, and \(\rho\in(-1,1)\). Lastly, it is assumed that Feller's condition holds for the parameters of \(z\), i.e. \(2\kappa\theta>\sigma^{2}\), and, therefore, \(z(t)\) is guaranteed to take only positive values with probability \(1\) (see Gikhman (2011)). The wealth process \(V^{v_{0},\pi}\) of an investor trading in \(\mathcal{M}_{H}\) according to a relative portfolio process \(\pi\) and initial wealth \(v_{0}>0\) now satisfies the usual SDE \[dV^{v_{0},\pi}(t)=V^{v_{0},\pi}(t)\Big{(}\big{[}r+\eta z(t)\pi(t)\big{]}dt+\pi(t)\cdot\sqrt{z(t)}dW(t)\Big{)}. \tag{1}\] In this context, the portfolio process \(\pi=\big{(}\pi(t)\big{)}_{t\in[0,T]}\) is one-dimensional and represents the fraction of wealth invested in the risky asset \(P_{1}\) at time \(t\). The remaining fraction \(1-\pi(t)\) is invested in the risk-free asset \(P_{0}\). We restrict our analysis to portfolio processes \(\pi\) which guarantee that a unique solution to (1) exists, i.e. to \(\pi\) in \[\Lambda=\bigg{\{}\pi=\big{(}\pi(t)\big{)}_{t\in[0,T]}\text{ progr. measurable }\Big{|}\ \int_{0}^{T}z(t)\big{(}\pi(t)\big{)}^{2}dt<\infty\ Q-a.s.\bigg{\}} \tag{2}\] If \(\pi\in\Lambda\), it is straightforward to show that the unique solution \(V^{v_{0},\pi}(t)\) to (1) can be expressed in closed-form as \[V^{v_{0},\pi}(t)=v_{0}\exp\Big{(}\int_{0}^{t}r+\eta z(s)\pi(s)-\frac{1}{2}z(s)\pi(s)^{2}ds+\int_{0}^{t}\pi(s)\sqrt{z(s)}dW(s)\Big{)}.
\tag{3}\] For a closed convex set \(K\subset\mathds{R}\cup\{\infty,-\infty\}=:\mathds{R}\) with non-empty interior and CRRA utility function \(U(v)=\frac{1}{b}v^{b}\) with \(b<1\) and \(b\neq 0\), we consider the portfolio optimisation problem \[(\mathbf{P})\begin{cases}\Phi(v_{0})&=\sup\limits_{\pi\in\Lambda_{K}}\mathbb{ E}\left[U(V^{v_{0},\pi}(T))\right]\\ \Lambda_{K}&=\big{\{}\pi\in\Lambda\ \big{|}\ \pi(t)\in K\ \mathcal{L}[0,T]\otimes Q- \text{a.e.}\big{\}}\,.\end{cases}\] As the considered financial market contains only one risky asset, the set of allocation constraints \(K\) is a subset of the extended real numbers \(\mathds{R}\). However, in this one-dimensional setting, any such closed convex set \(K\subset\bar{\mathds{R}}\) with non-empty interior can be expressed as an interval of the form1 Footnote 1: As any \(\pi\in\Lambda\) can only take finite values \(\mathcal{L}[0,T]\otimes Q\)-a.s., we do not need to distinguish between \((-\infty,\beta]\) and \([-\infty,\beta]\) or \([\alpha,\infty)\) and \([\alpha,\infty]\) for any \(-\infty\leq\alpha,\beta\leq\infty\). \[K=[\alpha,\beta],\quad\text{with}\quad-\infty\leq\alpha<\beta\leq\infty. \tag{4}\] We make substantial use of this in the subsequent analysis. ### Solution to HJB PDE We approach (**P**) using classic stochastic optimal control methods. For this purpose, let us introduce the generalised primal portfolio optimisation problem (\(\mathbf{P}^{(\mathbf{t},\mathbf{v},\mathbf{z})}\)) as \[(\mathbf{P}^{(\mathbf{t},\mathbf{v},\mathbf{z})})\begin{cases}\Phi(t,v,z)&= \sup_{\pi\in\Lambda_{K}(t)}\mathbb{E}\big{[}U(V^{v_{0},\pi}(T))\ \big{|}\ V^{v_{0},\pi}(t)=v,\ z(t)=z\big{]}\\ \Lambda_{K}(t)&=\left\{\left(\pi(s)\right)_{s\in[t,T]}\ \big{|}\ \pi\in\Lambda_{K} \right\}.\end{cases}\] Then, the HJB equation associated with (\(\mathbf{P}^{(\mathbf{t},\mathbf{v},\mathbf{z})}\)) is given by \[0 =\sup_{\pi\in K}\Big{(}G_{t}+v(r+\eta z\pi)G_{v}+\kappa(\theta-z)G _{z}+\frac{1}{2}v^{2}z\pi^{2}G_{vv}+\rho vz\pi\sigma G_{vz}+\frac{1}{2}\sigma^ {2}zG_{zz}\Big{)} \tag{5}\] \[G(T,v,z) =U(v), \tag{6}\] Any (sufficiently regular) solution \(G\) to (5) naturally yields a candidate optimal portfolio \(\pi^{*}\) to (\(\mathbf{P}^{(\mathbf{t},\mathbf{v},\mathbf{z})}\)) (and therefore (**P**)) as the maximising argument of (5). In their work on dynamic stochastic-factor models, Escobar-Anel _et al._ (2023) characterised \(G\) as an exponentially affine function, whose exponents satisfy certain Riccati ODEs. To recall their results, we need to introduce the support function \(\delta_{K}:\mathbb{R}\rightarrow\bar{\mathbb{R}}\) of \(K\) as \[\delta_{K}(x)=-\inf_{y\in K}\left(x\cdot y\right)\overset{K=[\alpha,\beta]}{=}- \alpha x1_{\{x>0\}}-\beta x1_{\{x<0\}}.\] **Lemma 2.1**.: _Let \(A\) and \(B\) be solutions to the system of ODEs_ \[A^{\prime}(\tau) =br+\kappa\theta B(\tau), \tag{7}\] \[B^{\prime}(\tau) =-\kappa B(\tau)+\frac{1}{2}\sigma^{2}\left(B(\tau)\right)^{2}+ \frac{1}{2}\frac{b}{1-b}\inf_{\lambda\in\mathbb{R}}\left(2(1-b)\delta_{K}( \lambda)+(\eta+\lambda+\sigma\rho B(\tau))^{2}\right), \tag{8}\] _with initial condition \(A(0)=B(0)=0\). Then, \(G(t,v,z)=\frac{1}{b}v^{b}\exp\left(A(T-t)+B(T-t)z\right)\) is a solution to (5)._ Given the minimising argument and the solution \(B\) from (8), a candidate optimal portfolio \(\pi^{*}\) for (**P**) is known. 
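Before the closed-form expressions are derived below, a candidate can already be computed numerically from Lemma 2.1: the sketch below integrates the ODE system (7) and (8) with a simple Euler scheme, evaluating the infimum over \(\lambda\) by a bounded one-dimensional search. The parameter values, the search interval, and the step count are illustrative assumptions of this sketch and are not taken from the paper.

```python
# Euler discretisation of ODEs (7)-(8) for A and B when K = [alpha, beta]
# (a minimal numerical sketch under assumed, uncalibrated parameters).
import numpy as np
from scipy.optimize import minimize_scalar

r, eta, kappa, theta, sigma, rho, b = 0.02, 2.0, 3.0, 0.3, 0.7, -0.7, -2.5
alpha, beta, T = 0.0, 1.0, 1.0

def delta_K(lam):
    """Support function of K = [alpha, beta] as defined above."""
    return -alpha * lam if lam > 0 else (-beta * lam if lam < 0 else 0.0)

def inf_term(B):
    """inf over lambda of 2(1-b) delta_K(lambda) + (eta + lambda + sigma*rho*B)^2,
    found by a bounded numerical search (the closed form follows in Lemma 2.3)."""
    obj = lambda lam: 2 * (1 - b) * delta_K(lam) + (eta + lam + sigma * rho * B) ** 2
    return minimize_scalar(obj, bounds=(-50.0, 50.0), method="bounded").fun

n_steps, dtau = 1000, T / 1000
A = B = 0.0
for _ in range(n_steps):
    dB = -kappa * B + 0.5 * sigma**2 * B**2 + 0.5 * b / (1 - b) * inf_term(B)  # cf. (8)
    dA = b * r + kappa * theta * B                                             # cf. (7)
    A, B = A + dA * dtau, B + dB * dtau

print(f"A(T) ~ {A:.4f},  B(T) ~ {B:.4f}")
```

Lemma 2.3 below replaces the numerical minimisation by the explicit expression (11), and Theorem 2.5 then gives \(B\) in closed form.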
If we define the sequence of stopping times \(\tau_{n,t}\) as \(\tau_{n,t}=\min(T,\hat{\tau}_{n,t})\), with \[\hat{\tau}_{n,t}=\inf\Big{\{}t\leq u\leq T\ \Big{|} \int_{t}^{u}\Big{(}b\cdot\sqrt{z(s)}\cdot\pi(s)\cdot G(s,V^{v_{0 },\pi^{*}}(s),z(s))\Big{)}^{2}\,ds\geq n,\] \[\int_{t}^{u}\Big{(}\sigma\sqrt{z(s)}\cdot B(T-s)\cdot G(s,V^{v_{ 0},\pi^{*}}(s),z(s))\Big{)}^{2}\,ds\geq n\Big{\}},\] then we can give a uniform integrability condition which guarantees that the candidate optimal portfolio \(\pi^{*}\) is indeed optimal for (**P**). **Lemma 2.2**.: _Consider \(A,B\) and \(G\) from Lemma 2.1. Moreover, define_ \[\lambda^{*}(B)=\underset{\lambda\in\mathbb{R}}{\text{argmin}}\left\{2(1-b) \delta_{K}(\lambda)+(\eta+\lambda+\sigma\rho B)^{2}\right\} \tag{9}\] \[\pi^{*}(t)=\frac{1}{1-b}\Big{(}\eta+\lambda^{*}(B(T-t))+\sigma\rho B(T-t)\Big{)}. \tag{10}\] _If \(\big{(}G\left(\tau_{n,t},V^{v_{0},\pi^{*}}(\tau_{n,t}),z(\tau_{n,t})\right)\big{)}_{n \in\mathds{N}}\) is uniformly integrable for every \(t\in[0,T]\), then \(\pi^{*}\) is optimal for \((\mathbf{P})\) and \(\Phi(t,z,v)=G(t,v,z)\) for all \((t,v,z)\in[0,T]\times(0,\infty)\times(0,\infty)\)._ Lemma 2.1 and Lemma 2.2 naturally lead to a three-step procedure for finding the optimal portfolio \(\pi^{*}\) for \((\mathbf{P})\): 1. Determine the minimising argument \(\lambda^{*}\) in (9). 2. Determine the solution \(B\) to ODE (8) and thereby the candidate optimal portfolio \(\pi^{*}\) from (10). 3. Verify that \(\pi^{*}\) satisfies the uniform integrability condition from Lemma 2.2. In the following, we complete these steps consecutively and complete the results of Escobar-Anel _et al._ (2023) by providing a fully closed-form solution for the optimal allocation-constrained portfolio in Heston's stochastic volatility model via steps (i) and (ii) and formally verifying its optimality in step (iii). As \(K\) is an interval, as specified in (4), the ODE (8) can be written as a composition of three Riccati ODEs - each with constant coefficients. **Lemma 2.3**.: _Define \(B_{-}=\frac{(1-b)\alpha-\eta}{\sigma}\) and \(B_{+}=\frac{(1-b)\beta-\eta}{\sigma}\). Then, the minimising argument \(\lambda^{*}\), as in (9), is given as_ \[\lambda^{*}(B)= \big{[}(1-b)\alpha-(\eta+\sigma\rho B)\,\big{]}\mathbbm{1}_{\{ \rho B<B_{-}\}}+\big{[}(1-b)\beta-(\eta+\sigma\rho B)\,\big{]}\mathbbm{1}_{\{ \rho B>B_{+}\}}. 
\tag{11}\] _Moreover, \(B(\tau)\) is a solution to (8) if and only if \(B(0)=0\) and_ \[\begin{split} B^{\prime}(\tau)=&\ \Big{(}-\underbrace{\frac{1}{2}b\alpha\big{(}(1-b)\alpha-2\eta\big{)}}_{=r_{0}^{-}}+\underbrace{\big{(}b\sigma\rho\alpha-\kappa\big{)}}_{=r_{1}^{-}}B(\tau)+\frac{1}{2}\underbrace{\sigma^{2}}_{=r_{2}^{-}}\big{(}B(\tau)\big{)}^{2}\Big{)}\mathbbm{1}_{\{\rho B(\tau)<B_{-}\}}\\ &+\Big{(}-\underbrace{\frac{-b}{2(1-b)}\eta^{2}}_{=r_{0}}+\underbrace{\big{(}\frac{b}{1-b}\eta\sigma\rho-\kappa\big{)}}_{=r_{1}}B(\tau)+\frac{1}{2}\underbrace{\sigma^{2}\big{(}1+\frac{b}{1-b}\rho^{2}\big{)}}_{=r_{2}}\big{(}B(\tau)\big{)}^{2}\Big{)}\mathbbm{1}_{\{B_{-}\leq\rho B(\tau)\leq B_{+}\}}\\ &+\Big{(}-\underbrace{\frac{1}{2}b\beta\big{(}(1-b)\beta-2\eta\big{)}}_{=r_{0}^{+}}+\underbrace{\big{(}b\sigma\rho\beta-\kappa\big{)}}_{=r_{1}^{+}}B(\tau)+\frac{1}{2}\underbrace{\sigma^{2}}_{=r_{2}^{+}}\big{(}B(\tau)\big{)}^{2}\Big{)}\mathbbm{1}_{\{B_{+}<\rho B(\tau)\}}\\ =&\ \Big{(}-r_{0}^{-}+r_{1}^{-}B(\tau)+\frac{1}{2}r_{2}^{-}\big{(}B(\tau)\big{)}^{2}\Big{)}\mathbbm{1}_{\{\rho B(\tau)<B_{-}\}}\\ &+\Big{(}-r_{0}+r_{1}B(\tau)+\frac{1}{2}r_{2}\big{(}B(\tau)\big{)}^{2}\Big{)}\mathbbm{1}_{\{B_{-}\leq\rho B(\tau)\leq B_{+}\}}\\ &+\Big{(}-r_{0}^{+}+r_{1}^{+}B(\tau)+\frac{1}{2}r_{2}^{+}\big{(}B(\tau)\big{)}^{2}\Big{)}\mathbbm{1}_{\{B_{+}<\rho B(\tau)\}}.\end{split}\tag{12}\] _Remark 1_.: By restricting the minimisation in ODE (8) from \(\lambda\in\mathds{R}\) to one of the three optimal values \(\lambda\in\{(1-b)\alpha-(\eta+\sigma\rho B)\,,(1-b)\beta-(\eta+\sigma\rho B)\,,0\}\) (cf. (11)), we may use (12) to write \[B^{\prime}(\tau)=-\kappa B(\tau)+\frac{1}{2}\sigma^{2}\left(B(\tau)\right)^{2}+\frac{1}{2}\frac{b}{1-b}\inf_{\lambda\in\mathds{R}}\left(2(1-b)\delta_{K}(\lambda)+(\eta+\lambda+\sigma\rho B(\tau))^{2}\right)\] \[=\min\left(-r_{0}^{-}+r_{1}^{-}B(\tau)+\frac{1}{2}r_{2}^{-}\big{(}B(\tau)\big{)}^{2},-r_{0}+r_{1}B(\tau)+\frac{1}{2}r_{2}\big{(}B(\tau)\big{)}^{2},-r_{0}^{+}+r_{1}^{+}B(\tau)+\frac{1}{2}r_{2}^{+}\big{(}B(\tau)\big{)}^{2}\right)\] \[=:f(B(\tau)).\] The coefficients \(r_{2}^{-}\), \(r_{2}\) and \(r_{2}^{+}\) are non-negative, and therefore \(f\) is the minimum of three convex functions. As real convex functions are locally Lipschitz continuous and Lipschitz continuity is preserved when taking the minimum over a finite number of functions, \(f\) is locally Lipschitz continuous too. Hence, by the existence and uniqueness theorem of Picard-Lindelöf, there exists a unique solution \(B\) to (8) for small \(\tau>0\). Moreover, as \(f\) does not depend on \(\tau\), the ODE for \(B\) is autonomous and its solution \(B\) is either constant (if \(f(0)=0\)) or strictly monotone in \(\tau\) (if \(f(0)\neq 0\)). Analogous arguments can be used to conclude the (strict) monotonicity of \(B_{u}(\tau)\) from Corollary 2.10 in Section 2.3. _Remark 2_.: Note that if \(B\) is a solution to (8), then Lemma 2.2 and Lemma 2.3 imply \[\pi^{*}(t)=\frac{\eta+\lambda^{*}(B(T-t))+\sigma\rho B(T-t)}{1-b}=\begin{cases}\alpha,&\rho B(T-t)<B_{-}\\ \frac{\eta+\sigma\rho B(T-t)}{1-b},&B_{-}\leq\rho B(T-t)\leq B_{+}\\ \beta,&B_{+}<\rho B(T-t).\end{cases}\] Therefore, the zones \(Z_{-}=(-\infty,B_{-})\), \(Z_{0}=[B_{-},B_{+}]\) and \(Z_{+}=(B_{+},\infty)\) determine whether the allocation constraint \(K=[\alpha,\beta]\) is enforced for the candidate optimal portfolio process \(\pi^{*}\) from Lemma 2.2.
Moreover, we may define \(\hat{\pi}^{*}(t):=\frac{1}{1-b}\left(\eta+\sigma\rho B(T-t)\right)\) and express \(\pi^{*}\) as a capped version of \(\hat{\pi}^{*}\), i.e. \[\pi^{*}(t)=\operatorname{Cap}(\hat{\pi}^{*}(t),\alpha,\beta):=\begin{cases} \alpha,&\hat{\pi}^{*}(t)<\alpha\\ \hat{\pi}^{*}(t),&\alpha\leq\hat{\pi}^{*}(t)\leq\beta\\ \beta,&\beta<\hat{\pi}^{*}(t).\end{cases}\] In a true constrained context, i.e. \(K\neq\mathbb{R}\), we may either determine an approximation of \(B\) by using a suitable numerical ODE solver (such as an Euler method) to solve the ODE (12) or directly derive an explicit expression for \(B\) by individually solving each of the three Riccati ODEs in (12) and merging the solutions at the transition points between the zones \(Z_{-}\), \(Z_{0}\) and \(Z_{+}\). To ensure that such solutions exist and do not explode before time \(T\), we need to make the following assumption on \(\mathcal{M}_{H}\) and the constraints \(K=[\alpha,\beta]\). **Assumption 2.4**: 1. _Existence of Solution:_ \[\max\left\{\begin{array}{l}\frac{b}{1-b}\eta\Big{(}\frac{\kappa\rho}{\sigma}+ \frac{1}{2}\Big{)},\\ b\alpha\left(\eta-\frac{1}{2}\alpha+\frac{\kappa\rho}{\sigma}+\frac{1}{2} \alpha b(1-\rho^{2})\right),\\ b\beta\left(\eta-\frac{1}{2}\beta+\frac{\kappa\rho}{\sigma}+\frac{1}{2} \beta b(1-\rho^{2})\right)\end{array}\right\}<\frac{\kappa^{2}}{2\sigma^{2}}\] 2. _No Blow-Up:_ _The coefficients of each of the three Riccati ODEs satisfy_ \(t_{+}(B_{0})>T\) _(cf. Lemma_ B.4_, (ii)) for each initial value_ \[B_{0}\in\left\{\left(\frac{B_{-}}{\rho}\right)\mathbb{1}_{\{\rho\neq 0\}}, \left(\frac{B_{+}}{\rho}\right)\mathbb{1}_{\{\rho\neq 0\}},0\right\}.\] Provided that Assumption 2.4 holds, the coefficients \[r_{3}^{-}=\sqrt{(r_{1}^{-})^{2}+2r_{0}^{-}r_{2}^{-}},\quad r_{3}=\sqrt{(r_{1} )^{2}+2r_{0}r_{2}},\quad r_{3}^{+}=\sqrt{(r_{1}^{+})^{2}+2r_{0}^{+}r_{2}^{+}} \tag{13}\] are well-defined and the solutions to each of the Riccati ODEs (12) do not blow up before time \(T\) when started at any of the transition points between the zones \(Z_{-}\), \(Z_{0}\) and \(Z_{+}\).1 For this reason, we define the following auxiliary functions: Footnote 1: Technically, one can formulate this assumption less restrictively by expressing ‘No Blow-Up’ in terms of the time spent in each of the zones \(Z_{-}\), \(Z_{0}\) and \(Z_{+}\). However, as this would significantly complicate the presentation without adding major additional insights, it is omitted here. * Let \(\hat{B}^{+}\), \(\hat{B}\), and \(\hat{B}^{-}\) be the solution to Riccati ODE (B4) with initial value \(0\) as well as coefficients \(r_{0}^{+}\), \(r_{1}^{+}\), \(r_{2}^{+}\), \(r_{0}\), \(r_{1}\), \(r_{2}\) and \(r_{0}^{-}\), \(r_{1}^{-}\), \(r_{2}^{-}\), respectively. * If \(\rho\neq 0\), let \(\hat{B}^{+}_{+}\), \(\hat{B}^{-}_{-}\) be the solution to Riccati ODE (B4) with initial value \(\frac{B_{+}}{\rho}\), \(\frac{B_{-}}{\rho}\) and coefficients \(r_{0}^{+}\), \(r_{1}^{+}\), \(r_{2}^{+}\), \(r_{0}^{-}\), \(r_{1}^{-}\), \(r_{2}^{-}\), respectively. * If \(\rho\neq 0\), let \(\hat{B}_{+}\), \(\hat{B}_{-}\) be the solution to Riccati ODE (B4) with initial value \(\frac{B_{+}}{\rho}\), \(\frac{B_{-}}{\rho}\), respectively, and coefficients \(r_{0}\), \(r_{1}\), \(r_{2}\). Moreover, if \(\rho\neq 0\), we define the transition times2 Footnote 2: If \(\rho=0\) all of these transition times will be infinite. 
\[\tau_{1}^{+}=\inf\left\{\tau\ |\ \hat{B}^{+}(\tau)=\frac{B^{+}}{ \rho}\right\},\quad\tau_{2}^{+}=\inf\left\{\tau\ |\ \hat{B}_{+}(\tau)=\frac{B^{-}}{\rho}\right\},\quad\tau_{1}=\inf\left\{\tau\ |\ \hat{B}(\tau)\in\left\{\frac{B^{-}}{\rho},\frac{B^{+}}{ \rho}\right\}\right\}\] \[\tau_{1}^{-}=\inf\left\{\tau\ |\ \hat{B}^{-}(\tau)=\frac{B^{-}}{ \rho}\right\}\quad\text{and}\quad\tau_{2}^{-}=\inf\left\{\tau\ |\ \hat{B}_{-}(\tau)=\frac{B^{+}}{ \rho}\right\}.\] Note that each of the above functions and transition times, if finite, admit a closed-form expression, which can be obtained via Lemma B.4 and Corollary B.5 in the supplementary material. Having introduced these auxiliary functions and transition times, we can finally express a closed-form solution for \(B\) in terms of these processes. **Theorem 2.5**: _Let Assumption 2.4 hold. Then,_ \[B(\tau)=\begin{cases}\hat{B}^{-}(\tau)\mathbbm{1}_{\{\tau\leq\tau_{1}^{-}\}}+ \hat{B}_{-}(\tau-\tau_{1}^{-})\mathbbm{1}_{\{\tau_{1}^{-}<\tau\leq\tau_{1}^{-}+ \tau_{2}^{-}\}}+\hat{B}_{+}^{+}(\tau-(\tau_{1}^{-}+\tau_{2}^{-}))\mathbbm{1}_{\{ \tau_{1}^{-}+\tau_{2}^{-}<\tau\}},&\text{if }0\in Z_{-}\\ \hat{B}(\tau)\mathbbm{1}_{\{\tau\leq\tau_{1}\}}+\hat{B}_{-}^{-}(\tau-\tau_{1}) \mathbbm{1}_{\{\tau>\tau_{1},\rho B(\tau_{1})=B_{-}\}}+\hat{B}_{+}^{+}(\tau- \tau_{1})\mathbbm{1}_{\{\tau>\tau_{1},\rho B(\tau_{1})=B_{+}\}},&\text{if }0\in Z_{0}\\ \hat{B}^{+}(\tau)\mathbbm{1}_{\{\tau\leq\tau_{1}^{+}\}}+\hat{B}_{+}(\tau-\tau_ {1}^{+})\mathbbm{1}_{\{\tau_{1}^{+}<\tau\leq\tau_{1}^{+}+\tau_{2}^{+}\}}+B_{- }^{-}(\tau-(\tau_{1}^{+}+\tau_{2}^{+}))\mathbbm{1}_{\{\tau_{1}^{+}+\tau_{2}^{ -}<\tau\}},&\text{if }0\in Z_{+}\end{cases}\] _satisfies ODE (8) for \(0\leq\tau\leq T\).1_ Footnote 1: Using a similar separation with respect to the zones \(Z_{-}\), \(Z_{0}\), and \(Z_{+}\) and equation (B6), it is also possible to determine a closed-form expression for \(A\) from Lemma 2.1. ### Verification Theorem Combining Remark 2 with Theorem 2.5 immediately yields a closed-form expression for the candidate optimal portfolio process \(\pi^{*}\). It now just remains to prove a verification theorem which verifies that this candidate is indeed the optimal portfolio process corresponding to \((\mathbf{P})\). This proof requires an additional assumption on the constraints \(K=[\alpha,\beta]\), which ensures a certain boundedness of \(\pi^{*}(t)\) for \(t\) close to maturity \(T\) as well as two auxiliary lemmas. **Assumption 2.6**: \[\max\left\{\frac{b\rho}{\kappa}\alpha,\ \frac{b\rho}{\kappa}\beta \right\}\leq\frac{\kappa}{\sigma^{2}},\] (14) **Lemma 2.7**: _Let Assumptions 2.4 and 2.6 hold and let \(B\) be given as in Theorem 2.5. Then, the following inequality holds for all \(t\in[0,T]\)_ \[\frac{b\rho}{\sigma}\pi^{*}(t)+B(T-t)\leq\frac{\kappa}{\sigma^{2 }}.\] **Lemma 2.8**: _Let Assumptions 2.4 and 2.6 hold and let \(B\) be given as in Theorem 2.5. Then the following inequality holds for all \(t\in[0,T]\)_ \[\frac{1}{2}\frac{b}{1-b}\eta^{2}-\frac{1}{2}\frac{b}{1-b}\left( \lambda^{*}\left(B(T-t)\right)+\sigma\rho B(T-t)\right)^{2}-\frac{1}{2}b^{2} \rho^{2}\left(\pi^{*}(s)\right)^{2}\] \[+b\frac{\rho\kappa}{\sigma}\pi^{*}(t)+\frac{b}{1-b}\frac{\rho}{ \sigma}\left[\left(\lambda^{*}\right)^{\prime}\left(B(T-t)\right)+\sigma\rho \right]B^{\prime}(T-t)\quad<\frac{1}{2}\frac{\kappa^{2}}{\sigma^{2}}.\] **Theorem 2.9** (Verification Theorem in \(\mathcal{M}_{H}\)): _Consider the financial market \(\mathcal{M}_{H}\), let Assumptions 2.4 and 2.6 hold and let \(B\) be given as in Theorem 2.5. 
Then,_ \[\pi^{*}(t)=\begin{cases}\alpha,&\rho B(T-t)<B_{-}\\ \frac{\eta+\sigma\rho B(T-t)}{1-b},&B_{-}\leq\rho B(T-t)\leq B_{+}\\ \beta,&B_{+}<\rho B(T-t).\end{cases} \tag{15}\] _is optimal for \((\mathbf{P})\)._

### Comparison to Unconstrained Portfolio

Unsurprisingly, we can immediately recover the solution to the unconstrained optimisation problem, as discussed in Kraft (2005), from Lemma 2.2 and Lemma 2.3. **Corollary 2.10** (Closed-form Unconstrained Optimal Portfolio as in Kraft (2005)): Let \(K=\mathds{R}\) (i.e. \(\alpha=-\infty\), \(\beta=\infty\)) and \(B_{u}:[0,T]\to\mathds{R}\) with \(B_{u}(0)=0\) satisfy \[B_{u}^{\prime}(\tau)=-r_{0}+r_{1}B_{u}(\tau)+\frac{1}{2}r_{2}B_{u}(\tau)^{2}\qquad\forall\tau\in[0,T]. \tag{16}\] Then, \(\lambda^{*}(B)=0\) \(\forall B\in\mathds{R}\) and the candidate optimal portfolio \(\pi^{*}\) is given by \[\pi_{u}(t):=\pi^{*}(t)=\frac{1}{1-b}\left(\eta+\sigma\rho B_{u}(T-t)\right).\] If the market parameters satisfy (cf. Assumption 2.4) \[\frac{b}{1-b}\eta\Big{(}\frac{\kappa\rho}{\sigma}+\frac{\eta}{2}\Big{)}<\frac{\kappa^{2}}{2\sigma^{2}}, \tag{17}\] then \[B_{u}(\tau)=\frac{2r_{0}(e^{r_{3}\tau}-1)}{(r_{1}-r_{3})(e^{r_{3}\tau}-1)-2r_{3}} \tag{18}\] and the optimality of \(\pi_{u}\) for the unconstrained portfolio optimisation problem can be verified formally (see e.g. Theorem 5.3 in Kraft (2005)).1 Footnote 1: Equation (17) corresponds to part (i) of Assumption 2.4. In the setting of Kraft (2005), part (ii) of Assumption 2.4 is also implied by (17) and so does not have to be mentioned explicitly. On an abstract level, when adding (allocation) constraints \(K=[\alpha,\beta]\) to a portfolio optimisation problem, the optimal constrained portfolio \(\pi^{*}\) for (**P**) will be given by a projection \(\mathcal{P}_{K}:\Lambda\to\Lambda_{K}\) which maps the optimal unconstrained portfolio \(\pi_{u}\) onto \(\Lambda_{K}\), i.e. \[\pi^{*}=\pi_{u}+(\pi^{*}-\pi_{u})=:\mathcal{P}_{K}\left(\pi_{u}\right).\] In a Black-Scholes financial market \(\mathcal{M}_{BS}\) with constant market coefficients (i.e. \(\mathcal{M}_{H}\) with \(\sigma=\kappa=\rho=0\)), the optimal unconstrained portfolio is a constant-mix strategy \(\pi_{u}(t):=\pi_{M}=\frac{1}{1-b}\eta\), the so-called 'Merton portfolio'. Setting \(\sigma=\rho=0\) and \(B\equiv 0\) in Remark 2, one can easily see that the projection \(\mathcal{P}=\mathcal{P}^{BS}\) in the Black-Scholes market simply caps off \(\pi_{M}\) at the boundaries if \(\pi_{M}\notin K=[\alpha,\beta]\), i.e. \[\mathcal{P}^{BS}_{K}(\pi_{M})=\operatorname{Cap}\left(\pi_{M},\alpha,\beta\right)=\begin{cases}\alpha,&\pi_{M}<\alpha\\ \pi_{M},&\alpha\leq\pi_{M}\leq\beta\\ \beta,&\beta<\pi_{M}.\end{cases}\] Given a solution \(B\) to (8) and considering Remark 2, it initially appears that the optimal constrained portfolio \(\pi^{*}\) in \(\mathcal{M}_{H}\) can be obtained from the same projection. However, if \(K\neq\mathds{R}\), then \(B_{u}\) as in Corollary 2.10 and \(B\) as in Theorem 2.5 are solutions to possibly different ODEs. In particular, this implies that the portfolios \(\pi_{u}\) and \(\hat{\pi}^{*}\) may not be identical, in which case the projection \(\mathcal{P}^{H}_{K}\) for the Heston market does not necessarily coincide with the projection \(\mathcal{P}^{BS}_{K}\) for the Black-Scholes market either.
In other words, in a financial market with Heston stochastic volatility we in general have \[\pi^{*}=\mathcal{P}_{K}^{H}\left(\pi_{u}\right)=\text{Cap}\big{(}\pi_{u}+\underbrace {(\hat{\pi}^{*}-\pi_{u})}_{\neq 0},\alpha,\beta\big{)}\neq\text{Cap}\big{(}\pi_{u}, \alpha,\beta\big{)}=\mathcal{P}_{K}^{BS}(\pi_{u}).\] In the following, we render this observation more precise by providing both conditions under which \(\mathcal{P}_{K}^{H}=\mathcal{P}_{K}^{BS}\) and conditions under which \(\mathcal{P}_{K}^{H}\neq\mathcal{P}_{K}^{BS}\). The former case is true, whenever either \(\rho=0\) or \(\pi_{M}\in K\). **Lemma 2.11**: _Let \(\pi^{*}\) be as in Lemma 2.2, \(\hat{\pi}^{*}\) be as in Remark 2 and \(\pi_{u}\) as in Corollary 2.10. If either_ \[\rho=0\qquad\text{or}\qquad\pi_{M}\in K,\] _then_ \[\pi^{*}=\mathcal{P}_{K}^{H}\left(\pi_{u}\right)=\text{Cap}\left(\pi_{u},\alpha,\beta\right)=\mathcal{P}_{K}^{BS}\left(\pi_{u}\right).\] If \(\rho=0\), the stochasticity of the volatility is completely unhedgeable in \(\mathcal{M}_{H}\). As a consequence, the optimal unconstrained portfolio processes coincide in \(\mathcal{M}_{BS}\) and in \(\mathcal{M}_{H}\). Thus, the projections \(\mathcal{P}_{K}^{BS}\) and \(\mathcal{P}_{K}^{H}\) are identical if \(\rho=0\) too. In contrast, if \(\rho\neq 0\), then the projections can only be different if the underlying ODE solutions \(B_{u}\) and \(B\) are different, specifically when \(\pi_{u}\) and \(\hat{\pi}^{*}\) begin taking values inside \(K\). This is the case if and only if \(\pi_{u}\) and \(\hat{\pi}^{*}\) begin taking values inside \(K\) at different time points. This observation leads to an equivalent characterisation of when the projections \(\mathcal{P}_{K}^{BS}\) and \(\mathcal{P}_{K}^{H}\) are different. **Lemma 2.12**: _Let \(\pi^{*}\) be as in Lemma 2.2, \(\hat{\pi}^{*}\) be as in Remark 2 and \(\pi_{u}\) as in Corollary 2.10. The following statements are equivalent:_ 1. \[\pi^{*}=\mathcal{P}_{K}^{H}\left(\pi_{u},\alpha,\beta\right)\neq\mathcal{P}_{K} ^{BS}\left(\pi_{u},\alpha,\beta\right)=\text{Cap}(\pi_{u},\alpha,\beta)\] 2. \[\pi_{M}\notin[\alpha,\beta]\quad\text{and}\quad\exists t\in(0,T):\left|\left\{ \hat{\pi}^{*}(t),\pi_{u}(t)\right\}\cap(\alpha,\beta)\right|=1\] We can construct an extreme case which satisfies the requirements of Lemma 2.12 by choosing \(\alpha\) such that \(B(\tau)\) is constant and choosing the market parameters such that \(\pi_{u}\) changes sufficiently during the investment horizon to ensure that \(\pi_{u}(t^{*})\in(\alpha,\beta)\) for some \(t^{*}\in[0,T]\). **Corollary 2.13**: _Let \(\pi^{*}\) be as in Lemma 2.2, \(\hat{\pi}^{*}\) be as in Remark 2 and \(\pi_{u}\) as in Corollary 2.10. Let \(\text{sign}(x)\in\{-1,0,1\}\) denote the sign of \(x\in\mathbb{R}\)._ 1. _If_ \[0<\pi_{M}=\frac{\alpha}{2}<\alpha\quad\text{and}\quad\alpha<\pi_{u}(t^{*})< \beta\quad\text{for some }t^{*}\in[0,T],\] _then_ \(B(\tau)=\alpha\) _for all_ \(\tau\in[0,T]\)_,_ \(\pi^{*}(t)=\alpha\) _for all_ \(t\in[0,T]\) _and_ \(\pi^{*}=\mathcal{P}_{K}^{H}\left(\pi_{u},\alpha,\beta\right)\neq\mathcal{P}_{K }^{BS}\left(\pi_{u},\alpha,\beta\right)=\text{Cap}\left(\pi_{u},\alpha,\beta \right).\)__ 2. 
_If_ \[0<\pi_{M}<\frac{\alpha}{2}<\alpha\quad\text{and}\quad\alpha<\pi_{u}(t^{*})<\beta\quad\text{for some }t^{*}\in[0,T],\] _then_ \(B(\tau)=\alpha\) _for all_ \(\tau\in[0,T]\)_,_ \(\pi^{*}(t)=\alpha\) _for all_ \(t\in[0,T]\) _and_ \(\pi^{*}=\mathcal{P}_{K}^{H}\left(\pi_{u},\alpha,\beta\right)\neq\mathcal{P}_{K}^{BS}\left(\pi_{u},\alpha,\beta\right)=\text{Cap}\left(\pi_{u},\alpha,\beta\right).\)__

**Lemma 2.14**: _Let \(\pi^{*}\) be as in Lemma 2.2, \(\hat{\pi}^{*}\) be as in Remark 2 and \(\pi_{u}\) as in Corollary 2.10._ 2. _If_ \(\pi_{M}>\beta>0,\) _then_ \[\text{sign}\left(\frac{\partial}{\partial t}\hat{\pi}^{*}(t)\right)=\text{sign}\left(\frac{\partial}{\partial t}\pi_{u}(t)\right)=-\text{sign}(\rho b)\quad\forall t\in[0,T].\] _Hence, if in addition_ \(b<0\) _and_ \(\rho<0,\) _then_ \(\mathcal{P}_{K}^{H}=\mathcal{P}_{K}^{BS}.\)__

Clearly, the requirements on \(\alpha\) in Corollary 2.13, (i) are quite restrictive, but they still provide a valuable insight into when we can expect to see a large difference between the projections \(\mathcal{P}_{K}^{BS}\) and \(\mathcal{P}_{K}^{H}\). Namely, if

* the optimal unconstrained portfolio \(\pi_{u}\) violates the constraint at maturity (i.e. \(\pi_{u}(T)=\pi_{M}\notin K\)) and there is sufficient change in \(\pi_{u}(t)\) during the investment period such that \(\pi_{u}(t^{*})\in K\) for some \(t^{*}\in[0,T]\).
* the derivatives of \(B(\tau)\) and \(B_{u}(\tau)\) are considerably different while \(\pi_{u}\notin K\) (constant \(B\) being the extreme case).
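The situation described in the first bullet can be checked numerically from the closed-form unconstrained quantities alone: the sketch below evaluates \(\pi_{u}\) via (13) and (18) on a time grid and tests whether it enters the interior of \(K\) while \(\pi_{M}\) lies outside. The parameter values are illustrative assumptions chosen only so that this situation occurs; they are not a calibration from the paper.

```python
# Check whether pi_u enters the interior of K = [alpha, beta] while pi_M violates K,
# using the closed-form unconstrained solution (13) and (18).  Illustrative parameters.
import numpy as np

eta, kappa, sigma, rho, b, T = 1.0, 3.0, 0.7, -0.7, 0.5, 1.0
alpha, beta = 0.0, 1.9                                   # constraint K = [alpha, beta]

r0 = -b / (2 * (1 - b)) * eta**2
r1 = b / (1 - b) * eta * sigma * rho - kappa
r2 = sigma**2 * (1 + b / (1 - b) * rho**2)
r3 = np.sqrt(r1**2 + 2 * r0 * r2)                        # cf. (13)

def B_u(tau):
    e = np.exp(r3 * tau) - 1.0
    return 2 * r0 * e / ((r1 - r3) * e - 2 * r3)         # cf. (18)

t = np.linspace(0.0, T, 201)
pi_u = (eta + sigma * rho * B_u(T - t)) / (1 - b)        # optimal unconstrained portfolio
pi_M = eta / (1 - b)                                     # Merton portfolio

print("pi_M in K:", alpha <= pi_M <= beta)
print("pi_u enters the interior of K:", bool(np.any((pi_u > alpha) & (pi_u < beta))))
```

For these illustrative values the Merton portfolio lies just outside the constraint while \(\pi_{u}\) dips into its interior early in the investment period; whether the two projections actually differ then hinges on the additional comparison with \(\hat{\pi}^{*}\) required by Lemma 2.12.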
As a matter of fact, we will later see in the numerical experiments in Section 4 that it is sufficient if \(\alpha\approx 2\pi_{M}\) (i.e. \(B(\tau)\) is nearly constant) to cause a considerable difference between the two projections. As evidenced by the majority of empirical calibrations of Heston's stochastic volatility model to financial time series (see e.g. Escobar-Anel and Gschnaidtner (2016) for an overview), the parameter \(\rho\) is negative for most realistic applications. In the empirical study on risk preferences of mutual fund managers by Koijen (2014), it is reported that the risk aversion parameter \(b\) has a median of \(b=-1.43\) and a mean of \(b=-4.8.\) For more risk averse investors, such as insurance companies, reinsurance companies or pension funds, one can thus realistically assume negative values for \(b.\) Thus, for most realistic parameter configurations of \(\mathcal{M}_{H}\) with \(\pi_{M}>\beta,\) the projections \(\mathcal{P}_{K}^{H}\) and \(\mathcal{P}_{K}^{BS}\) coincide for investors with a high degree of risk aversion (i.e. for a low value of \(b\)). ## 3 Implications for Related Models We now consider a generalised version of the financial market \(\mathcal{M}_{H}\) with \(d\in N\) risky assets, \(d\) independent CIR processes as risk drivers and a generalised dependence of market price of risk and risky asset volatility on these risk drivers. From now on, let \(W^{z}\) and \(\dot{W}\) denote independent \(d\)-dimensional Wiener processes and consider parameters \(\kappa,\)\(\theta,\)\(\sigma,\)\(z_{0}\in(0,\infty)^{d}\) such that their components satisfy \(2\kappa_{i}\theta_{i}>\sigma_{i}^{2}\) for \(i=1,...,d.\) Then, we define the \(d-\)dimensional CIR process \(z=(z_{1},...,z_{d})^{\prime}\) through the dynamics \[dz_{i}(t)=\kappa\left(\theta_{i}-z_{i}(t)\right)dt+\sigma_{i}\sqrt{z_{i}(t)} dW_{i}^{z}(t).\] In the following, let \(1\in\mathbb{R}^{d}\) be the \(d\)-dimensional vector of ones, let \(x\odot y\) denote the element-wise multiplication of \(x,y\in\mathbb{R}^{d}\) and let \(\sqrt{x}\) denote the element-wise square root of \(x\in\mathbb{R}^{d}.\) Consider a given correlation vector \(\rho\in(-1,1)^{d},\) market price of risk \(\gamma:(0,\infty)^{d}\rightarrow\mathbb{R}^{d}\) and volatility \(\Sigma:(0,\infty)^{d}\rightarrow\mathbb{R}^{d\times d}\) such that \(\Sigma(z)\) is non-singular for all \(z\in(0,\infty)^{d}.\) Then, we define the financial market \(\mathcal{M}_{H}^{\gamma,\Sigma},\) consisting of one risk-free asset \(P_{0}\) and \(d\) risky assets \(P=(P_{1},\ldots,P_{d})^{\prime}\) with dynamics \[dP_{0}(t) =P_{0}(t)rdt\] \[dP(t) =P(t)\odot\big{[}\underbrace{(r1+\Sigma(z(t))\gamma(z(t)))}_{=: \mu(z(t))}dt+\Sigma(z(t))\underbrace{\left(\rho\cdot dW^{z}(t)+\sqrt{1-\rho \odot\rho}d\dot{W}(t)\right)}_{=:dW(t)}\big{]}.\] Clearly, we can recover the financial market \(\mathcal{M}_{H}\), as considered in Section 2, by assuming \(d=1\) and choosing \(\gamma(z)=\eta\sqrt{z}\) and \(\Sigma(z)=\sqrt{z}\) and the Black Scholes model \(\mathcal{M}_{BS}\) if both \(\gamma\) and \(\Sigma\) are constants. Similar, but slightly more general financial market models than \(\mathcal{M}_{H}^{\gamma,\Sigma}\) have been considered in Liu (2006) or Escobar-Anel _et al._ (2023), for example. 
In \(\mathcal{M}_{H}^{\gamma,\Sigma}\), the wealth process \(V^{v_{0},\pi}\) of an investor with initial wealth \(v_{0}\) who trades continuously in time with \(\mathds{R}^{d}\)-valued relative portfolio process \(\pi\), satisfies the SDE \[dV^{v_{0},\pi}(t) =V^{v_{0},\pi}(t)\left[\left(r+\gamma(z(t))^{\prime}\Sigma(z(t))^ {\prime}\pi(t)\right)dt+\pi(t)^{\prime}\Sigma(z(t))dW(t)\right]\] \[=V^{v_{0},\pi}(t)\left[\left(r+\left(\mu(z(t))-r1\right)^{\prime }\pi(t)\right)dt+\pi(t)^{\prime}\Sigma(z(t))dW(t)\right].\] In \(\mathcal{M}_{H}^{\gamma,\Sigma}\), the set of admissible portfolio processes naturally generalises to \[\Lambda^{\gamma,\Sigma}=\left\{\pi=\left(\pi(t)\right)_{t\in[0,T]}\text{ progr. measurable }\Big{|}\ \int_{0}^{T}\|\Sigma(z(t))^{\prime}\pi(t)\|^{2}dt<\infty\ Q-a.s.\right\}\] For a closed convex set with non-empty interior \(K\subset\bar{\mathds{R}}^{d}\), the portfolio optimisation problem (**P**) in \(\mathcal{M}_{H}^{\gamma,\Sigma}\) is then defined as \[(\textbf{P})\begin{cases}\Phi(v_{0})&=\sup\limits_{\pi\in\Lambda_{K}}\mathbb{ E}\left[U(V^{v_{0},\pi}(T))\right]\\ \Lambda_{K}&=\left\{\pi\in\Lambda^{\gamma,\Sigma}\ \big{|}\ \pi(t)\in K\ \mathcal{L}[0,T] \otimes Q-\text{a.e.}\right\}.\end{cases}\] In the following two sections, we investigate the solvability of (**P**) for given choices of \(\gamma\), \(\Sigma\), and \(K\). In Section 3.1, we consider the PCSV Model, as discussed in Escobar-Anel _et al._ (2017), and in Section 3.2, we consider inverse volatility constraints \(K\), which impose stronger restrictions on an investor's portfolio during periods of high volatility. ### PCSV Model We recover the PCSV ('Principal Component Stochastic Volatility') model \(\mathcal{M}^{PCSV}\), as proposed in Escobar-Anel _et al._ (2010), from \(\mathcal{M}_{H}^{\gamma,\Sigma}\) by considering an orthogonal matrix \(A=\left(a_{1},...,a_{d}\right)\in\mathds{R}^{d\times d}\) and defining market price of risk and volatility as \[\gamma(z)=\text{diag}(\sqrt{z})A^{\prime}\eta=\Sigma(z)^{\prime}\eta,\quad \Sigma(z)=A\text{diag}(\sqrt{z})\quad\forall z\in(0,\infty)^{d}, \tag{19}\] where \(\text{diag}(x)\in\mathds{R}^{d\times d}\) denotes the diagonal matrix with entries \(x\in\mathds{R}^{d}\), and \(\eta\in\mathds{R}^{d}\) is a constant. If \(A=I\), then \(\mathcal{M}^{PCSV}\) can be regarded as a canonical generalisation of Heston's stochastic volatility model \(\mathcal{M}_{H}\) for a \(d\)-dimensional asset universe, where each risky asset's volatility is determined as the square root of one of the \(d\) independent CIR processes \(z_{i}\). However, in its general form, the independent components of the \(d\)-dimensional CIR process \(z\) are not directly regarded as volatilities. Instead, the instantaneous covariance matrix of risky asset returns \[\Sigma(z(t))\Sigma(z(t))^{\prime}=A\text{diag}(z(t))A^{\prime}\] is decomposed into its principal components, i.e. the columns \(a_{i}\) of the matrix \(A\) represent its eigenvectors, and the independent CIR processes \(z_{i}\) represent their (stochastic) eigenvalues. This approach not only enables the modelling of stochastic covariances of asset returns because of additional degrees of freedom in \(A\), but also allows for an interpretation of \(z\) as hidden risk factors determining the volatility level in the financial market. 
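The following toy sketch illustrates the decomposition just described: the instantaneous covariance matrix is assembled from an orthogonal matrix \(A\) and the current factor levels \(z\), and, anticipating the exposure constraints discussed below, a portfolio's instantaneous variance splits into exposures to the individual factors. The numbers (a two-dimensional rotation matrix, factor levels, and portfolio weights) are arbitrary illustrative choices.

```python
# Toy construction of the PCSV instantaneous covariance and factor exposures
# (illustrative numbers only; A must be orthogonal as in (19)).
import numpy as np

phi = 0.3                                               # rotation angle (assumption)
A = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])             # orthogonal 2x2 matrix
z = np.array([0.20, 0.05])                              # current CIR factor levels

Sigma = A @ np.diag(np.sqrt(z))                         # Sigma(z) = A diag(sqrt(z))
cov = Sigma @ Sigma.T                                   # = A diag(z) A'
print("instantaneous covariance:\n", cov)

pi = np.array([0.6, 0.4])                               # example portfolio weights
exposures = (A.T @ pi) ** 2                             # (a_i' pi)^2 per factor z_i
inst_var = exposures @ z                                # sum_i (a_i' pi)^2 z_i
print("factor exposures:", exposures, " instantaneous portfolio variance:", inst_var)
```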
Moreover, Escobar-Anel _et al._ (2017) demonstrated that several stylised facts are captured by the PCSV, such as stochasticity of volatilities and correlation of risky asset returns, volatility and correlation leverage effect, volatility spillovers, and increasing correlation in periods of high market volatility. Let \(\Lambda^{PCSV}\) be the set of admissible portfolios in \(\mathcal{M}^{PCSV}\). Then, for any \(\pi\in\Lambda^{PCSV}\), the wealth process \(V^{v_{0},\pi}\) satisfies the SDE \[dV^{v_{0},\pi}(t)=V^{v_{0},\pi}(t)\left[\left(r+\eta^{\prime}A\mathrm{diag}(z(t))A^{\prime}\pi(t)\right)dt+\pi(t)^{\prime}A\mathrm{diag}(\sqrt{z(t)})dW(t)\right],\] for \(t\in[0,T]\). The instantaneous variance of \(V^{v_{0},\pi}(t)\) can therefore be decomposed into a weighted sum of the risk factors \(z\), since \[\|\mathrm{diag}(\sqrt{z(t)})A^{\prime}\pi(t)\|^{2}=\left\|\begin{pmatrix}a_{1}^{\prime}\pi(t)\sqrt{z_{1}(t)}\\ \vdots\\ a_{d}^{\prime}\pi(t)\sqrt{z_{d}(t)}\end{pmatrix}\right\|^{2}=\sum_{i=1}^{d}\left(a_{i}^{\prime}\pi(t)\right)^{2}z_{i}(t).\] In this sense, the portfolio weights determine a risk exposure \(\left(a_{i}^{\prime}\pi(t)\right)^{2}\) to the risk factor \(z_{i}\). Hence, it is very natural to impose risk limits on these exposures, i.e. for given upper bounds \(\beta_{1},...,\beta_{d}>0\) we require that \[\left(a_{i}^{\prime}\pi(t)\right)^{2}\leq\beta_{i}\quad\forall i=1,...,d\quad\Leftrightarrow\quad A^{\prime}\pi(t)\in\left[-\sqrt{\beta_{1}},\sqrt{\beta_{1}}\right]\times\cdots\times\left[-\sqrt{\beta_{d}},\sqrt{\beta_{d}}\right]\quad\Leftrightarrow\quad\pi(t)\in A\cdot\Big{(}\left[-\sqrt{\beta_{1}},\sqrt{\beta_{1}}\right]\times\cdots\times\left[-\sqrt{\beta_{d}},\sqrt{\beta_{d}}\right]\Big{)}.\]

### Inverse Volatility Constraints

We now consider allocation constraints which depend on the current level of the CIR process \(z\), i.e. constraints of the form \(\pi(t)\in K(z(t))\), where \(K:(0,\infty)\rightarrow\mathcal{B}(\mathbb{R})\) is a set-valued function, taking only closed-convex values in the Borel set \(\mathcal{B}(\mathbb{R})\). The motivation for such constraints is quite clear: Depending on the current state of the financial market \(\mathcal{M}_{H}^{\gamma,\Sigma}\), in particular the level of risky asset volatility \(\Sigma(z(t))\) and the market price of risk \(\gamma(z(t))\), investors may face different constraints on their portfolio, such as more relaxed bounds in periods of low volatility or stricter bounds in periods of high volatility. Further, in the spirit of mean-variance optimisation, we can think of an investor seeking an optimal portfolio allocation subject to constraints on his instantaneous portfolio volatility \[0\leq\pi(t)\Sigma(z(t))\leq\beta_{z}\quad\mathcal{L}[0,T]\otimes Q-a.e.\quad\Leftrightarrow\quad 0\leq\pi(t)\leq\frac{\beta_{z}}{\Sigma(z(t))}\quad\mathcal{L}[0,T]\otimes Q-a.e.,\] for a given volatility level \(\beta_{z}>0\).1 Footnote 1: Note that this is different from classic mean-variance optimisation, where the variance of the terminal portfolio wealth \(V^{v_{0},\pi}(T)\) is constrained.
Keeping this motivation in mind, we thus define the portfolio optimisation problem with volatility-dependent constraints (\(\mathbf{P^{z}}\)) as \[(\mathbf{P^{z}})=\begin{cases}\Phi^{z}(v_{0})&=\sup_{\pi\in\Lambda_{K(\cdot)} }\mathbb{E}\left[U(V^{v_{0},\pi}(T))\right]\\ \Lambda_{K(\cdot)}&=\left\{\pi\in\Lambda^{\gamma,\Sigma}\ \big{|}\ \pi(t)\in K(z(t))\ \mathcal{L}[0,T]\otimes Q-\text{a.e.} \right\}.\end{cases}\] In its most general form, the portfolio optimisation (\(\mathbf{P^{z}}\)) is highly non-trivial, since closed-form solutions for its optimal portfolio process \(\pi_{z}^{*}\) can rarely be determined for general \(\gamma\) and \(\Sigma\), even in the absence of (stochastic) allocation constraints. In particular, the portfolio optimisation (\(\mathbf{P}\)) is included as a special case in the definition of (\(\mathbf{P^{z}}\)). However, due to the results of Cvitanic and Karatzas (1992) for the Black-Scholes model \(\mathcal{M}^{BS}\) with constant volatility, as well as the results from Section 2 for Heston's stochastic volatility model \(\mathcal{M}^{H}\), we know of at least two different models in which solutions to (\(\mathbf{P}\)) (respectively (\(\mathbf{P^{z}}\))) with static constraints can be obtained in closed form. Using another change of control argument, we can therefore derive conditions on the market parameters \(\gamma\), \(\Sigma\) and the constraints \(K\), under which we can transform (\(\mathbf{P^{z}}\)) into an equivalent, solvable portfolio optimisation problem (\(\mathbf{P}\)) in either \(\mathcal{M}^{BS}\) or \(\mathcal{M}^{H}\). Consider the financial market \(\mathcal{M}_{H}^{\gamma,\Sigma}\) and the portfolio optimisation problem \((\mathbf{P^{z}})\). Consider constants \(-\infty\leq\alpha<\beta\leq\infty\). * If \(\gamma(z)=\eta\) for some \(\eta>0\) and \(K(z)=\frac{1}{\Sigma(z)}[\alpha_{z},\beta_{z}]\), then the portfolio process \[\pi_{z}^{*}(t)=\frac{1}{\Sigma\left(z(t)\right)}\text{Cap}(\pi_{M},\alpha,\beta)\] is optimal for \((\mathbf{P^{z}})\). * If \(\gamma(z)=\eta\sqrt{z(t)}\) for some constant \(\eta>0\), \(K(z)=\frac{\sqrt{z}}{\Sigma(z)}[\alpha,\beta]\), and Assumptions 2.4 and 2.6 are satisfied, then for \(\pi^{*}\) defined as in Theorem 2.9, the portfolio process \[\pi_{z}^{*}(t)=\frac{\sqrt{z(t)}}{\Sigma\left(z(t)\right)}\pi^{*}(t)\] is optimal for \((\mathbf{P^{z}})\). The statements of Theorem 3 can easily be easily generalised to financial markets with \(d>1\) risky assets by an analogous change of control argument. Using the results for constant volatility markets from Example 15.2 in Cvitanic and Karatzas (1992), one can prove a multi-dimensional analogue to statement (i) and using the results for the PCSV model from Section 3.1, one can prove a multi-dimensional analogue to statement (ii). For ease of presentation, we refrain from a detailed discussion of this generalisation. ## 4 Numerical Studies In this section, we illustrate the properties of the optimal portfolio \(\pi^{*}\) for (**P**) in Heston's stochastic volatility model \(\mathcal{M}_{H}\), using a numerical example. In particular, we analyse the difference between \(\pi^{*}\) and two suboptimal naive portfolio processes \(\pi\), which either directly follow the optimal portfolio process in \(\mathcal{M}_{BS}\) (i.e. \(\pi=\mathrm{Cap}(\pi_{M},\alpha,\beta)\)) or apply the projection \(\mathcal{P}_{K}^{BS}\) from \(\mathcal{M}_{BS}\) to the optimal unconstrained portfolio \(\pi_{u}\) in \(\mathcal{M}_{H}\) (i.e. \(\pi=\mathrm{Cap}(\pi_{u},\alpha,\beta)\)). 
The suboptimality of these portfolios will be quantified using the concept of wealth-equivalent loss. Such an analysis is only meaningful if the differences between a financial market with stochastic (Heston) volatility \(\mathcal{M}_{H}\) and a financial market with constant volatility \(\mathcal{M}_{BS}\) are already reflected in the optimal unconstrained portfolios \(\pi_{u}\) for \(\mathcal{M}_{H}\) and \(\pi_{M}\) for \(\mathcal{M}_{BS}\). Since allocation constraints further restrict the set of admissible portfolio allocations, any existing differences between \(\pi_{u}\) and \(\pi_{M}\) tend to be diminished further when adding allocation constraints. From an investor's perspective, the distinction between \(\mathcal{M}_{BS}\) and \(\mathcal{M}_{H}\) is only relevant if the volatility of risky asset log returns \(\sqrt{z(t)}\) changes significantly and these changes are partially hedgeable through trading in the risky asset. This is the case if the volatility of the volatility (\(\sigma\)) is large, the mean reversion speed (\(\kappa\)) is small, and the correlation between risky asset and volatility diffusion (\(\rho\)) is close to either \(1\) or \(-1\) (i.e. \(|\rho|\) is large). Based on these requirements, we choose the market parameters (see Table 1) for our numerical example such that the resulting market dynamics resemble a financial crisis. The only volatility parameters which influence the optimal portfolio allocation are \(\sigma\), \(\kappa\), and \(\rho\). Our choices for these parameters in Table 1, follow the calibration results of Moyaert and Petitjean (2011), who calibrated Heston's stochastic volatility model using option prices on the Eurostox 50 during the 2008 financial crisis. The relatively short investment horizon of \(T=1\) year is chosen to reflect the limited duration of most financial crises.1 Footnote 1: Q.ai (2022) reported that the average length of an S&P500 bear market (defined as a period with drawdown in excess of 20%) was 289 days. We quantify the sub-optimality of both naive portfolio processes in comparison to the optimal constrained portfolio process using the concept of wealth-equivalent loss ('WEL'). For an arbitrary \begin{table} \begin{tabular}{l l||c|l} **Parameter** & & **Value** & **Explanation** \\ \hline End of Investment-Horizon & \(T\) & \(1\) & Limited duration of financial crises \\ Risk Aversion Parameter & \(b\) & \(-2.5\) & Within ranges estimated in Table 1, Koijen (2014) \\ Initial Wealth & \(v_{0}\) & \(1\) & For convenience \\ Risk-Free Interest Rate & \(r\) & \(0\) & For convenience \\ Market Price of Risk Driver & \(\eta\) & \(3.0071\) & Table 2, Cheng and Escobar-Anel (2021) \\ Mean Reversion Speed & \(\kappa\) & \(3.15\) & Table 3 \%MSE’, Moyaert and Petitjean (2011) \\ Volatility of Volatility & \(\sigma\) & \(0.76\) & Table 3 \%MSE’, Moyaert and Petitjean (2011) \\ Correlation & \(\rho\) & \(-0.81\) & Table 3 \%MSE’, Moyaert and Petitjean (2011) \\ Long-Term Mean & \(\theta\) & \(0.35\) & Feller’s Condition \\ Initial Variance & \(z_{0}\) & \(0.35\) & Chosen equal to \(\theta\) \\ \end{tabular} \end{table} Table 1: Base parameters for the financial market \(\mathcal{M}_{H}\). portfolio process \(\pi\in\Lambda_{K}\), we define the expected utility functional \(J^{\pi}:[0,T]\times(0,\infty)\times(0,\infty)\rightarrow\mathbb{R}\) as \[J^{\pi}(t,v,z)=\mathbb{E}\left[U\left(V^{\text{\tiny{tp}},\pi}(T) \right)\ \middle|\ V^{\text{\tiny{tp}},\pi}(t)=v,\ z(t)=z\right]. 
\tag{20}\] When considering the optimal portfolio process \(\pi^{*}\), the expected utility functional coincides with the value function of (**P**), i.e. \(J^{\pi^{*}}(t,v,z)=\Phi(t,v,z)\) for all \((t,v,z)\in[0,T]\times(0,\infty)\times(0,\infty)\). The WEL \(L^{\pi}=L^{\pi}(t,z)\) of \(\pi\) is then defined as the solution to the equation1 Footnote 1: Since we exclusively work with power utility functions in this paper, we may without loss of generality assume that the WEL is independent of wealth. \[\Phi(t,v(1-L^{\pi}(t,z)),z)=J^{\pi}(t,v,z).\] An investor following the optimal portfolio allocation \(\pi^{*}\) only needs \((1-L^{\pi}(t,z))\) times as much capital to achieve the same average utility as an investor following the sub-optimal strategy \(\pi\). In this sense, \(L^{\pi}(t,z)\) can be interpreted as a relative loss incurred for investing sub-optimally.2 Footnote 2: If \(\pi\) is deterministic and \(J^{\pi}\) is the unique solution to the Feynman-Kac PDE, one can use an exponentially affine ansatz to characterise \(J^{\pi}\) in terms of the solutions to a system of ODEs. If the ODE solutions are given, then the WEL \(L^{\pi}(0,z_{0})\) is known in closed form. We provide a description of this approach in Lemma B.6 and Corollary B.7 in the supplementary material. In our studies, we approximated the corresponding ODE solutions by an Euler method. ### Optimal Constrained Merton Portfolio \(\pi=\text{Cap}\left(\pi_{M},\alpha,\beta\right)\) In this subsection, we compare \(\pi^{*}\) with the first naive portfolio process \(\pi=\text{Cap}(\pi_{M},\alpha,\beta)\). Although \(\pi\) is static, it is at least known that \(\pi\) is optimal for an allocation-constrained portfolio optimisation problem in the Black-Scholes market \(\mathcal{M}_{BS}\), whereas no theoretical guarantees were available for the corresponding optimisation problem in \(\mathcal{M}_{H}\) prior to this work. In this analysis, we consider the allocation constraint \(K=[\alpha,\beta]=[0,1]\), which corresponds to a no-borrowing constraint that prevents short-selling in the risk-free and risky asset. Assuming the parameters in Table 1, the Merton portfolio satisfies the constraints, i.e. \(\pi_{M}\in[\alpha,\beta]\) and thus \(\text{Cap}(\pi_{M},\alpha,\beta)=\pi_{M}\). In contrast to this constant allocation in the interior of \([\alpha,\beta]\), \(\pi^{*}\) initially takes a constant value at the upper bound \(\beta\) and later decreases towards \(\pi_{M}\) at the end of the investment horizon. Therefore, unlike \(\text{Cap}(\pi_{M},\alpha,\beta)\), \(\pi^{*}\) is able to realise and benefit from a higher allocation to the risky asset. Note that \(\pi_{M}\in[\alpha,\beta]\) implies \(\pi^{*}=\text{Cap}(\pi_{u},\alpha,\beta)\), as shown in Lemma 2.11. Figure 1: Portfolio weights \(\pi(t)\) for \(t\in[0,T]\), lower bound \(\alpha=0\), upper bound \(\beta=1\), and parameters as in Table 1. In the following, we quantify the impact of the suboptimal allocation \(\pi=\text{Cap}(\pi_{M},\alpha,\beta)\) by computing the annual WEL \(L^{\pi}(0,z)\) at the beginning of the investment horizon and analyse its sensitivity with respect to the risk-aversion parameter \(b\) and the volatility drivers \(\sigma\), \(\kappa\) and \(\rho\). The ranges of the volatility parameters are chosen to be within the minimum and maximum parameter values obtained in individual calibrations in Table 5 of Moyaert and Petitjean (2011).
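Since the value function under power utility \(U(v)=v^{b}/b\) is homogeneous of degree \(b\) in wealth, the WEL can be recovered from two expected-utility estimates as \(L^{\pi}=1-(J^{\pi}/\Phi)^{1/b}\). The following sketch estimates the WEL by Monte Carlo from simulated terminal wealths; note that the paper itself evaluates the WEL via the ODE-based closed form mentioned in the footnote above, so this is only an illustrative alternative under our own assumptions.

```python
import numpy as np

def power_utility(v, b):
    return v ** b / b

def wealth_equivalent_loss(v_opt, v_sub, b):
    """Monte Carlo estimate of the WEL L solving Phi(v(1-L)) = J^pi(v).

    With U(v) = v^b / b, expected utility is homogeneous of degree b in initial
    wealth, hence (1 - L)^b * Phi = J^pi and therefore L = 1 - (J^pi / Phi)^(1/b).
    """
    phi = np.mean(power_utility(v_opt, b))    # estimate of the optimal value
    j_pi = np.mean(power_utility(v_sub, b))   # estimate for the suboptimal strategy
    return 1.0 - (j_pi / phi) ** (1.0 / b)

# Usage with terminal wealth samples from any two strategies, e.g. arrays produced
# by the simulation sketch shown earlier in this section (placeholder names):
# wel = wealth_equivalent_loss(V_optimal, V_naive, b=-2.5)
```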
For small values of \(b\), where the allocation constraint \(K=[0,1]\) is largely satisfied by the unconstrained portfolios \(\pi_{M}\) and \(\pi_{u}\), the WELs displayed in Figure 2 are increasing in \(b\). However, as \(b\) increases past an inflection point of approximately \(b=-3\), the allocation constraint \(K\) becomes active. From this point onwards, \(K\) forces \(\pi^{*}\) and \(\text{Cap}(\pi_{M},\alpha,\beta)\) closer towards each other for increasing \(b\) and therefore leads to decreasing WELs. Ultimately, for \(b\geq-2\), we have \(\pi^{*}(t)=\text{Cap}(\pi_{M},\alpha,\beta)=\beta=1\) for all \(t\in[0,T]\) and thus the WEL is zero. The remaining panels of Figure 2 display WELs which are increasing in \(\sigma\) as well as decreasing in \(\kappa\) and \(\rho\). This confirms the intuition voiced at the beginning of Section 4, in which we argued that \(\mathcal{M}_{H}\) is 'more different' to \(\mathcal{M}_{BS}\) under these circumstances. Within the chosen parameter ranges, we observe the largest annual WEL of \(3.2\%\) for small \(\kappa\), whereas increasing \(\sigma\) and decreasing \(\rho\) lead to WELs of \(3.0\%\) and \(2.5\%\), respectively. Changing any one of the volatility parameters \(\sigma\), \(\kappa\) or \(\rho\) to less extreme levels, as obtained from calibrations on long-term data sets, leads to significant decreases in annual WELs. Even if only \(\sigma\) is reduced below \(0.5\), while \(\kappa\) and \(\rho\) remain at crisis level, the annual WEL still drops to values of \(0.75\%\) or lower. Note that Feller's condition is satisfied for all parameter values that were considered in our analysis. ### Capped Optimal Unconstrained Heston Portfolio \(\pi=\text{Cap}\left(\pi_{u},\alpha,\beta\right)\) In this subsection, we compare \(\pi^{*}\) to the second naive portfolio process, the capped optimal unconstrained Heston portfolio \(\text{Cap}\left(\pi_{u},\alpha,\beta\right).\) In particular, we would like to illustrate the phenomenon described in Section 2.3. Despite having a theoretical guarantee that \(\text{Cap}\left(\pi_{u},\alpha,\beta\right)\) is indeed different from \(\pi^{*}\) for certain parameter settings, these differences appear to be mostly meaningful in terms of WEL for extreme market scenarios in combination with specific, large lower bounds \(\alpha\). According to Lemma 2.12 and Corollary 2.13, we know that \(\text{Cap}\left(\pi_{u},\alpha,\beta\right)\) and \(\pi^{*}\) are identical in the parameter setting of Table 1, unless \(\pi_{M}<\alpha.\) Further, Corollary 2.13 suggests that we should consider market parameters which ensure \(\pi_{u}(t)>2\pi_{M}\) for some \(t\in[0,T].\) For these reasons, we adjust the previously considered parameter setting. In the following, we choose the most extreme volatility parameters from the sensitivity analysis in Figure 2, i.e. we set \(\sigma=1.0\), \(\kappa=1.5\) and \(\rho=-0.9\), and increase the risk aversion by setting \(b=-15\) to obtain realistic portfolio allocations. Figure 3 compares the portfolio weights of \(\pi^{*}(t)\) and \(\text{Cap}(\pi_{u}(t),\alpha,\beta)\) for lower bounds \(\alpha\in\{1.75\pi_{M},2\pi_{M}\}\) such that the corresponding ODE solution \(B\) is (nearly) constant, as described in Corollary 2.13.
If \(\alpha=1.75\pi_{M}\) then \(\pi^{*}\) is initially larger than the lower bound \(\alpha\), then decreases until \(\alpha\) is reached, while \(\pi^{*}\) is constant for \(\alpha=2\pi_{M}.\) Additionally, we observe that \[\pi^{*}(t)\leq\text{Cap}(\pi_{u}(t),\alpha,\beta)\quad\forall t\in[0,T],\] with equality only if \(\pi_{u}(t)\leq\alpha.\) In both cases illustrated in Figure 3, \(\pi^{*}\) lowers the portfolio allocation early in the investment horizon, thus accounting for the fact that the lower bound forces the portfolio allocation to be larger than the optimal unconstrained allocation later during the investment horizon. Figure 4(a) illustrates the behaviour of \(\pi^{*}\) for varying \(\alpha\). When increasing \(\alpha\), we observe that \(\pi^{*}\) decreases to the lower bound at earlier time points \(t\). Despite \(\pi^{*}\) being constant for \(\alpha=1.95\pi_{M}\), the solution \(B\) to the ODE (12) is not stationary, but \(\rho B(\tau)\) does not leave zone \(Z_{-}\) for \(\tau\leq T\). Thus, we expect \(\pi^{*}\) to be constant for all \(\alpha\in[1.95\pi_{M},2\pi_{M}]\) in our parameter setting. Figure 4(b) displays WELs \(L^{\pi}(t,z_{0})\) of \(\pi=\mathrm{Cap}\left(\pi_{u},\alpha,\beta\right)\), which are increasing in \(\alpha\) at \(t=0\). However, this monotonicity does not hold throughout the entire investment horizon, as increasing the lower bound \(\alpha\) implies that \(\pi^{*}\) and \(\mathrm{Cap}(\pi_{u},\alpha,\beta)\) coincide for longer parts of the investment horizon. Clearly, Figures 3 and 4 suggest a strong link between the value of \(\alpha\) and the difference between \(\pi^{*}\) and \(\mathrm{Cap}(\pi_{u},\alpha,\beta)\). Therefore, we quantify this difference not only using WELs, but additionally define the maximum absolute weight difference between \(\pi^{*}\) and a portfolio \(\pi\) as \[\Delta_{\max}^{\pi}:=\max_{t\in[0,T]}\Big{|}\pi(t)-\pi^{*}(t)\Big{|}. \tag{21}\] The relationship between the lower bound \(\alpha\) and both the maximum absolute difference \(\Delta_{\max}^{\pi}\) and the annual WEL \(L^{\pi}(0,z_{0})\) is analysed in Figure 5. For \(\pi=\mathrm{Cap}(\pi_{u},\alpha,\beta)\), the maximum absolute difference \(\Delta_{\max}^{\pi}\) is generally increasing in \(\alpha\), except for large lower bounds \(\alpha\). For large \(\alpha\), the optimal constrained portfolio \(\pi^{*}\) is constant throughout the investment horizon, as illustrated in Figure 4(a). Then, the monotonicity of \(\pi_{u}\) (see Remark 1) and \(\pi^{*}(T)=\alpha=\mathrm{Cap}(\pi_{u}(T),\alpha,\beta)\) ensure that the maximum in (21) is attained at \(t=0\). Further, \(\pi_{u}(0)>2\pi_{M}\geq\alpha\) does not depend on \(\alpha\). Therefore, if \(\pi^{*}(t)=\alpha\) for all \(t\in[0,T]\), then the difference \[\Delta_{\max}^{\mathrm{Cap}(\pi_{u},\alpha,\beta)}=\max_{t\in[0,T]}\left|\mathrm{Cap}(\pi_{u}(t),\alpha,\beta)-\pi^{*}(t)\right|=\left|\pi_{u}(0)-\alpha\right|=\pi_{u}(0)-\alpha\] decreases linearly in \(\alpha\), which causes a slight kink in Figure 5(a) for large lower bounds \(\alpha\). Irrespective of this, the annual WEL for \(\pi=\mathrm{Cap}(\pi_{u},\alpha,\beta)\) is increasing in \(\alpha\). However, note that for all but very large lower bounds (e.g. \(\alpha\geq 1.75\pi_{M}\)), the annual WEL is still negligible. ## 5 Conclusion In this paper, we considered a portfolio optimisation problem with allocation constraints in Heston's stochastic volatility model.
We derived an explicit expression for the optimal portfolio and analysed its properties. Surprisingly, this portfolio can differ from the naive constrained portfolio which caps off the optimal unconstrained portfolio at the boundaries of the constraint. In light of this fact, we have shown that the addition of allocation constraints can have a fundamentally different impact on the optimal portfolio in markets with stochastic volatility as compared to a Black-Scholes market with constant volatility - even in financial markets with only one risky asset. Despite these theoretical findings, we observed in a numerical study that the annual wealth-equivalent loss incurred by trading according to this naive portfolio is relatively small in the majority of realistic scenarios. In this sense, the naive 'capped' portfolio is nearly optimal for most applications. However, in turbulent financial markets, such as the financial crisis of 2008, investors with a high degree of risk aversion and a lower bound on their portfolio allocation can suffer high wealth-equivalent losses. For such scenarios, investors should be mindful of the difference between the optimal constrained portfolio and the naive capped portfolio.
2310.05813
Audio compression-assisted feature extraction for voice replay attack detection
Replay attack is one of the most effective and simplest voice spoofing attacks. Detecting replay attacks is challenging, according to the Automatic Speaker Verification Spoofing and Countermeasures Challenge 2021 (ASVspoof 2021), because they involve a loudspeaker, a microphone, and acoustic conditions (e.g., background noise). One obstacle to detecting replay attacks is finding robust feature representations that reflect the channel noise information added to the replayed speech. This study proposes a feature extraction approach that uses audio compression for assistance. Audio compression compresses audio to preserve content and speaker information for transmission. The missed information after decompression is expected to contain content- and speaker-independent information (e.g., channel noise added during the replay process). We conducted a comprehensive experiment with a few data augmentation techniques and 3 classifiers on the ASVspoof 2021 physical access (PA) set and confirmed the effectiveness of the proposed feature extraction approach. To the best of our knowledge, the proposed approach achieves the lowest EER at 22.71% on the ASVspoof 2021 PA evaluation set.
Xiangyu Shi, Yuhao Luo, Li Wang, Haorui He, Hao Li, Lei Wang, Zhizheng Wu
2023-10-09T15:53:42Z
http://arxiv.org/abs/2310.05813v2
# Audio Compression-Assisted Feature Extraction for Voice Replay Attack Detection ###### Abstract Replay attack is one of the most effective and simplest voice spoofing attacks. Detecting replay attacks is challenging, according to the Automatic Speaker Verification Spoofing and Countermeasures Challenge 2021 (ASVspoof 2021), because they involve a loudspeaker, a microphone, and acoustic conditions (e.g., background noise). One obstacle to detecting replay attacks is finding robust feature representations that reflect the channel noise information added to the replayed speech. This study proposes a feature extraction approach that uses audio compression for assistance. Audio compression compresses audio to preserve content and speaker information for transmission. The missed information after decompression is expected to contain content- and speaker-independent information (e.g., channel noise added during the replay process). We conducted a comprehensive experiment with 3 classifiers on the ASVspoof 2021 physical access (PA) set and confirmed the effectiveness of the proposed feature extraction approach. _To the best of our knowledge, the proposed approach achieves the lowest EER at 22.71% on the ASVspoof 2021 FA evaluation set._ Xiangyu Shi\({}^{1}\)1, Yuhao Luo\({}^{1}\)1, Li Wang\({}^{1}\), Haorui He\({}^{1}\), Hao Li\({}^{2}\), Lei Wang\({}^{3}\), Zhizheng Wu\({}^{1}\)\({}^{1}\)School of Data Science, Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), China \({}^{2}\)Huawei Technology, China \({}^{3}\)Huawei International, Singapore Physical Access, Voice Spoofing Detection, Replay Attack, Data Augmentation Footnote 1: Equal Contribution ## 1 Introduction A replay attack is when a prerecorded speech sample is used to deceive a speaker verification system. This type of attack can be carried out with just recording and playback devices, and does not require expert knowledge. Studies show that the success rate of these attacks increases when the recording and playback devices produce more accurate reproductions of the original audio, making them sound more like a human voice. There is increasing research interest in detecting replay attacks. The benchmark dataset for replay attacks was released during the ASVspoof competition, which took place in 2017 [1], 2019 [2], and 2021 [3], spanning three editions of the competition. Currently, most research articles utilize the 2019 or 2021 dataset as the benchmark. It is worth noting that the 2021 edition provided only a test set, while the training set used the dataset from 2019. Replay attack detection involves identifying channel noise that is produced during recording and playback. Current methods focus on extracting features and performing binary classification to determine whether the audio is genuine or spoofed. Researchers have found that audio features based on the Constant-Q Transform (CQT) are effective in detecting spoofing. Methods like CQCCE [4] and CDOC [5] have both been shown to be effective in this regard. Hidden artifacts in audio, such as silent segments and spectral defects, are crucial for detection [6]. Channel noise from loudspeakers is used to learn patterns and detect spoofing in low-frequency signals [7]. However, binary classification models face domain mismatch issues due to device diversity. In [8], a generative model is used to only model bonafide samples for better discriminative ability. 
Additionally, a WORLD vocoder is used to remove speech components and improve the detection of spoofed audio. According to ASVspoof 2021, the best system achieved 24.25% in the challenge for the PA task. This suggests that existing feature representations are still not robust enough to detect replay attacks. Since a replay attack involves a loudspeaker, a microphone, and acoustic conditions such as background noise, the variations of devices (i.e. loudspeaker and microphone) and acoustic conditions can create unlimited versions of replayed speech. _One obstacle to detecting replay attacks is finding robust feature representations that reflect the channel noise information added to the replayed speech_. The robust feature representation is expected to be independent of content and speaker information. This study continues the quest for a robust feature representation and proposes a feature extraction approach that uses audio compression for assistance. Audio compression compresses audio to preserve content and speaker information for transmission. _The information missed after decompression is expected to contain content- and speaker-independent information (e.g., channel noise added during the replay process)_. With the assistance of audio compression, we also explore the effect of different kinds of data augmentation techniques and compare their performance with three different one-class classifiers, namely variational autoencoder (VAE), one-class support vector machine (SVM), and AnoGAN - a deep convolutional generative adversarial network. ## 2 Feature Extraction with the Assistance of Audio Compression In this study, we propose a feature extraction approach that uses audio compression for assistance. The overall framework with the proposed feature extraction approach is presented in Fig. 1. We will describe the proposed framework, data augmentation methods and classifiers in this section. Data augmentation has improved the performance of many speech-related tasks. We examine the effectiveness of data augmentation along with the proposed framework. ### Audio Codec/Compression In this study, we investigate the use of audio codecs for feature extraction. Audio codecs have traditionally been used for audio compression and decompression. Our work uses a similar approach to that of [9], which uses a WORLD vocoder [10] to assist with feature extraction by computing the differences between the vocoded and original waveforms as feature presentations. We argue that _incorporating an audio codec could better preserve the most essential speech content information, with the difference between the original waveform and the decompressed audio reflecting content- and speaker-independent information more accurately_. **This property makes it more suitable for replay detection**. To achieve our goal, we use the Opus Codec1 in this work. The Opus Codec is well-known for its adaptability and is widely used in real-time communication applications. For feature extraction, we utilize the Opus encoder to compress the audio into packets at a specific bitrate. Then, we use the Opus decoder to unpack these packets and restore the audio to its original form. When the bitrate is lower than that of the original audio, it is expected that the Opus-processed audio will retain the complete vocal information while effectively reducing most of the channel noise. As a result, we can distinguish between authentic and spoofed audio by comparing the original and the Opus-processed audio. 
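The following is a minimal sketch of the codec-difference idea, assuming ffmpeg with libopus support is available for the Opus round trip; the file handling, the use of ffmpeg, and the STFT settings are our own illustrative choices, and the WORLD resynthesis step of the full pipeline (described in the steps below) is omitted.

```python
import subprocess
import numpy as np
import librosa

def opus_roundtrip(wav_path: str, bitrate: str = "16k", sr: int = 16000) -> str:
    """Compress a waveform with Opus at the given bitrate and decode it back to WAV."""
    opus_path, out_path = wav_path + ".opus", wav_path + ".opus.wav"
    subprocess.run(["ffmpeg", "-y", "-i", wav_path, "-c:a", "libopus",
                    "-b:a", bitrate, opus_path], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", opus_path, "-ar", str(sr), "-ac", "1",
                    out_path], check=True)
    return out_path

def codec_difference_feature(wav_path: str, bitrate: str = "16k", sr: int = 16000) -> np.ndarray:
    """Temporal-averaged log-spectrogram difference between original and Opus-processed audio."""
    decoded_path = opus_roundtrip(wav_path, bitrate, sr)
    orig, _ = librosa.load(wav_path, sr=sr)
    proc, _ = librosa.load(decoded_path, sr=sr)
    n = min(len(orig), len(proc))                      # align lengths after the codec round trip

    def log_spec(y):
        # 50 ms frames with 25 ms shift at 16 kHz, 1024 FFT bins (cf. Section 3.2)
        s = np.abs(librosa.stft(y, n_fft=1024, win_length=800, hop_length=400))
        return np.log(s + 1e-8)

    return log_spec(orig[:n]).mean(axis=1) - log_spec(proc[:n]).mean(axis=1)

# feat = codec_difference_feature("example.wav", bitrate="16k")
```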
In practice, we compare the original audio against PyWorld-and-Opus-processed audio, i.e. audio that has first been resynthesized with the WORLD vocoder and then passed through the Opus codec, rather than against Opus-processed audio alone. Footnote 1: [https://opus-codec.org/](https://opus-codec.org/) More specifically, we follow these steps for feature extraction with the Opus codec: * First, use the WORLD vocoder to resynthesize the audio and use the Opus encoder to compress the audio into packets. * Next, employ the Opus decoder to decompress the packets back into audio, resulting in an Opus-preprocessed audio file. * After that, transform both the original audio and the Opus-preprocessed audio into spectrograms. * Finally, subtract the temporal-averaged spectrograms from each other to obtain the feature representation. ### Data Augmentation Data augmentation has improved the performance of many speech-related tasks. In this work, we also study the impact of data augmentation with the proposed feature extraction. The investigated data augmentation approaches are * **SpecAugment**: SpecAugment [11] was proposed for automatic speech recognition and applies masking to a spectrogram for data augmentation and robustness. In this study, we investigate _frequency masking_ and _time masking_. Frequency masking applies masking to certain frequency bands, and time masking removes or attenuates audio signals in specific time intervals to create gaps or silences in the waveform. * **AddNoise**: To add real office noise to audio. The noise used in this study contains the voices of multiple people talking. Noise is extracted from the VOiCES dataset [12]. * **AddReverb**: To simulate reverberation in a meeting room scenario. More specifically, a Room Impulse Response (RIR) is convolved with the original audio. RIR data are from the VOiCES dataset [12]. * **AdjustSpeed**: We apply speed perturbation to randomly slow down or speed up audio by random factors. Speed perturbation factors are set to \([0.9,1.1]\). * **Pre-emphasis & De-emphasis**: Manipulate audio to enhance the high-frequency components in audio. This is motivated by the findings presented in [7], which highlight that high-frequency components tend to contain more forged information, thereby benefiting replay attack detection. ### Classifier We also explore the performance of three one-class classifiers for detection with the proposed feature extraction. The classifiers are a variational autoencoder (VAE), a one-class support vector machine (SVM), and AnoGAN, a deep convolutional generative adversarial network. #### 2.3.1 Variational Auto-Encoder (VAE) The Variational Auto-encoder (VAE) [13] can learn a low-dimensional representation of input data and is often used in anomaly detection. The latent space, formed by the representation of input data, serves as a compressed representation that captures essential features of the input. The VAE is trained by minimizing a reconstruction loss, which measures the dissimilarity between the original input and the reconstructed output. The reconstruction probability is used as the anomaly score. **Implementation details:** In this study, five linear layers are employed for the encoder and the decoder, respectively. ReLU is employed as the activation function. The size of the latent space is \(2\), and the number of samples in the latent space is \(10\). #### 2.3.2 One-Class Support Vector Machine (OCSVM) The One-Class Support Vector Machine (OCSVM) was first introduced by Scholkopf et al. [14]. It allows an SVM to be trained with only genuine data. Figure 1: The overall framework of the replay detection system with the assistance of audio compression (audio codec) for feature extraction: the WORLD vocoder synthesizes the input audio, the Opus codec compresses and then decompresses the synthesized audio, and the original and re-synthesized audio are subtracted in Mel-spectrogram form before classification.
OCSVM tries to separate all data points from the origin in a high-dimensional feature space using a hyperplane, maximising the margin between the data and the origin. Points lying below the hyperplane and closer to the origin are viewed as outliers. **Implementation details:** We adopt the OCSVM implementation in scikit-learn2 and keep the default configuration (kernel=RBF, \(\gamma\)=scale, tol=1e-3, \(\nu\)=0.5). Note that \(\nu\) is an upper bound on the fraction of training errors and a lower bound on the fraction of support vectors. Footnote 2: [https://scikit-learn.org](https://scikit-learn.org) #### 2.3.3 AnoGAN: Anomaly GAN AnoGAN [15] is a deep convolutional generative adversarial network that was proposed for anomaly detection. AnoGAN uses two different loss functions: a weighted average of a residual loss and a discrimination loss. The residual loss measures the difference between the real feature \(\mathbf{x}\) and the generated feature \(G(\mathbf{z})\), where \(\mathbf{z}\) is a random input to the generator. **Implementation details:** The generator consists of 5 transposed convolutional layers. The kernel size, stride and padding are the same for all the layers, which are 4, 2 and 1, respectively. Two linear layers are attached to the input and the output, respectively, to match the feature shapes of the generated and real data. The discriminator consists of 5 convolutional layers. The kernel size, stride and padding are the same for all the convolutional layers, which are 4, 2 and 1, respectively. The weight coefficients of the residual loss and the discrimination loss are equal, both being 0.5. ## 3 Experiments ### Dataset and Evaluation Metrics We conduct our experiments on the ASVspoof 2019 [16] and ASVspoof 2021 [17] datasets. The classifiers were trained on the training set and the development set of the ASVspoof 2019 PA dataset, which are created through software simulations. The progress set and the evaluation set of the ASVspoof 2021 PA dataset, recorded in real physical spaces, are used for evaluation. The progress set and the evaluation set are utterance-disjoint, and some recording setting factors are reserved exclusively for the evaluation set [17]. We employ the Equal Error Rate (EER) to evaluate the performance of the classifiers. The output score of a classifier indicates the level of confidence in classifying the audio as bonafide speech. To calculate the EER, a threshold is determined based on the output score such that the probability of missing a true positive equals the probability of a false alarm. A lower EER indicates better classification performance. ### Model and Feature Configurations The sampling rate of the data is 16 kHz. To extract the replay channel response estimation features of [9], we set the number of FFT bins to 1024, the frame length to 50 ms and the frame shift to 25 ms. The first 512 dimensions were kept as features. The utterance-level feature is obtained by simple temporal averaging, and PCA is applied to reduce the feature dimension while preserving 98% of the energy.
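As an illustration of the scoring and evaluation pipeline, the sketch below fits the scikit-learn OCSVM with the configuration listed above on bonafide features only and computes the EER from the resulting scores; the feature arrays and the label convention are placeholders for whichever front-end is used.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics import roc_curve

def train_ocsvm(bonafide_features: np.ndarray) -> OneClassSVM:
    """Fit a one-class SVM on bonafide (genuine) features only."""
    model = OneClassSVM(kernel="rbf", gamma="scale", tol=1e-3, nu=0.5)
    model.fit(bonafide_features)
    return model

def equal_error_rate(labels: np.ndarray, scores: np.ndarray) -> float:
    """EER: operating point where the miss rate equals the false alarm rate.

    labels: 1 for bonafide, 0 for spoofed; scores: higher means more bonafide-like.
    """
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))
    return float((fnr[idx] + fpr[idx]) / 2.0)

# Usage with placeholder arrays of utterance-level features:
# model = train_ocsvm(train_bonafide)                 # training set, bonafide only
# scores = model.decision_function(eval_features)     # larger = closer to the bonafide region
# print("EER:", equal_error_rate(eval_labels, scores))
```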
The replay channel response estimation follows the same pre-processing procedure as [9]: the WORLD vocoder [10] is used to simulate the replay environment, and the difference between the original audio and the simulated audio at the log-spectrogram level is taken as the feature. After temporal averaging, this feature is fed into the classifiers. For the Opus codec, we set the sample rate to 16000 Hz and the number of channels to 1, matching the audio files, for both the encoder and the decoder. The encoder's application mode is set to 'VoIP', which gives the best quality for voice signals at a given bitrate. We tried bitrates from 16k down to 8k, which are far lower than that of the original audio (about 256k). For data augmentation, different augmentation methods require different hyper-parameter settings. The maximum possible length of the mask of SpecAugment, for both frequency masking and time masking, is \(80\). The maximum proportion of time steps that can be masked is \(100\%\). For the room scenario simulation, the SNR of the AddNoise technique is \(10\). Noise data and RIR data are extracted from the VOiCES dataset [12] in our implementation. The coefficient of Pre-emphasis and De-emphasis is \(0.97\). Speed perturbation factors are set to \([0.9,1.1]\). All data augmentation methods are implemented with TorchAudio3. Footnote 3: [https://github.com/pytorch/audio](https://github.com/pytorch/audio) ### Analysis of Data Augmentation We begin our analysis by examining the performance of spectrum augmentation. The results are presented in Table 1. While the technique has been proven effective in the context of speech recognition, its use in the replay spoofing detection experiment did not result in any improvement. _The experimental results suggest that neither time nor frequency masking is able to increase robustness to unseen replay artifacts._ However, it is worth noting that the small size of the dataset used in the experiment may have limited the effectiveness of spectrum augmentation, and further research with larger datasets could yield different results. Moving on, we turn our attention to the performance of waveform augmentation. Table 1 presents the results of our assessment. Interestingly, we observed that adding reverberation and applying de-emphasis or pre-emphasis slightly decreased the EERs. It is possible that the added complexity introduced by these techniques helped the model better distinguish between genuine and spoofed audio. On the other hand, adding noise or adjusting speech speed appeared to be ineffective. These findings highlight the importance of careful selection and evaluation of augmentation techniques when working with audio data. ### Performance of Audio Compression/Codec In this subsection, we analyze the performance of the audio compression-assisted feature in detail. Firstly, we present the experimental results with varied bitrates in Table 2. It is worth noting that when the bitrate decreases from 16k to 10k, the EERs slightly increase for all 3 classifiers on both the progress and eval datasets. This suggests that a lower bitrate can still preserve enough content and speaker information. However, when the bitrate drops to 8k, the EER deteriorates significantly. This is because when the bitrate is too low, it impacts audio quality and intelligibility. Figure 2: Comparison of Mel spectrograms between the original audio and the 8k-bitrate Opus-compressed audio.
In fact, at an 8k bitrate, the audio compressed by Opus loses almost all of the high-frequency information, as shown in Fig. 2. Consequently, high-frequency noise cannot be extracted, which ultimately leads to poor results. We can also argue that while a lower bitrate may still preserve content and speaker information, it can have a significant impact on audio quality and perceptibility. When the bitrate drops to 8k, the audio quality degrades significantly, and this affects the overall result. Therefore, it is important to strike a balance between the bitrate and the audio quality in order to achieve the best possible outcome. In conclusion, while it is possible to achieve good results with lower bitrates, the quality of the compressed audio needs to be taken into account. A balance between bitrate and quality must be maintained to ensure optimal results. ### Performance of different classifiers Given the temporally averaged feature, we can employ a variety of classifiers, including VAE, OCSVM, and AnoGAN, to score the corresponding audio. The results of our tests show that OCSVM and AnoGAN outperform VAE, which was used in [9], on the evaluation dataset. Specifically, the equal error rates (EERs) achieved by OCSVM and AnoGAN are 24.21% and 24.20%, respectively, which are, in relative terms, \(2.10\%\) and \(2.14\%\) lower than the EER achieved by VAE on the same evaluation set. Consequently, it is reasonable to conclude that both OCSVM and AnoGAN are superior to VAE in terms of evaluating the audio feature. However, it is worth mentioning that on the progress dataset, OCSVM and AnoGAN achieve nearly the same performance as VAE does. This finding suggests that OCSVM and AnoGAN have better generalization performance on a larger dataset. As a result, the application of these classifiers can be extended to a broader range of audio content, which can provide more reliable results. The performance of the proposed feature extraction is presented in the last row of Table 1. It achieves the lowest EERs regardless of the classifier used. In summary, the proposed feature extraction that employs audio compression for assistance can considerably improve detection performance. However, when designing the feature extraction using audio compression, an appropriate bitrate needs to be chosen. ## 4 Conclusions This study proposes a feature extraction approach that utilizes audio compression for assistance. This is achieved by subtracting the waveform reconstructed by an audio codec from the original waveform. With the assistance of audio compression, we also explore the effect of different kinds of data augmentation techniques and compare their performance with three different one-class classifiers: a variational autoencoder (VAE), a one-class support vector machine (SVM), and AnoGAN, a deep convolutional generative adversarial network. The experiment conducted on the ASVspoof 2021 PA dataset suggests the effectiveness of the proposed approach, which achieves an equal error rate (EER) of 22.71%. To the best of our knowledge, this is the lowest EER achieved on the dataset.
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Augmentation Category} & \multirow{2}{*}{Data Augmentation} & \multicolumn{3}{c}{Progress} & \multicolumn{2}{c}{Eval} \\ \cline{3-7} & & VAE & OCSVM & AnoGAN & VAE & OCSVM & AnoGAN \\ \hline - & SOTA reference [9] & 23.60 & - & - & 24.25 & - & - \\ - & Reproduced [9] & 23.07 & 23.07 & 23.04 & 24.73 & 24.21 & 24.20 \\ SpecAugment & FrequencyMasking & 23.09 & 23.04 & 23.04 & 24.77 & 24.21 & 24.20 \\ SpecAugment & TemporalMasking & 23.09 & 23.06 & 23.09 & 24.76 & 24.20 & 24.24 \\ WaveAugment & AddNoise & 23.30 & 23.10 & 23.20 & 24.59 & 24.21 & 24.29 \\ WaveAugment & AddReverb & 22.80 & 22.95 & 22.99 & 24.37 & 24.01 & 24.07 \\ WaveAugment & AdjustSpeed & 24.36 & 22.95 & 23.00 & 25.83 & 24.15 & 24.15 \\ WaveAugment & De-emphasis & 22.92 & 23.11 & 23.09 & 24.43 & 24.20 & 24.21 \\ WaveAugment & Pre-emphasis & 23.00 & 23.10 & 23.07 & 24.14 & 24.21 & 24.21 \\ - & Audio Codec & **21.73** & **21.53** & **21.47** & **23.40** & **22.75** & **22.71** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance of the proposed feature extraction approach with three different classifier in comparison to different data augmentation strategies by Equal Error Rate (EER\(\downarrow\)). Note that the state-of-the-art (SOTA) reference is from a fusion system. The reproduced system is a single system that uses the WORLD vocoder for feature extraction. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Audio Codec} & \multirow{2}{*}{Bitrate} & \multicolumn{3}{c}{Progress} & \multicolumn{3}{c}{Eval} \\ \cline{3-8} & & VAE & OCSVM & AnoGAN & VAE & OCSVM & AnoGAN \\ \hline Opus & 16k & **21.73** & **21.53** & **21.47** & 23.40 & **22.75** & **22.71** \\ Opus & 14k & 21.82 & 21.75 & 21.74 & **23.21** & 22.91 & 22.90 \\ Opus & 12k & 22.21 & 21.77 & 21.68 & 23.66 & 22.95 & 22.89 \\ Opus & 10k & 22.02 & 21.87 & 21.81 & 23.59 & 23.13 & 23.08 \\ Opus & 8k & 23.67 & 23.11 & 23.10 & 25.59 & 24.99 & 24.97 \\ \hline \hline \end{tabular} \end{table} Table 2: Experiment results of different models under various bit-rate settings for audio codec, evaluated by Equal Error Rate (EER\(\downarrow\)).
2305.17079
Complete Multiparty Session Type Projection with Automata
Multiparty session types (MSTs) are a type-based approach to verifying communication protocols. Central to MSTs is a projection operator: a partial function that maps protocols represented as global types to correct-by-construction implementations for each participant, represented as a communicating state machine. Existing projection operators are syntactic in nature, and trade efficiency for completeness. We present the first projection operator that is sound, complete, and efficient. Our projection separates synthesis from checking implementability. For synthesis, we use a simple automata-theoretic construction; for checking implementability, we present succinct conditions that summarize insights into the property of implementability. We use these conditions to show that MST implementability is in PSPACE. This improves upon a previous decision procedure that is in EXPSPACE and applies to a smaller class of MSTs. We demonstrate the effectiveness of our approach using a prototype implementation, which handles global types not supported by previous work without sacrificing performance.
Elaine Li, Felix Stutz, Thomas Wies, Damien Zufferey
2023-05-26T16:38:37Z
http://arxiv.org/abs/2305.17079v3
# Complete Multiparty Session Type Projection with Automata ###### Abstract Multiparty session types (MSTs) are a type-based approach to verifying communication protocols. Central to MSTs is a _projection operator_: a partial function that maps protocols represented as global types to correct-by-construction implementations for each participant, represented as a communicating state machine. Existing projection operators are syntactic in nature, and trade efficiency for completeness. We present the first projection operator that is sound, complete, and efficient. Our projection separates synthesis from checking implementability. For synthesis, we use a simple automata-theoretic construction; for checking implementability, we present succinct conditions that summarize insights into the property of implementability. We use these conditions to show that MST implementability is PSPACE-complete. This improves upon a previous decision procedure that is in EXPSPACE and applies to a smaller class of MSTs. We demonstrate the effectiveness of our approach using a prototype implementation, which handles global types not supported by previous work without sacrificing performance. Keywords:Protocol verification Multiparty session types Communicating state machines Protocol fidelity Deadlock freedom. ## 1 Introduction Communication protocols are key components in many safety and operation critical systems, making them prime targets for formal verification. Unfortunately, most verification problems for such protocols (e.g. deadlock freedom) are undecidable [11]. To make verification computationally tractable, several restrictions have been proposed [2, 3, 10, 14, 42, 33]. In particular, multiparty session types (MSTs) [24] have garnered a lot of attention in recent years (see, e.g., the survey by Ancona et al. [6]). In the MST setting, a protocol is specified as a global type, which describes the desired interactions of all roles involved in the protocol. Local implementations describe behaviors for each individual role. The implementability problem for a global type asks whether there exists a collection of local implementations whose composite behavior when viewed as a communicating state machine (CSM) matches that of the global type and is deadlock-free. The synthesis problem is to compute such an implementation from an implementable global type. MST-based approaches typically solve synthesis and implementability simultaneously via an efficient syntactic _projection operator_[18, 24, 34, 41]. Abstractly, a projection operator is a partial map from global types to collections of implementations. A projection operator proj is sound when every global type \(\mathbf{G}\) in its domain is implemented by \(\texttt{proj}(\mathbf{G})\), and complete when every implementable global type is in its domain. Existing practical projection operators for MSTs are all incomplete (or unsound). Recently, the implementability problem was shown to be decidable for a class of MSTs via a reduction to safe realizability of globally cooperative high-level message sequence charts (HMSCs) [38]. In principle, this result yields a complete and sound projection operator for the considered class. However, this operator would not be practical. In particular, the proposed implementability check is in EXPSPACE. **Contributions.** In this paper, we present the first practical sound and complete projection operator for general MSTs. 
The synthesis problem for implementable global types is conceptually easy [38] - the challenge lies in determining whether a global type _is_ implementable. We thus separate synthesis from checking implementability. We first use a standard automata-theoretic construction to obtain a candidate implementation for a potentially non-implementable global type. However, unlike [38], we then verify the correctness of this implementation directly using efficiently checkable conditions derived from the global type. When a global type is not implementable, our constructive completeness proof provides a counterexample trace. The resulting projection operator yields a PSPACE decision procedure for implementability. In fact, we show that the implementability problem is PSPACE-complete. These results both generalize and tighten the decidability and complexity results obtained in [38]. We evaluate a prototype of our projection algorithm on benchmarks taken from the literature. Our prototype benefits from both the efficiency of existing lightweight but incomplete syntactic projection operators [18, 24, 34, 41], and the generality of heavyweight automata-based model checking techniques [28, 36]: it handles protocols rejected by previous practical approaches while preserving the efficiency that makes MST-based techniques so attractive. ## 2 Motivation and Overview **Incompleteness of existing projection operators.** A key limitation of existing projection operators is that the implementation for each role is obtained via a linear traversal of the global type, and thus shares its structure. The following example, which is not projectable by any existing approach, demonstrates how enforcing structural similarity can lead to incompleteness. Example 2.1 (Odd-even): Consider the following global type \(\mathbf{G}_{oe}\): \[+\left\{\begin{array}{l}\mathtt{p}\!\rightarrow\!\mathtt{q}\!:\!\mathtt{o}.\ \mathtt{q}\!\rightarrow\!\mathtt{r}\!:\!\mathtt{o}.\ \mu t_{1}.\left(\mathtt{p}\!\rightarrow\!\mathtt{q}\!:\!\mathtt{o}.\ \mathtt{q}\!\rightarrow\!\mathtt{r}\!:\!\mathtt{o}.\ \mathtt{q}\!\rightarrow\!\mathtt{r}\!:\!\mathtt{o}.\ t_{1}\ +\ \mathtt{p}\!\rightarrow\!\mathtt{q}\!:\!\mathtt{b}.\ \mathtt{q}\!\rightarrow\!\mathtt{r}\!:\!\mathtt{b}.\ \mathtt{r}\!\rightarrow\!\mathtt{p}\!:\!\mathtt{o}.\ 0\right)\\ \mathtt{p}\!\rightarrow\!\mathtt{q}\!:\!\mathtt{m}.\ \mu t_{2}.\left(\mathtt{p}\!\rightarrow\!\mathtt{q}\!:\!\mathtt{o}.\ \mathtt{q}\!\rightarrow\!\mathtt{r}\!:\!\mathtt{o}.\ \mathtt{q}\!\rightarrow\!\mathtt{r}\!:\!\mathtt{o}.\ t_{2}\ +\ \mathtt{p}\!\rightarrow\!\mathtt{q}\!:\!\mathtt{b}.\ \mathtt{q}\!\rightarrow\!\mathtt{r}\!:\!\mathtt{b}.\ \mathtt{r}\!\rightarrow\!\mathtt{p}\!:\!\mathtt{m}.\ 0\right)\end{array}\right.\]
**Our automata-theoretic approach.** The synthesis step in our projection operator uses textbook automata-theoretic constructions. From a given global type, we derive a finite state machine, and use it to define a homomorphism automaton for each role. We then determinize this homomorphism automaton via subset construction to obtain a local candidate implementation for each role. If the global type is implementable, this construction always yields an implementation. The implementations shown in Figs. 1(b) to 1(d) are the result of applying this construction to \(\mathbf{G}_{oe}\) from Example 2.1. Notice that the state labels in Fig. 1(d) correspond to sets of labels in the global protocol. Unfortunately, not all global types are implementable. Example 2.2: Consider the following four global types, also depicted in Fig. 2 (Figure 2: High-level message sequence charts for the global types of Example 2.2). Similar to \(\mathbf{G}_{oe}\), in all four examples, \(\mathtt{p}\) chooses a branch by sending either \(\mathtt{o}\) or \(\mathtt{m}\) to \(\mathtt{q}\). The global type \(\mathbf{G}_{r}\) is not implementable because \(\mathtt{r}\) cannot learn which branch was chosen by \(\mathtt{p}\). For any local implementation of \(\mathtt{r}\) to be able to execute both branches, it must be able to receive \(\mathtt{o}\) from \(\mathtt{p}\) and \(\mathtt{q}\) in any order. Because the two send events \(\mathtt{p}\triangleright\mathtt{r}!\mathtt{o}\) and \(\mathtt{q}\triangleright\mathtt{r}!\mathtt{o}\) are independent of each other, they may be reordered. Consequently, any implementation of \(\mathbf{G}_{r}\) would have to permit executions that are consistent with global behaviors not described by \(\mathbf{G}_{r}\), such as \(\mathtt{p}\!\rightarrow\!\mathtt{q}\!:\!\mathtt{m}.\ \mathtt{q}\!\rightarrow\!\mathtt{r}\!:\!\mathtt{o}.\ \mathtt{p}\!\rightarrow\!\mathtt{r}\!:\!\mathtt{o}\). Contrast this with \(\mathbf{G}_{r}^{\prime}\), which is implementable. In the top branch of \(\mathbf{G}_{r}^{\prime}\), role \(\mathtt{p}\) can only send to \(\mathtt{r}\) after it has received from \(\mathtt{r}\), which prevents the reordering of the send events \(\mathtt{p}\triangleright\mathtt{r}!\mathtt{o}\) and \(\mathtt{q}\triangleright\mathtt{r}!\mathtt{o}\). The bottom branch is symmetric. Hence, \(\mathtt{r}\) learns \(\mathtt{p}\)'s choice based on which message it receives first. For the global type \(\mathbf{G}_{s}\), role \(\mathtt{r}\) again cannot learn the branch chosen by \(\mathtt{p}\). That is, \(\mathtt{r}\) cannot know whether to send \(\mathtt{o}\) or \(\mathtt{m}\) to \(\mathtt{q}\), leading inevitably to deadlocking executions. In contrast, \(\mathbf{G}_{s}^{\prime}\) is again implementable because the expected behavior of \(\mathtt{r}\) is independent of the choice by \(\mathtt{p}\). These examples show that the implementability question is non-trivial. To check implementability, we present conditions that precisely characterize when the subset construction for \(\mathbf{G}\) yields an implementation. **Overview.** The rest of the paper is organized as follows. SS3 contains relevant definitions for our work. SS4 describes the synthesis step of our projection. SS5 presents the two conditions that characterize implementability of a given global type.
In SS6, we prove soundness of our projection via a stronger inductive invariant guaranteeing per-role agreement on a global run of the protocol. In SS7, we prove completeness by showing that our two conditions hold if a global type is implementable. In SS8, we discuss the complexity of our construction and condition checks. SS9 presents our artifact and evaluation, and SS10 as well as SS11 discuss related work. ## 3 Preliminaries Words.Let \(\Sigma\) be a finite alphabet. \(\Sigma^{*}\) denotes the set of finite words over \(\Sigma\), \(\Sigma^{\omega}\) the set of infinite words, and \(\Sigma^{\infty}\) their union \(\Sigma^{*}\cup\Sigma^{\omega}\). A word \(u\in\Sigma^{*}\) is a _prefix_ of word \(v\in\Sigma^{\infty}\), denoted \(u\leq v\), if there exists \(w\in\Sigma^{\infty}\) with \(u\cdot w=v\). Message Alphabet.Let \(\mathcal{P}\) be a set of roles and \(\mathcal{V}\) be a set of messages. We define the set of _synchronous events_\(\Sigma_{sync}:=\{\mathtt{p}\!\rightarrow\!\mathtt{q}\!:\!m\mid\mathtt{p}, \mathtt{q}\in\mathcal{P}\text{ and }m\in\mathcal{V}\}\) where \(\mathtt{p}\!\rightarrow\!\mathtt{q}\!:\!m\) denotes that message \(m\) is sent by \(\mathtt{p}\) to \(\mathtt{q}\) atomically. This is split for _asynchronous events_. For a role \(\mathtt{p}\in\mathcal{P}\), we define the alphabet \(\Sigma_{\mathtt{p},!}=\{\mathtt{p}\!\triangleright\mathtt{q}!m\mid\mathtt{q}\in \mathcal{P},\ m\in\mathcal{V}\}\) of _send_ events and the alphabet \(\Sigma_{\mathtt{p},?}=\{\mathtt{p}\!\triangleleft\mathtt{q}?m\mid\mathtt{q}\in \mathcal{P},\ m\in\mathcal{V}\}\) of _receive_ events. The event \(\mathtt{p}\!\triangleright\mathtt{q}!m\) denotes role \(\mathtt{p}\) sending a message \(m\) to \(\mathtt{q}\), and \(\mathtt{p}\!\triangleleft\mathtt{q}?m\) denotes role \(\mathtt{p}\) receiving a message \(m\) from \(\mathtt{q}\). We write \(\Sigma_{\mathtt{p}}=\Sigma_{\mathtt{p},!}\cup\Sigma_{\mathtt{p},?}\), \(\Sigma_{!}=\bigcup_{\mathtt{p}\in\mathcal{P}}\Sigma_{\mathtt{p},!}\), and \(\Sigma_{?}=\bigcup_{\mathtt{p}\in\mathcal{P}}\Sigma_{\mathtt{p},?}\). Finally, \(\Sigma_{async}=\Sigma_{!}\cup\Sigma_{?}\). We say that \(\mathtt{p}\) is _active_ in \(x\in\Sigma_{async}\) if \(x\in\Sigma_{\mathtt{p}}\). For each role \(\mathtt{p}\in\mathcal{P}\), we define a homomorphism \(\Downarrow_{\Sigma_{\mathtt{p}}}\), where \(x\Downarrow_{\Sigma_{\mathtt{p}}}=x\) if \(x\in\Sigma_{\mathtt{p}}\) and \(\varepsilon\) otherwise. We write \(\mathcal{V}(w)\) to project the send and receive events in \(w\) onto their messages. We fix \(\mathcal{P}\) and \(\mathcal{V}\) in the rest of the paper. Global Types - Syntax.Global types for MSTs [31] are defined by the grammar: \[G::=0\ \mid\ \sum_{i\in I}\mathtt{p}\!\rightarrow\!\mathtt{q}_{i}\!:\!m_{i}.G_{i} \ \mid\ \mu t.\ G\ \mid\ t\] where \(\mathtt{p},\mathtt{q}_{i}\) range over \(\mathcal{P}\), \(m_{i}\) over \(\mathcal{V}\), and \(t\) over a set of recursion variables. We require each branch of a choice to be distinct: \(\forall i,j\in I.\,i\neq j\Rightarrow(\mathtt{q}_{i},m_{i})\neq(\mathtt{q}_{j},m_{j})\), the sender and receiver of an atomic action to be distinct: \(\forall i\in I.\,\mathtt{p}\neq\mathtt{q}_{i}\), and recursion to be guarded: in \(\mu t.\,G\), there is at least one message between \(\mu t\) and each \(t\) in \(G\). When \(|I|=1\), we omit \(\sum\). For readability, we sometimes use the infix operator \(+\) for choice, instead of \(\sum\). 
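To make the grammar concrete, the following sketch represents global types as a small algebraic data type in Python and checks the distinct-branches and sender-receiver side conditions stated above; the class names and the example are our own illustrative choices, not part of the paper's formal development.

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class End:                      # 0
    pass

@dataclass(frozen=True)
class Choice:                   # sum_{i in I} p -> q_i : m_i . G_i
    sender: str
    branches: Tuple[Tuple[str, str, "GlobalType"], ...]   # (receiver q_i, message m_i, continuation G_i)

@dataclass(frozen=True)
class Rec:                      # mu t. G
    var: str
    body: "GlobalType"

@dataclass(frozen=True)
class Var:                      # t
    var: str

GlobalType = Union[End, Choice, Rec, Var]

def well_formed_choice(c: Choice) -> bool:
    """Branches must be pairwise distinct in (receiver, message), and sender != receiver."""
    labels = [(q, m) for q, m, _ in c.branches]
    return len(set(labels)) == len(labels) and all(q != c.sender for q, _, _ in c.branches)

# A two-branch example where p picks a branch by sending either o or m to q.
g = Choice("p", (("q", "o", End()), ("q", "m", End())))
print(well_formed_choice(g))   # True
```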
When working with a protocol described by a global type, we write \(\mathbf{G}\) to refer to the top-level type, and we use \(G\) to refer to its subterms. For the size of a global type, we disregard multiple occurrences of the same subterm. We use the extended definition of global types from [31] that allows a sender to send messages to different roles in a choice. We call this _sender-driven choice_, as in [38], while it was called generalized choice in [31]. This definition subsumes classical MSTs that only allow _directed choice_[24]. The types we use focus on communication primitives and omit features like delegation or parametrization. We defer a detailed discussion of different MST frameworks to SS11. Global Types - Semantics.As a basis for the semantics of a global type \(\mathbf{G}\), we construct a finite state machine \(\mathsf{GAut}(\mathbf{G})=(Q_{\mathbf{G}},\Sigma_{\mathit{sync}},\delta_{ \mathbf{G}},q_{0,\mathbf{G}},F_{\mathbf{G}})\) where * \(Q_{\mathbf{G}}\) is the set of all syntactic subterms in \(\mathbf{G}\) together with the term \(0\), * \(\delta_{\mathbf{G}}\) is the smallest set containing \((\sum_{i\in I}\mathtt{p}\rightarrow\mathtt{q}_{i}:m_{i}.G_{i},\mathtt{p} \rightarrow\mathtt{q}_{i}:m_{i},G_{i})\) for each \(i\in I\), as well as \((\mu t.G^{\prime},\varepsilon,G^{\prime})\) and \((t,\varepsilon,\mu t.G^{\prime})\) for each subterm \(\mu t.G^{\prime}\), * \(q_{0,\mathbf{G}}=\mathbf{G}\) and \(F_{\mathbf{G}}=\{0\}\). We define a homomorphism \(\mathtt{split}\) onto the asynchronous alphabet: \[\mathtt{split}(\mathtt{p}\rightarrow\mathtt{q}:m):=\mathtt{p}\triangleright \mathtt{q}!m.\mathtt{q}\triangleleft\mathtt{p}?m\enspace.\] The semantics \(\mathcal{L}(\mathbf{G})\) of a global type \(\mathbf{G}\) is given by \(\mathcal{C}^{\sim}(\mathtt{split}(\mathcal{L}(\mathsf{GAut}(\mathbf{G}))))\) where \(\mathcal{C}^{\sim}\) is the closure under the indistinguishability relation \(\sim\)[31]. Two events are independent if they are not related by the _happened-before_ relation [26]. For instance, any two send events from distinct senders are independent. Two words are indistinguishable if one can be reordered into the other by repeatedly swapping consecutive independent events. The full definition is in Appendix 0.A.2. Communicating State Machine [11].\(\mathcal{A}=\{\!\!\{A_{\mathtt{p}}\}\!\}_{\mathtt{p}\in\mathcal{P}}\) is a CSM over \(\mathcal{P}\) and \(\mathcal{V}\) if \(A_{\mathtt{p}}\) is a finite state machine over \(\Sigma_{\mathtt{p}}\) for every \(\mathtt{p}\in\mathcal{P}\), denoted by \((Q_{\mathtt{p}},\Sigma_{\mathtt{p}},\delta_{\mathtt{p}},q_{0,\mathtt{p}},F_{ \mathtt{p}})\). Let \(\prod_{\mathtt{p}\in\mathcal{P}}s_{\mathtt{p}}\) denote the set of global states and \(\mathsf{Chan}=\{(\mathtt{p},\mathtt{q})\mid\mathtt{p},\mathtt{q}\in\mathcal{P},\mathtt{p}\neq\mathtt{q}\}\) denote the set of channels. A _configuration_ of \(\mathcal{A}\) is a pair \((\vec{s},\xi)\), where \(\vec{s}\) is a global state and \(\xi:\mathsf{Chan}\rightarrow\mathcal{V}^{*}\) is a mapping from each channel to a sequence of messages. We use \(\vec{s}_{\mathtt{p}}\) to denote the state of \(\mathtt{p}\) in \(\vec{s}\). The CSM transition relation, denoted \(\rightarrow\), is defined as follows. 
* \((\vec{s},\xi)\xrightarrow{\mathtt{p}\mathtt{q}!m}(\vec{s}^{\prime},\xi^{ \prime})\) if \((\vec{s}_{\mathtt{p}},\mathtt{p}\triangleright\mathtt{q}!m,\vec{s}^{\prime}_{ \mathtt{p}})\in\delta_{\mathtt{p}}\), \(\vec{s}_{\mathtt{r}}=\vec{s}^{\prime}_{\mathtt{r}}\) for every role \(\mathtt{r}\neq\mathtt{p}\), \(\xi^{\prime}(\mathtt{p},\mathtt{q})=\xi(\mathtt{p},\mathtt{q})\cdot m\) and \(\xi^{\prime}(c)=\xi(c)\) for every other channel \(c\in\mathsf{Chan}\). * \((\vec{s},\xi)\xrightarrow{\mathtt{q}\triangleleft\mathtt{p}?m}(\vec{s}^{\prime}, \xi^{\prime})\) if \((\vec{s}_{\mathtt{q}},\mathtt{q}\triangleleft\mathtt{p}?m,\vec{s}^{\prime}_{ \mathtt{q}})\in\delta_{\mathtt{q}}\), \(\vec{s}_{\mathtt{r}}=\vec{s}^{\prime}_{\mathtt{r}}\) for every role \(\mathtt{r}\neq\mathtt{q}\), \(\xi(\mathtt{p},\mathtt{q})=m\cdot\xi^{\prime}(\mathtt{p},\mathtt{q})\) and \(\xi^{\prime}(c)=\xi(c)\) for every other channel \(c\in\mathsf{Chan}\). In the initial configuration \((\vec{s}_{0},\xi_{0})\), each role's state in \(\vec{s}_{0}\) is the initial state \(q_{0,\mathtt{p}}\) of \(A_{\mathtt{p}}\), and \(\xi_{0}\) maps each channel to \(\varepsilon\). A configuration \((\vec{s},\xi)\) is said to be _final_ iff \(\vec{s}_{\mathtt{p}}\) is final for every \(\mathtt{p}\) and \(\xi\) maps each channel to \(\varepsilon\). Runs and traces are defined in the expected way. A run is _maximal_ if either it is finite and ends in a final configuration, or it is infinite. The language \(\mathcal{L}(\mathcal{A})\) of the CSM \(\mathcal{A}\) is defined as the set of maximal traces. A configuration \((\vec{s},\xi)\) is a _deadlock_ if it is not final and has no outgoing transitions. A CSM is _deadlock-free_ if no reachable configuration is a deadlock. Finally, implementability is formalized as follows. Definition 1 (Implementability [31]): A global type \(\mathbf{G}\) is _implementable_ if there exists a CSM \(\{\!\!\{A_{\mathtt{p}}\}\!\}_{\mathtt{p}\in\mathcal{P}}\) such that the following two properties hold: (i) \(\mathrm{protocol\,fidelity}\colon\mathcal{L}(\{\!\!\{A_{\mathtt{p}}\}\!\}_{ \mathtt{p}\in\mathcal{P}})=\mathcal{L}(\mathbf{G})\), and (ii) \(\mathrm{deadlock\,freedom}\colon\{\!\!\{A_{\mathtt{p}}\}\!\}_{\mathtt{p}\in\mathcal{P}}\) is deadlock-free. We say that \(\{\!\!\{A_{\mathtt{p}}\}\!\}_{\mathtt{p}\in\mathcal{P}}\) implements \(\mathbf{G}\). ## 4 Synthesizing Implementations The construction is carried out in two steps. First, for each role \(\mathtt{p}\in\mathcal{P}\), we define an intermediate state machine \(\mathsf{GAut}(\mathbf{G})\!\!\downarrow_{\mathtt{p}}\) that is a homomorphism of \(\mathsf{GAut}(\mathbf{G})\). We call \(\mathsf{GAut}(\mathbf{G})\!\!\downarrow_{\mathtt{p}}\) the _projection by erasure_ for \(\mathtt{p}\), defined below. Definition 4.1 (Projection by Erasure): Let \(\mathbf{G}\) be some global type with its state machine \(\mathsf{GAut}(\mathbf{G})=(Q_{\mathbf{G}},\Sigma_{sync},\delta_{\mathbf{G}},q _{0,\mathbf{G}},F_{\mathbf{G}})\). For each role \(\mathtt{p}\in\mathcal{P}\), we define the state machine \(\mathsf{GAut}(\mathbf{G})\!\!\downarrow_{\mathtt{p}}=(Q_{\mathbf{G}},\Sigma_{ \mathtt{p}}\uplus\{\varepsilon\},\delta_{\downarrow},q_{0,\mathbf{G}},F_{ \mathbf{G}})\) where \(\delta_{\downarrow}:=\{q\ \frac{\texttt{split}(a)\Downarrow_{\Sigma_{ \mathtt{p}}}}{q^{\prime}}\mid q\xrightarrow{a}q^{\prime}\in\delta_{\mathbf{G}}\}\). By definition of \(\texttt{split}(\cdot)\), it holds that \(\texttt{split}(a)\Downarrow_{\Sigma_{\mathtt{p}}}\in\Sigma_{\mathtt{p}} \uplus\{\varepsilon\}\). 
Then, we determinize \(\mathsf{GAut}(\mathbf{G})\!\!\downarrow_{\mathtt{p}}\) via a standard subset construction to obtain a deterministic local state machine for \(\mathtt{p}\). Definition 4.2 (Subset Construction): Let \(\mathbf{G}\) be a global type and \(\mathtt{p}\) be a role. Then, the _subset construction_ for \(\mathtt{p}\) is defined as * \(\delta(s,a):=\{q^{\prime}\in Q_{\mathbf{G}}\mid\exists q\in s,q\xrightarrow{a} \xrightarrow{\varepsilon}\ast^{q}q^{\prime}\in\delta_{\downarrow}\}\), for every \(s\subseteq Q_{\mathbf{G}}\) and \(a\in\Sigma_{\mathtt{p}}\) * \(s_{0,\mathtt{p}}:=\{q\in Q_{\mathbf{G}}\mid q_{0,\mathbf{G}}\xrightarrow{ \varepsilon}\ast^{q}q\in\delta_{\downarrow}\}\), * \(Q_{\mathtt{p}}:=\operatorname{lfp}_{\{s_{0,\mathtt{p}}\}}^{\subseteq}\!\lambda Q.\,Q\cup\{\delta(s,a)\mid s\in Q\wedge a\in\Sigma_{\mathtt{p}}\}\setminus\{ \emptyset\}\), and * \(\delta_{\mathtt{p}}:=\delta|_{Q_{\mathtt{p}}\times\Sigma_{\mathtt{p}}^{\prime}}\) * \(F_{\mathtt{p}}:=\{s\in Q_{\mathtt{p}}\mid s\cap F_{\mathbf{G}}\neq\emptyset\}\) Note that the construction ensures that \(Q_{\mathtt{p}}\) only contains subsets of \(Q_{\mathbf{G}}\) whose states are reachable via the same traces, i.e. we typically have \(|Q_{\mathtt{p}}|\ll 2^{|Q_{\mathbf{G}}|}\). The following characterization is immediate from the subset construction; the proof can be found in Appendix 0.B.1. Lemma 4.3: _Let \(\mathbf{G}\) be a global type, \(\mathtt{r}\) be a role, and \(\mathscr{C}(\mathbf{G},\mathtt{r})\) be its _subset construction_. If \(w\) is a trace of \(\mathsf{GAut}(\mathbf{G})\), \(\texttt{split}(w)\!\!\downarrow_{\Sigma_{\mathtt{r}}}\) is a trace of \(\mathscr{C}(\mathbf{G},\mathtt{r})\). If \(u\) is a trace of \(\mathscr{C}(\mathbf{G},\mathtt{r})\), there is a trace \(w\) of \(\mathsf{GAut}(\mathbf{G})\) such that \(\texttt{split}(w)\!\!\downarrow_{\Sigma_{\mathtt{r}}}=u\). It holds that \(\mathcal{L}(\mathbf{G})\!\!\Downarrow_{\Sigma_{\mathtt{r}}}=\mathcal{L}( \mathscr{C}(\mathbf{G},\mathtt{r}))\)._ Using this lemma, we show that the CSM \(\{\!\!\{\mathscr{C}(\mathbf{G},\mathtt{p})\}\!\!\}_{\mathtt{p}\in\mathcal{P}}\) preserves all behaviors of \(\mathbf{G}\). Lemma 4.4: _For all global types \(\mathbf{G}\), \(\mathcal{L}(\mathbf{G})\subseteq\mathcal{L}(\{\!\!\{\mathscr{C}(\mathbf{G}, \mathtt{p})\}\!\!\}_{\mathtt{p}\in\mathcal{P}})\)._ We briefly sketch the proof here. Given that \(\{\!\!\{\!\{\!\mathscr{C}(\mathbf{G},\mathtt{p})\}\!\!\}_{\mathtt{p}\in\mathcal{P}}\) is deterministic, to prove language inclusion it suffices to prove the inclusion of the respective prefix sets: \[\operatorname{pref}(\mathcal{L}(\mathbf{G}))\subseteq\operatorname{pref}( \mathcal{L}\{\!\!\{\mathscr{C}(\mathbf{G},\mathtt{p})\}\!\!\}_{\mathtt{p}\in \mathcal{P}})\] Let \(w\) be a word in \(\mathcal{L}(\mathbf{G})\). If \(w\) is finite, membership in \(\mathcal{L}(\{\!\!\{\mathscr{C}(\mathbf{G},\mathtt{p})\}\!\!\}_{\mathtt{p}\in \mathcal{P}})\) is immediate from the claim above. If \(w\) is infinite, we show that \(w\) has an infinite run in \(\{\!\!\{\mathscr{C}(\mathbf{G},\mathtt{p})\}\!\}_{\mathbb{P}\in\mathcal{P}}\) using Konig's Lemma. We construct an infinite graph \(\mathcal{G}_{w}(V,E)\) with \(V:=\{v_{\rho}\mid\mathtt{trace}(\rho)\leq w\}\) and \(E:=\{(v_{\rho_{1}},v_{\rho_{2}})\mid\exists\;x\in\Sigma_{async.}\mathtt{trace} (\rho_{2})=\mathtt{trace}(\rho_{1})\cdot x\}\). 
Because \(\{\!\!\{\mathscr{C}(\mathbf{G},\mathtt{p})\}\!\}_{\mathbb{P}\in\mathcal{P}}\) is deterministic, \(\mathcal{G}_{w}\) is a tree rooted at \(v_{\varepsilon}\), the vertex corresponding to the empty run. By Konig's Lemma, every infinite tree contains either a vertex of infinite degree or an infinite path. Because \(\{\!\!\{\mathscr{C}(\mathbf{G},\mathtt{p})\}\!\}_{\mathbb{P}\in\mathcal{P}}\) consists of a finite number of communicating state machines, the last configuration of any run has a finite number of next configurations, and \(\mathcal{G}_{w}\) is finitely branching. Therefore, there must exist an infinite path in \(\mathcal{G}_{w}\) representing an infinite run for \(w\), and thus \(w\in\mathcal{L}(\{\!\!\{\mathscr{C}(\mathbf{G},\mathtt{p})\}\!\}_{\mathbb{P}\in \mathcal{P}})\). The proof of the inclusion of prefix sets proceeds by structural induction and primarily relies on Lemma 4.3 and the fact that all prefixes in \(\mathcal{L}(\mathbf{G})\) respect the order of send before receive events. ## 5 Checking Implementability We now turn our attention to checking implementability of a CSM produced by the subset construction. We revisit the global types from Example 2.2 (also shown in Fig. 2), which demonstrate that the naive subset construction does not always yield a sound implementation. From these examples, we distill our conditions that precisely identify the implementable global types. In general, a global type \(\mathbf{G}\) is not implementable when the agreement on a global run of \(\mathsf{GAut}(\mathbf{G})\) among all participating roles cannot be conveyed via sending and receiving messages alone. When this happens, roles can take locally permitted transitions that commit to incompatible global runs, resulting in a trace that is not specified by \(\mathbf{G}\). Consequently, our conditions need to ensure that when a role \(\mathtt{p}\) takes a transition in \(\mathscr{C}(\mathbf{G},\mathtt{p})\), it only commits to global runs that are consistent with the local views of all other roles. We discuss the relevant conditions imposed on send and receive transitions separately. **Send Validity.** Consider \(\mathbf{G}_{s}\) from Example 2.2. 
The CSM \(\{\!\!\{\mathscr{C}(\mathbf{G}_{s},\mathtt{p})\}\!\}_{\mathbb{P}\in\mathcal{P}}\) has an execution with the trace \(\mathtt{p\!\!\triangleright\!q!^{\!\mathit{\mathit{\mathit{\mathit{\mathit{ \mathit{\mathit{\mathit{\mathit{\mathit{\mathit{\mathit{\mathit{\mathit{\mathit{ \mathitmathit{ \mathitmathitmathitmathit{ \mathitmathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathitmathit{ \mathitmathit{ \mathitmathitmathit{ \mathitmathit{ \mathitmathitmathit{ \mathitmathitmathit{ \mathitmathit{ \mathitmathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathitmathit{ \mathit{ \mathitmathit{ \mathitmathit{ \mathit{ \mathitmathit{ \mathitmathit{ \mathit{ \mathitmathit{ \mathit{ \mathitmathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathitmathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ { \mathit{ \mathit{ { \mathit{ { \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ { \mathit{ \mathit{ { \mathit{ { \mathit{ \mathit{ { \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ \mathit{ { \mathit{ { \mathit{ { \mathit{ { \mathit{ \mathit{ { \mathit{ { { \mathit{ { \mathit{ { \mathit{ { \mathit{ { \mathit{ { \mathit{ { \mathit{ { { \mathit{ { \mathit{ { \mathit{ { { \mathit{ { \mathit{ { \mathit{{ { \mathit{ { \mathit{{ { \mathit{{ { \mathit{{ { \mathit{{ { \mathit{{ { \mathit{{ { \mathit{{ { \mathit{{ { \mathit{{ { \mathit{ { { \mathit{{ { \mathit{{ { \mathit{ { { \mathit{ { { \mathit{{ { \mathit{{ { \mathit{{ { { \mathit{{ { \mathit{{ { \mathit{ { \mathit{{ { \mathit{ { \mathit{ { \mathit{{ { \mathit{{ { \mathit{ { \mathit{ { { \mathit{ { \mathit{ { \mathit{ { { \mathit{ { \mathit{ { \mathit{ { { \mathit{ { \mathit{ { { \mathit{ { \mathit{ { { \mathit{ { { \mathit{ { { \mathit{ { { \mathit{ { { \mathit{ { \mathit{ { { \mathit{ { { \mathit{ { { \mathit{ { { \mathit{ { { \mathit{ { { \mathit{ { { \mathit{ { { \mathit{ { { \mathit{ { { \mathit{ { { \mathit{ { { \mathit{ { { \mathit{ { { { \mathit{ { { \mathit{ { { \mathit{ { { \mathit{ { { \mathit{ { { \mathit{ { { { \mathit{ { { { \mathit{ { { { \mathit{ { { \mathit{ { **Definition 5.1** (Transition Origin and Destination).: _Let \(s\xrightarrow{x}s^{\prime}\in\delta_{\mathtt{p}}\) be a transition in \(\mathscr{C}(\mathbf{G},\mathtt{p})\) and \(\delta_{\downarrow}\) be the transition relation of \(\mathsf{GAut}(\mathbf{G})\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! Available messages.The set of available messages is recursively defined on the structure of the global type. To obtain all possible messages, we need to unfold the distinct recursion variables once. For this, we define a map \(get\mu\) from variable to subterms and write \(get\mu_{\mathbf{G}}\) for \(get\mu(\mathbf{G})\): \[\begin{array}{c}get\mu(0):=[\,]\qquad\quad get\mu(t):=[\,]\qquad\quad get\mu (\mu t.G):=[t\mapsto G]\cup get\mu(G)\\ \qquad\quad get\mu(\sum_{i\in I}\mathtt{p}\!\rightarrow\!\mathtt{q}_{i}\!:\!m _{i}.G_{i}):=\bigcup_{i\in I}get\mu(G_{i})\end{array}\] The function \(M^{\mathcal{B},T}_{(\cdots)}\) keeps a set of unfolded variables \(T\), which is empty initially. \[M^{\mathcal{B},T}_{(0\ldots)}:=\emptyset\qquad\quad M^{\mathcal{B},T}_{(\mu t.G\ldots)}:=M^{\mathcal{B},T\cup\{t\}}_{(G\ldots)}\qquad\quad M^{\mathcal{B},T }_{(t\ldots)}:=\begin{cases}\emptyset&\text{if }t\in T\\ M^{\mathcal{B},T\cup\{t\}}_{(get\mu_{\mathbf{G}}(t)\ldots)}&\text{if }t\not\in T \end{cases}\] \[M^{\mathcal{B},T}_{(\sum_{i\in I}\mathtt{p}\!\rightarrow\!\mathtt{q}_{i}\!:\! m_{i}.G_{i}\ldots)}:=\begin{cases}\bigcup_{i\in I,m\in\mathtt{\mathcal{V}}}(M^{ \mathcal{B},T}_{(G_{i}\ldots)}\setminus\{\mathtt{q}_{i}\!\!\circ\!\mathtt{p}?m \})\cup\{\mathtt{q}_{i}\!\!\circ\!\mathtt{p}?m_{i}\}&\text{if }\mathtt{p}\not\in \mathcal{B}\\ \bigcup_{i\in I}M^{\mathcal{B}\cup\{\mathtt{q}_{i}\},T}_{(G_{i}\ldots)}&\text{ if }\mathtt{p}\in\mathcal{B}\end{cases}\] We write \(M^{\mathcal{B}}_{(G\ldots)}\) for \(M^{\mathcal{B},\emptyset}_{(G\ldots)}\). If \(\mathcal{B}\) is a singleton set, we omit set notation and write \(M^{\mathtt{p}}_{(G\ldots)}\) for \(M^{\{\mathtt{p}\}}_{(G\ldots)}\). The set of available messages captures the possible states of all channels before a given receive transition is taken. Definition 5.3 (Receive Validity): \(\mathscr{C}(\mathbf{G},\mathtt{p})\) satisfies _Receive Validity_ iff no receive transition is enabled in an alternative continuation that originates from the same source state: \[\forall s\xrightarrow{\mathtt{p}\mathtt{q}_{1}?m_{1}}s_{1},\,s \xrightarrow{\mathtt{p}\mathtt{q}_{2}?m_{2}}s_{2}\in\delta_{\mathtt{p}}.\] \[\mathtt{q}_{1}\neq\mathtt{q}_{2}\ \implies\ \forall\ G_{2}\in\text{tr-dest}(s \xrightarrow{\mathtt{p}\mathtt{q}_{2}?m_{2}}s_{2}).\,\mathtt{q}_{1}\triangleright \mathtt{p}!m_{1}\notin M^{\mathtt{p}}_{(G_{2}\ldots)}\enspace.\] Subset Projection.We are now ready to define our projection operator. Definition 5.4 (Subset Projection of \(\mathbf{G}\)): The _subset projection_\(\mathscr{P}(\mathbf{G},\mathtt{p})\) of \(\mathbf{G}\) onto \(\mathtt{p}\) is \(\mathscr{C}(\mathbf{G},\mathtt{p})\) if it satisfies Send Validity and Receive Validity. We lift this operation to a partial function from global types to CSMs in the expected way. 
We conclude our discussion with an observation about the syntactic structure of the subset projection: Send Validity implies that no state has both outgoing send and receive transitions (also known as mixed choice). Corollary 5.5 (No Mixed Choice): _If \(\mathscr{P}(\mathbf{G},\mathtt{p})\) satisfies Send Validity, then for all \(s\xrightarrow{x_{1}}s_{1},s\xrightarrow{x_{2}}s_{2}\in\delta_{\mathtt{p}}\), \(x_{1}\in\Sigma_{!}\) iff \(x_{2}\in\Sigma_{!}\)._ ## 6 Soundness In this section, we prove the soundness of our subset projection, stated as follows. Theorem 6.1: _Let \(\mathbf{G}\) be a global type and \(\{\!\! Recall that implementability is defined as protocol fidelity and deadlock freedom. Protocol fidelity consists of two language inclusions. The first inclusion, \(\mathcal{L}(\mathbf{G})\subseteq\mathcal{L}(\{\!\{\mathscr{P}(\mathbf{G},\mathtt{p}) \}\!\}_{\mathtt{p}\in\mathcal{P}})\), enforces that the subset projection generates at least all behaviors of the global type. We showed in Lemma 4.4 that this holds for the subset construction alone (without Send and Receive Validity). The second inclusion, \(\mathcal{L}(\{\!\{\mathscr{P}(\mathbf{G},\mathtt{p})\}\!\}_{\mathtt{p}\in \mathcal{P}})\subseteq\mathcal{L}(\mathbf{G})\), enforces that no new behaviors are introduced. The proof of this direction relies on a stronger inductive invariant that we show for all traces of the subset projection. As discussed in SS5, violations of implementability occur when roles commit to global runs that are inconsistent with the local views of other roles. Our inductive invariant states the exact opposite: that all local views are consistent with one another. First, we formalize the local view of a role. Definition 6.2 (Possible run sets): Let \(\mathbf{G}\) be a global type and \(\mathsf{GAMt}(\mathbf{G})\) be the corresponding state machine. Let \(\mathtt{p}\) be a role and \(w\in\Sigma^{*}_{\text{async}}\) be a word. We define the set of possible runs \(\mathrm{R}^{\mathbf{G}}_{\mathtt{p}}(w)\) as all maximal runs of \(\mathsf{GAMt}(\mathbf{G})\) that are consistent with \(\mathtt{p}\)'s local view of \(w\): \[\mathrm{R}^{\mathbf{G}}_{\mathtt{p}}(w):=\{\rho\text{ is a maximal run of }\mathsf{GAMt}(\mathbf{G})\mid w\!\!\}_{\Sigma_{\mathtt{p}}}\leq\texttt{ split}(\texttt{trace}(\rho))\!\!\}_{\Sigma_{\mathtt{p}}}\enspace.\] While Definition 6.2 captures the set of maximal runs that are consistent with the local view of a single role, we would like to refer to the set of runs that is consistent with the local view of all roles. We formalize this as the intersection of the possible run sets for all roles, which we denote as \[I(w):=\bigcap_{\mathtt{p}\in\mathcal{P}}\mathrm{R}^{\mathbf{G}}_{\mathtt{p}}(w )\enspace.\] With these definitions in hand, we can now formulate our inductive invariant: Lemma 6.3: _Let \(\mathbf{G}\) be a global type and \(\{\!\{\mathscr{P}(\mathbf{G},\mathtt{p})\}\!\}_{\mathtt{p}\in\mathcal{P}}\) be the subset projection. Let \(w\) be a trace of \(\{\!\{\mathscr{P}(\mathbf{G},\mathtt{p})\}\!\}_{\mathtt{p}\in\mathcal{P}}\). It holds that \(I(w)\) is non-empty._ The reasoning for the sufficiency of Lemma 6.3 is included in the proof of Theorem 6.1, found in Appendix C. In the rest of this section, we focus our efforts on how to show this inductive invariant, namely that the intersection of all roles' possible run sets is non-empty. We begin with the observation that the empty trace \(\varepsilon\) is consistent with all runs. 
As a result, \(I(\varepsilon)=\bigcap_{\mathtt{p}\in\mathcal{P}}\mathrm{R}^{\mathbf{G}}_{ \mathtt{p}}(\varepsilon)\) contains all maximal runs in \(\mathsf{GAMt}(\mathbf{G})\). By definition, state machines for global types include at least one run, and the base case is trivially discharged. Intuitively, \(I(w)\) shrinks as more events are appended to \(w\), but we show that at no point does it shrink to \(\emptyset\). We consider the cases where a send or receive event is appended to the trace separately, and show that the intersection set shrinks in a principled way that preserves non-emptiness. In fact, when a trace is extended with a receive event, Receive Validity guarantees that the intersection set does not shrink at all. Lemma 6.4: _Let \(\mathbf{G}\) be a global type and \(\{\!\{\mathscr{P}(\mathbf{G},\mathtt{p})\}\!\}_{\mathtt{p}\in\mathcal{P}}\) be the subset projection. Let \(wx\) be a trace of \(\{\!\{\mathscr{P}(\mathbf{G},\mathtt{p})\}\!\}_{\mathtt{p}\in\mathcal{P}}\) such that \(x\in\Sigma_{\mathtt{\gamma}}\). Then, \(I(w)=I(wx)\)._ To prove this equality, we further refine our characterization of intersection sets. In particular, we show that in the receive case, the intersection between the sender and receiver's possible run sets stays the same, i.e. \[\mathrm{R}_{\mathrm{p}}^{\mathbf{G}}(w)\cap\mathrm{R}_{\mathrm{q}}^{\mathbf{G}}( w)=\mathrm{R}_{\mathrm{p}}^{\mathbf{G}}(wx)\cap\mathrm{R}_{\mathrm{q}}^{\mathbf{G}}( wx)\enspace.\] Note that it is not the case that the receiver only follows a subset of the sender's possible runs. In other words, \(\mathrm{R}_{\mathrm{q}}^{\mathbf{G}}(w)\subseteq\mathrm{R}_{\mathrm{p}}^{ \mathbf{G}}(w)\) is not inductive. The equality above simply states that a receive action can only eliminate runs that have already been eliminated by its sender. Fig. 3 depicts this relation. Given that the intersection set strictly shrinks, the burden of eliminating runs must then fall upon send events. We show that send transitions shrink the possible run set of the sender in a way that is _prefix-preserving_. To make this more precise, we introduce the following definition on runs. Definition 6.5 (Unique splitting of a possible run): Let \(\mathbf{G}\) be a global type, \(\mathtt{p}\) a role, and \(w\in\Sigma_{async}^{*}\) a word. Let \(\rho\) be a possible run in \(\mathrm{R}_{\mathrm{p}}^{\mathbf{G}}(w)\). We define the longest prefix of \(\rho\) matching \(w\): \[\alpha^{\prime}:=\max\{\rho^{\prime}\mid\rho^{\prime}\leq\rho\ \land\ \mathtt{split}(\mathtt{trace}(\rho^{\prime}))\Downarrow_{\Sigma_{\mathrm{p}} }\leq w\Downarrow_{\Sigma_{\mathrm{p}}}\}\enspace.\] If \(\alpha^{\prime}\neq\rho\), we can split \(\rho\) into \(\rho=\alpha\cdot G\xrightarrow{l}G^{\prime}\cdot\beta\) where \(\alpha^{\prime}=\alpha\cdot G\), \(G^{\prime}\) denotes the state following \(G\), and \(\beta\) denotes the suffix of \(\rho\) following \(\alpha\cdot G\cdot G^{\prime}\). We call \(\alpha\cdot G\xrightarrow{l}G^{\prime}\cdot\beta\) the unique splitting of \(\rho\) for \(\mathtt{p}\) matching \(w\). We omit the role \(\mathtt{p}\) when obvious from context. This splitting is always unique because the maximal prefix of any \(\rho\in\mathrm{R}_{\mathrm{p}}^{\mathbf{G}}(w)\) matching \(w\) is unique. 
When role \(\mathtt{p}\) fires a send transition \(\mathtt{p}\triangleright\mathtt{q}!m\), any run \(\rho=\alpha\cdot G\xrightarrow{l}G^{\prime}\cdot\beta\) in \(\mathtt{p}\)'s possible run with \(\mathtt{split}(l)\Downarrow_{\Sigma_{\mathrm{p}}}\neq\mathtt{p}\triangleright \mathtt{q}!m\) is eliminated. While the resulting possible run set could no longer contain runs that end with \(G^{\prime}\cdot\beta\), Send Validity guarantees that it must contain runs that begin with \(\alpha\cdot G\). This is formalized by the following lemma. Lemma 6.6: _Let \(\mathbf{G}\) be a global type and \(\{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! \(\rho\) be a run in \(I(w)\), and \(\alpha\cdot G\xrightarrow{l}G^{\prime}\cdot\beta\) be the unique splitting of \(\rho\) for \(\mathtt{p}\) with respect to \(w\). Then, there exists a run \(\rho^{\prime}\) in \(I(wx)\) such that \(\alpha\cdot G\leq\rho^{\prime}\)._ This concludes our discussion of the send and receive cases in the inductive step to show the non-emptiness of the intersection of all roles' possible run sets. The full proofs and additional definitions can be found in Appendix 0.C. ## 7 Completeness In this section, we prove completeness of our approach. While soundness states that if a global type's subset projection is defined, it then implements the global type, completeness considers the reverse direction. Theorem 7.1 (Completeness): _If \(\mathbf{G}\) is implementable, then \(\{\!\!\{\mathscr{P}(\mathbf{G},\mathtt{p})\}\!\}_{\mathtt{p}\in\mathcal{P}}\) is defined._ We sketch the proof and refer to Appendix 0.D for the full proof. From the assumption that \(\mathbf{G}\) is implementable, we know there exists a witness CSM that implements \(\mathbf{G}\). While the soundness proof picks our subset projection as the existential witness for showing implementability - thereby allowing us to reason directly about a particular implementation - completeness only guarantees the existence of some witness CSM. We cannot assume without loss of generality that this witness CSM is our subset construction; however, we must use the fact that it implements \(\mathbf{G}\) to show that Send and Receive Validity hold on our subset construction. We proceed via proof by contradiction: we assume the negation of Send and Receive Validity for the subset construction, and show a contradiction to the fact that this witness CSM implements \(\mathbf{G}\). In particular, we contradict protocol fidelity (Definition 3.1(i)), stating that the witness CSM generates precisely the language \(\mathcal{L}(\mathbf{G})\). To do so, we exploit a simulation argument: we first show that the negation of Send and Receive Validity forces the subset construction to recognize a trace that is not a prefix of any word in \(\mathcal{L}(\mathbf{G})\). 
Then, we show that this trace must also be recognized by the witness CSM, under the assumption that the witness CSM implements \(\mathbf{G}\). To highlight the constructive nature of our proof, we convert our proof obligation to a witness construction obligation. To contradict protocol fidelity, it suffices to construct a witness trace \(v_{0}\) satisfying two properties, where \(\{\!\!\{B_{\mathtt{p}}\}\!\}_{\mathtt{p}\in\mathcal{P}}\) is our witness CSM: 1. \(v_{0}\) is a trace of \(\{\!\!\{B_{\mathtt{p}}\}\!\}_{\mathtt{p}\in\mathcal{P}}\), and 2. the run intersection set of \(v_{0}\) is empty: \(I(v_{0})=\bigcap_{\mathtt{p}\in\mathcal{P}}\operatorname{R}_{\mathtt{p}}^{ \mathbf{G}}(v_{0})=\emptyset\). We first establish the sufficiency of conditions (a) and (b). Because \(\{\!\!\{B_{\mathtt{p}}\}\!\}_{\mathtt{p}\in\mathcal{P}}\) is deadlock-free by assumption, every prefix extends to a maximal trace. Thus, to prove the inequality of the two languages \(\mathcal{L}(\{\!\!\{B_{\mathtt{p}}\}\!\}_{\mathtt{p}\in\mathcal{P}})\) and \(\mathcal{L}(\mathbf{G})\), it suffices to prove the inequality of their respective prefix sets. In turn, it suffices to show the existence of a prefix of a word in one language that is not a prefix of any word in the other. We choose to construct a prefix in the CSM language that is not a prefix in \(\mathcal{L}(\mathbf{G})\). We again leverage the definition of intersection sets (Definition 6.2) to weaken the property of language non-membership to the property of having an empty intersection set as follows. By the semantics of \(\mathcal{L}(\mathbf{G})\), for any \(w\in\mathcal{L}(\mathbf{G})\), there exists \(w^{\prime}\in\mathtt{split}(\mathcal{L}(\mathsf{GAut}(\mathbf{G})))\) with \(w\sim w^{\prime}\). For any \(w^{\prime}\in\mathtt{split}(\mathcal{L}(\mathsf{GAut}(\mathbf{G})))\), it trivially holds that \(w^{\prime}\) has a non-empty intersection set. Because intersection sets are invariant under the indistinguishability relation \(\sim\), \(w\) must also have a non-empty intersection set. Since intersection sets are monotonically decreasing, if the intersection set of \(w\) is non-empty, then for any \(v\leq w\), the intersection set of \(v\) is also non-empty. Modus tollens of the chain of reasoning above tells us that in order to show a word is not a prefix in \(\mathcal{L}(\mathbf{G})\), it suffices to show that its intersection set is empty. Having established the sufficiency of properties (a) and (b) for our witness construction, we present the steps to construct \(v_{0}\) from the negation of Send and Receive Validity respectively. We start by constructing a trace in \(\{\!\!\{\mathscr{C}(\mathbf{G},\mathtt{p})_{\mathtt{p}}\!\}\!\}_{\mathtt{p} \in\mathcal{P}}\) that satisfies (b), and then show that \(\{\!\!\{B_{\mathtt{p}}\!\}\!\}_{\mathtt{p}\in\mathcal{P}}\) also recognizes the trace, thereby satisfying (a). In both cases, let \(\mathtt{p}\) be the role and \(s\) be the state for which the respective validity condition is violated. 
**Send Validity (Definition 5.2).** Let \(s\xrightarrow{\mathtt{p}\vartriangleleft\mathsf{q}\mathsf{1}m}s^{\prime}\in \delta_{\mathtt{p}}\) be a transition such that \[\text{tr-orig}(s\xrightarrow{\mathtt{p}\vartriangleleft\mathsf{q}\mathsf{1}m}s ^{\prime})\neq s\enspace.\] First, we find a trace \(u\) of \(\{\!\!\{\mathscr{C}(\mathbf{G},\mathtt{p})_{\mathtt{p}}\!\}\!\}_{\mathtt{p}\in \mathcal{P}}\) that satisfies: (1) role \(\mathtt{p}\) is in state \(s\) in the CSM configuration reached via \(u\), and (2) the run of \(\mathsf{GAut}(\mathbf{G})\) on \(u\) visits a state in \(s\setminus\text{tr-orig}(s\xrightarrow{\mathtt{p}\vartriangleleft\mathsf{q} \mathsf{1}m}s^{\prime})\). We obtain such a witness \(u\) from the \(\mathtt{split}(\mathtt{trace}(-))\) of a run prefix of \(\mathsf{GAut}(\mathbf{G})\) that ends in some state in \(s\setminus\text{tr-orig}(s\xrightarrow{\mathtt{p}\vartriangleleft\mathsf{q} \mathsf{1}m}s^{\prime})\). Any prefix thus obtained satisfies (1) by definition of \(\mathscr{C}(\mathbf{G},\mathtt{p})\), and satisfies (2) by construction. Due to the fact that send transitions are always enabled in a CSM, \(u\cdot\mathtt{p}\vartriangleleft\mathsf{q}\mathsf{1}m\) must also be a trace of \(\{\!\!\{\mathscr{C}(\mathbf{G},\mathtt{p})\}\!\}_{\mathtt{p}\in\mathcal{P}}\), thus satisfying property (a) by a simulation argument. We then argue that \(u\cdot\mathtt{p}\vartriangleleft\mathsf{q}\mathsf{1}m\) satisfies property (b), stating that \(I(u\cdot\mathtt{p}\vartriangleleft\mathsf{q}\mathsf{1}m)\) is empty: the negation of Send Validity gives that there exist no run extensions from our candidate state in \(s\setminus\text{tr-orig}(s\xrightarrow{\mathtt{p}\vartriangleleft\mathsf{q} \mathsf{1}m}s^{\prime})\) with the immediate next action \(\mathtt{p}\to\mathtt{q}:m\), and therefore there exists no maximal run in \(\mathsf{GAut}(\mathbf{G})\) consistent with \(u\cdot\mathtt{p}\vartriangleleft\mathsf{q}\mathsf{1}m\). **Receive Validity (Definition 5.3).** Let \(s\xrightarrow{\mathtt{p}\vartriangleleft\mathsf{q}\mathsf{1}\mathsf{?}m_{1}}s_{1}\) and \(s\xrightarrow{\mathtt{p}\vartriangleleft\mathsf{q}\mathsf{2}\mathsf{?}m_{2}}s_{2} \in\delta_{\mathtt{p}}\) be two transitions, and let \(G_{2}\in\text{tr-dest}(s\xrightarrow{\mathtt{p}\vartriangleleft\mathsf{q} \mathsf{2}\mathsf{?}m_{2}}s_{2})\) such that \[\mathtt{q}_{1}\neq\mathtt{q}_{2}\text{ and }\mathtt{q}_{1}\vartriangleleft\mathsf{p} \mathsf{1}m_{1}\in M^{\mathtt{p}}_{(G_{2}\dots)}\enspace.\] Constructing the witness \(v_{0}\) pivots on finding a trace \(u\) of \(\{\!\!\{\mathscr{C}(\mathbf{G},\mathtt{p})\}\!\}_{\mathtt{p}\in\mathcal{P}}\) such that both \(u\cdot\mathtt{p}\vartriangleleft\mathsf{q}\mathsf{1}\mathsf{?}m_{1}\) and \(u\cdot\mathtt{p}\vartriangleleft\mathsf{q}\mathsf{2}\mathsf{?}m_{2}\) are traces of \(\{\!\!\{\mathscr{C}(\mathbf{G},\mathtt{p})\}\!\}_{\mathtt{p}\in\mathcal{P}}\). Equivalently, we show there exists a reachable configuration of \(\{\!\!\{\mathscr{C}(\mathbf{G},\mathtt{p})\}\!\}_{\mathtt{p}\in\mathcal{P}}\) in which \(\mathtt{p}\) can receive either message from distinct senders \(\mathtt{q}_{1}\) and \(\mathtt{q}_{2}\). Formally, the local state of p has two outgoing states labeled with \(\texttt{p}\triangleleft\texttt{q}_{1}?m_{1}\) and \(\texttt{p}\triangleleft\texttt{q}_{2}?m_{2}\), and the channels \(\texttt{q}_{1},\texttt{p}\) and \(\texttt{q}_{2},\texttt{p}\) have \(m_{1}\) and \(m_{2}\) at their respective heads. 
We construct such a \(u\) by considering a run in \(\mathsf{GAut}(\mathbf{G})\) that contains two transitions labeled with \(\texttt{q}_{1}\xrightarrow{}\texttt{p}:m_{1}\) and \(\texttt{q}_{2}\xrightarrow{}\texttt{p}:m_{2}\). Such a run must exist due to the negation of Receive Validity. We start with the split trace of this run, and argue that, from the definition of \(M(\)-\()\) and the indistinguishability relation \(\sim\), we can perform iterative reorderings using \(\sim\) to bubble the send action \(\texttt{q}_{1}\triangleright\texttt{p}!m_{1}\) to the position before the receive action \(\texttt{p}\triangleleft\texttt{q}_{2}?m_{2}\). Then, (a) for \(u\!\cdot\!\texttt{p}\triangleleft\texttt{q}_{1}?m_{1}\) holds by a simulation argument. We then separately show that (b) holds for \(\texttt{p}\triangleleft\texttt{q}_{1}?m_{1}\) using similar reasoning as the send case to complete the proof that \(u\cdot\texttt{p}\triangleleft\texttt{q}_{1}?m_{1}\) suffices as a witness for \(v_{0}\). It is worth noting that the construction of the witness prefix \(v_{0}\) in the proof immediately yields an algorithm for computing counterexample traces to implementability. Remark 7.2 (Mixed Choice is Not Needed to Implement Global Types): Theorem 7.1 basically shows the necessity of Send Validity for implementability. Corollary 5.5 shows that Send Validity precludes states with both send and receive outgoing transitions. Together, this implies that an implementable global type can always be implemented without mixed choice. Note that the syntactic restrictions on global types do not inherently prevent mixed choice states from arising in a role's subset construction, as evidenced by r in the following type: \(\texttt{p}\!\rightarrow\!\texttt{q}\!\!:\!\texttt{l}.\texttt{q}\!\!\rightarrow \!\texttt{r}\!\!:\!\texttt{m}.\texttt{0}+\texttt{p}\!\rightarrow\!\texttt{q} \!\!:\!\texttt{r}.\texttt{r}\!\!\rightarrow\!\texttt{q}\!\!:\!\texttt{m}. \texttt{0}\). Our completeness result thus implies that this type is not implementable. Most MST frameworks [18, 24, 31] implicitly force _no mixed choice_ through syntactic restrictions on local types. We are the first to prove that mixed choice states are indeed not necessary for completeness. This is interesting because mixed choice is known to be crucial for the expressive power of the synchronous \(\pi\)-calculus compared to its asynchronous variant [32]. ## 8 Complexity In this section, we establish PSPACE-completeness of checking implementability for global types. Theorem 8.1: _The MST implementability problem is PSPACE-complete._ Proof: We first establish the upper bound. The decision procedure enumerates for each role p the subsets of \(\mathsf{GAut}(\mathbf{G})\!\!\downarrow_{\texttt{p}}\). This can be done in polynomial space and exponential time. For each p and \(s\subseteq Q_{\mathbf{G}}\), it then (i) checks membership of \(s\) in \(Q_{\texttt{p}}\) of \(\mathscr{C}(\mathbf{G},\texttt{p})\), and (ii) if \(s\in Q_{\texttt{p}}\), checks whether all outgoing transitions of \(s\) in \(\mathscr{C}(\mathbf{G},\texttt{p})\) satisfy Send and Receive Validity. Check (i) can be reduced to the intersection non-emptiness problem for nondeterministic finite state machines, which is in PSPACE [44]. It is easy to see that check (ii) can be done in polynomial time. In particular, the computation of available messages for Receive Validity only requires a single unfolding of every loop in \(\mathbf{G}\). Note that the synthesis problem has the same complexity. 
The subset construction to determinize \(\mathsf{GAut}(\mathbf{G})\!\!\!\downarrow_{\mathtt{p}}\) can be done using a PSPACE transducer. While the output can be of exponential size, it is written on an extra tape that is not counted towards memory usage. However, this means we need to perform the validity checks as described above instead of using the computed deterministic state machines. Second, we prove the lower bound. The proof is inspired by the proof for Theorem 4 [4] in which Alur et al. prove that checking safe realizability of bounded HMSCs is PSPACE-hard. We reduce the PSPACE-complete problem of checking universality of an NFA \(M=(Q,\Delta,\delta,q_{0},F)\) to checking implementability. Without loss of generality, we assume that every state can reach a final state. We construct a global type \(\mathbf{G}\) for \(\mathtt{p},\mathtt{q}\) and \(\mathtt{r}\) that is implementable iff \(\mathcal{L}(M)=\Delta^{\!*}\). For this, we define subterms \(G_{l}\) and \(G_{r}\) as well as \(G_{q}\) for every \(q\in Q\) and \(G_{*}\). We use a fresh letter \(\bot\) to handle final states of \(M\). We also define \(\mathtt{p}\!\!\leftrightarrow\!\mathtt{q}\!:\!m\) as an abbreviation for \(\mathtt{p}\!\!\rightarrow\!\mathtt{q}\!:\!m\).\(\mathtt{q}\!\!\rightarrow\!\mathtt{p}\!:\!m\). \[\mathbf{G}:=G_{l}+G_{r}\] \[G_{l}:=\mathtt{p}\!\!\leftrightarrow\!\mathtt{q}\!:\!l\,.\,\mathtt{p}\! \!\leftrightarrow\!\mathtt{r}\!:\!go\!\cdot\!G_{q_{0}}\] \[G_{q}:=\begin{cases}\sum_{(a,q^{\prime})\in\delta(q)}(\mathtt{r}\!\! \leftrightarrow\!\mathtt{q}\!:\!a\!\cdot\!G_{q^{\prime}})&\text{if }q\notin F\\ \mathtt{r}\!\!\leftrightarrow\!\mathtt{q}\!:\!\bot\,.\,0\,\,+\,\,\sum_{(a,q^{ \prime})\in\delta(q)}(\mathtt{r}\!\!\leftrightarrow\!\mathtt{q}\!:\!a\!\cdot\! G_{q^{\prime}})&\text{if }q\in F\end{cases}\] \[G_{r}:=\mathtt{p}\!\!\leftrightarrow\!\mathtt{q}\!:\!r\!\cdot\!\mathtt{p}\! \leftrightarrow\!\mathtt{r}\!:\!go\!\cdot\!G_{*}\] \[G_{*}:=\mathtt{r}\!\!\leftrightarrow\!\mathtt{q}\!:\!\bot\,.\,0+\sum_{a\in \Delta}(\mathtt{r}\!\!\leftrightarrow\!\mathtt{q}\!:\!a\!\cdot\!G_{*})\] The global type \(\mathbf{G}\) is constructed such that \(\mathtt{p}\) first decides whether words from \(\mathcal{L}(M)\) or from \(\Delta^{\!*}\) are sent subsequently. This decision is known to \(\mathtt{p}\) and \(\mathtt{q}\) but not to \(\mathtt{r}\). The protocol then continues with \(\mathtt{r}\) sending letters from \(\Delta\) to \(\mathtt{q}\), and \(\mathtt{p}\) is not involved. Intuitively, \(\mathtt{q}\) is able to receive these letters if and only if \(\mathcal{L}(M)=\Delta^{\!*}\). From Theorems 6.1 and 7.1, we know that \(\{\!\!\{\!\mathcal{C}(\mathbf{G},\mathtt{p})_{\mathtt{p}}\}\!\! Note that PSPACE-hardness only holds if the size of \(\mathbf{G}\) does not account for common subterms multiple times. Because every message is immediately acknowledged, the constructed global type specifies a universally 1-bounded [23] language, proving that PSPACE-hardness persists for such a restriction. For our construction, it does not hold that \(\mathcal{V}(\mathcal{L}(G_{l})\Downarrow_{\Sigma_{\mathfrak{q},\gamma}})= \mathcal{L}(M)\). We chose so to have a more compact protocol. However, we can easily fix this by sending the decision of \(\mathtt{r}\) first to \(\mathtt{p}\), allowing to omit the messages \(\bot\) to \(\mathtt{q}\). This result and the fact that local languages are preserved by the subset projection (Lemma 4.3) leads to the following observation. Corollary 8: _Let \(\mathbf{G}\) be an implementable global type. 
Then, the subset projection \(\{\!\!\{\mathscr{P}(\mathbf{G},\mathtt{p})\}\!\}_{\mathtt{p}\in\mathcal{P}}\) is a local language preserving implementation for \(\mathbf{G}\), i.e., \(\mathcal{L}(\mathscr{P}(\mathbf{G},\mathtt{p}))=\mathcal{L}(\mathbf{G}) \Downarrow_{\Sigma_{\mathtt{p}}}\) for every \(\mathtt{p}\), and can be computed in PSPACE._ Remark 8: _(MST implementability with directed choice is PSPACE-hard)._ Theorem 8.1 is stated for global types with sender-driven choice but the provided type is in fact directed. Thus, the PSPACE lower bound also holds for implementability of types with directed choice. ## 9 Evaluation We consider the following three aspects in the evaluation of our approach: (E1) difficulty of implementation (E2) completeness, and (E3) comparison to state of the art. For this, we implemented our subset projection in a prototype tool [1, 37]. It takes a global type as input and computes the subset projection for each role. It was straightforward to implement the core functionality in approximately 700 lines of Python3 code closely following the formalization (E1). We consider global types (and communication protocols) from seven different sources as well as all examples from this work (cf. 1st column of Table 1). Our experiments were run on a computer with an Intel Core i7-1165G7 CPU and used at most 100MB of memory. The results are summarized in Table 1. The reported size is the number of states and transitions of the respective state machine, which allows not to account for multiple occurrences of the same subterm. As expected, our tool can project every implementable protocol we have considered (E2). Regarding the comparison against the state of the art (E3), we directly compared our subset projection to the incomplete approach by Majumdar et al. [31], and found that the run times are in the same order of magnitude in general (typically a few milliseconds). However, the projection of [31] fails to project four implementable protocols (including Example 2.1). We discuss some of the other examples in more detail in the next section. We further note that most of the run times reported by Scalas and Yoshida [36] on their model checking based tool are around 1 second and are thus two to three orders of magnitude slower. ## 10 Discussion **Success of Syntactic Projections Depends on Representation.** Let us illustrate how unfolding recursion helps syntactic projection operators to succeed. Consider this implementable global type, which is not syntactically projectable: \[\mathbf{G}_{\mathrm{fold}}:=+\begin{cases}\mathtt{p}\!\rightarrow\mathtt{q}\!: \!\mathtt{o}\!:\,\mu t_{1}\!\!\mathtt{.}\,(\mathtt{p}\!\rightarrow\mathtt{q}\!: \!\mathtt{o}\!:\mathtt{o}\!\rightarrow\mathtt{r}\!:\!\mathtt{o}\!\mathtt{t}_{1} \!\!\mathtt{+}\,\,\mathtt{p}\!\rightarrow\mathtt{q}\!:\!\mathtt{b}\!:\mathtt{q} \!\rightarrow\mathtt{r}\!:\!\mathtt{b}\!:\mathtt{0})\\ \mathtt{p}\!\rightarrow\mathtt{q}\!:\!\mathtt{m}\!:\mathtt{q}\!\rightarrow\! \mathtt{r}\!:\!\mathtt{m}\!:\mathtt{m}\!:\mathtt{d}\!:\!\mathtt{o}\!\rightarrow \mathtt{r}\!:\!\mathtt{o}\!:\mathtt{d}\!\rightarrow\mathtt{r}\!:\!\mathtt{o} \!:\mathtt{t}_{2}\!\!\mathtt{+}\,\,\mathtt{p}\!\rightarrow\mathtt{q}\!:\! \mathtt{b}\!:\mathtt{q}\!\rightarrow\mathtt{r}\!:\!\mathtt{b}\!:\mathtt{0}) \end{cases}.\] Similar to projection by erasure, a syntactic projection erases events that a role is not involved in and immediately tries to _merge_ different branches. 
The merge operator is a partial operator that checks sufficient conditions for implementability. Here, the merge operator fails for \(\mathtt{r}\) because it cannot merge a recursion variable binder and a message reception. Unfolding the global type preserves the represented protocol and resolves this issue: \[\mathbf{G}_{\mathrm{unf}}:=+\begin{cases}\mathtt{p}\!\rightarrow\mathtt{q}\!: \!\mathtt{o}\!:\,\mathtt{o}\!:\,\begin{cases}\mathtt{p}\!\rightarrow\mathtt{q} \!:\!\mathtt{b}\!:\,\mathtt{q}\!\rightarrow\!\mathtt{r}\!:\!\mathtt{b}\!:\, \mathtt{0}\\ \mathtt{p}\!\rightarrow\mathtt{q}\!:\!\mathtt{o}\!:\mathtt{o}\!:\,\mathtt{q} \!\rightarrow\!\mathtt{r}\!:\!\mathtt{o}\!:\,\mu t_{1}\!\!\mathtt{.}\,(\mathtt{p} \!\rightarrow\mathtt{q}\!:\!\mathtt{o}\!:\mathtt{q}\!\rightarrow\mathtt{r}\!: \!\mathtt{o}\!:\mathtt{t}_{2}\!\!\mathtt{+}\,\,\mathtt{p}\!\rightarrow\mathtt{q} \!:\!\mathtt{b}\!:\mathtt{q}\!\rightarrow\mathtt{r}\!:\!\mathtt{b}\!:\mathtt{0}) \end{cases}.\] \begin{table} \begin{tabular}{c l c|c c c c c c|c} \hline \hline Source & Name & Impl. & Subset Proj. & Size & \(|\mathcal{P}|\) & Size & \multicolumn{2}{c|}{[31]} \\ & & & (complete) & & & Proj’s & \multicolumn{2}{c}{(incomplete)} \\ \hline [35] & Instrument Contr. Prot. A & ✓ & ✓ & 0.4 ms & 22 & 3 & 61 & ✓ & 0.2 ms \\ & Instrument Contr. Prot. B & ✓ & ✓ & 0.3 ms & 17 & 3 & 47 & ✓ & 0.1 ms \\ & OAuth2 & ✓ & ✓ & 0.1 ms & 10 & 3 & 23 & ✓ & \(<\)0.1 ms \\ \hline [34] & Multi Party Game & ✓ & ✓ & 0.5 ms & 21 & 3 & 67 & ✓ & 0.1 ms \\ \hline [24] & Streaming & ✓ & ✓ & 0.2 ms & 13 & 4 & 28 & ✓ & \(<\)0.1 ms \\ \hline [13] & Non-Compatible Merge & ✓ & ✓ & 0.2 ms & 11 & 3 & 25 & ✓ & 0.1 ms \\ \hline [45] & Spring-Hibernate & ✓ & ✓ & 1.0 ms & 62 & 6 & 118 & ✓ & 0.7 ms \\ \hline [31] & Group Present & ✓ & ✓ & 0.6 ms & 51 & 4 & 85 & ✓ & 0.6 ms \\ Late Learning & ✓ & ✓ & 0.3 ms & 17 & 4 & 34 & ✓ & 0.2 ms \\ Load Balancer (\(n=10\)) & ✓ & ✓ & 3.9 ms & 36 & 12 & 106 & ✓ & 2.4 ms \\ Logging (\(n=10\)) & ✓ & ✓ & 71.5 ms & 81 & 13 & 322 & ✓ & 10.0 ms \\ \hline [38] & 2 Buyer Protocol & ✓ & ✓ & 0.5 ms & 22 & 3 & 60 & ✓ & 0.2 ms \\ [38] & 2B-Prot. Omit No & ✓ & ✓ & 0.4 ms & 19 & 3 & 56 & \((\diamond)\) & 0.1 ms \\ 2B-Prot. Subscription & ✓ & ✓ & 0.7 ms & 46 & 3 & 95 & \((\diamond)\) & 0.3 ms \\ 2B-Prot. Inner Recursion & ✓ & ✓ & 0.4 ms & 17 & 3 & 51 & ✓ & 0.1 ms \\ \hline & Odd-even (Example 2.1) & ✓ & ✓ & 0.5 ms & 32 & 3 & 70 & \((\diamond)\) & 0.2 ms \\ & \(\mathbf{G}_{r}\) – Receive Val. Violated (§2) & \(\times\) & \(\times\) & 0.1 ms & 12 & 3 & - & \((\diamond)\) & \(<\)0.1 ms \\ New & \(\mathbf{G}_{r}^{\prime}\) – Receive Val. Satisfied (§2) & ✓ & ✓ & 0.2 ms & 16 & 3 & 35 & ✓ & 0.1 ms \\ & \(\mathbf{G}_{s}\) – Send Val. Violated (§2) & \(\times\) & \(\times\) & \(<\)0.1 ms & 8 & 3 & - & \((\diamond)\) & \(<\)0.1 ms \\ & \(\mathbf{G}_{s}^{\prime}\) – Send Val. Satisfied (§2) & ✓ & ✓ & \(<\)0.1 ms & 7 & 3 & 17 & ✓ & \(<\)0.1 ms \\ & \(\mathbf{G}_{\mathrm{fold}}\) (§10) & ✓ & ✓ & 0.4 ms & 21 & 3 & 50 & \((\diamond)\) & 0.1 ms \\ & \(\mathbf{G}_{\mathrm{unf}}\) (§10) & ✓ & ✓ & 0.4 ms & 30 & 3 & 61 & ✓ & 0.2 ms \\ \hline \hline \end{tabular} \end{table} Table 1: Projecting Global Types. For every protocol, we report whether it is implementable ✓ or not \(\times\), the time to compute our subset projection and the generalized projection by Majumdar et al. [31] as well as the outcome as ✓ for “implementable”, \(\times\) for “not implementable” and \((\times)\) for “not known”. 
We also give the size of the protocol (number of states and transitions), the number of roles, the combined size of all subset projections (number of states and transitions). (We refer to Fig. 4 in Appendix E.1 for visual representations of both global types.) This global type can be projected with most syntactic projection operators and shows that the representation of the global type matters for syntactic projectability. However, such unfolding tricks do not always work, e.g. for the odd-even protocol (Example 2.1). We avoid this brittleness using automata and separating the synthesis from checking implementability. **Entailed Properties from the Literature.** We defined implementability for a global type as the question of whether there exists a deadlock-free CSM that generates the same language as the global type. Various other properties of implementations and protocols have been proposed in the literature. Here, we give a brief overview and defer to Appendix E.2 for a detailed analysis. _Progress_[18], a common property, requires that every sent message is eventually received and every expected message will eventually be sent. With deadlock freedom, our subset projection trivially satisfies progress for finite traces. For infinite traces, as expected, fairness assumptions are required to enforce progress. Similarly, our subset projection prevents _unspecified receptions_[14] and _orphan messages_[9, 21], respectively interpreted in our multiparty setting with sender-driven choice. We also ensure that every local transition of each role is _executable_[14], i.e. it is taken in some run of the CSM. Any implementation of a global type has the _stable property_[28], i.e., one can always reach a configuration with empty channels from every reachable configuration. While the properties above are naturally satisfied by our subset projection, the following ones can be checked directly on an implementable global type without explicitly constructing the implementation. A global type is _terminating_[36] iff it does not contain recursion and _never-terminating_[36] iff it does not contain term \(0\). ## 11 Related Work MSTs were introduced by Honda et al. [24] with a process algebra semantics, and the connection to CSMs was established soon afterwards [20]. In this work, we present a complete projection procedure for global types with sender-driven choice. The work by Castagna et al. [13] is the only one to present a projection that aims for completeness. Their semantic conditions, however, are not effectively computable and their notion of completeness is "less demanding than the classical ones" [13]. They consider multiple implementations, generating different sets of traces, to be sound and complete with regard to a single global type [13, Sec. 5.3]. In addition, the algorithmic version of their conditions does not use global information as our message availability analysis does. MST implementability relates to safe realizability of HMSCs, which is undecidable in general but decidable for certain classes [30]. Stutz [38] showed that implementability of global types that are always able to terminate is decidable.1 The EXPSPACE decision procedure is obtained via a reduction to safe realizability of globally-cooperative HMSCs, by proving that the HMSC encoding [39] of any implementable global type is globally-cooperative and generalizing results for infinite executions. Thus, our PSPACE-completeness result both generalizes and tightens the earlier decidability result obtained in [38]. 
Stutz [38] also investigates how HMSC techniques for safe realizability can be applied to the MST setting - using the formal connection between MST implementability and safe realizability of HMSCs - and establishes an undecidability result for a variant of MST implementability with a relaxed indistinguishability relation. Similar to the MST setting, there have been approaches in the HMSC literature that tie branching to a role making a choice. We refer the reader to the work by Majumdar et al. [31] for a survey. Standard MST frameworks project a global type to a set of _local types_ rather than a CSM. Local types are easily translated to FSMs [31, Def.11]. Our projection operator, though, can yield FSMs that cannot be expressed with the limited syntax of local types. Consider this implementable global type: \(\mathtt{p}\rightarrow\mathtt{q}\,\colon\,\mathtt{o}\,\mathtt{0}\,\,\,+\,\, \mathtt{p}\rightarrow\mathtt{q}\,\colon\,\mathtt{m}\,\mathtt{p}\rightarrow \mathtt{r}\,\colon\,\mathtt{b}\,\mathtt{0}\). The subset projection for \(\mathtt{r}\) has two final states connected by a transition labeled \(\mathtt{r}\triangleleft\mathtt{p}\,\mathtt{?}\mathtt{b}\). In the syntax of local types, \(\mathtt{0}\) is the only term indicating termination, which means that final states with outgoing transitions cannot be expressed. In contrast to the syntactic restrictions for global types, which are key to effective verification, we consider local types unnecessarily restrictive. Usually, local implementations are type-checked against their local types and subtyping gives some implementation freedom [12, 16, 17, 27]. However, one can also view our subset projection as a local specification of the actual implementation. We conjecture that subtyping would then amount to a variation of alternating refinement [5]. CSMs are Turing-powerful [11] but decidable classes were obtained for different semantics: restricted communication topology [33, 42], half-duplex communication (only for two roles) [14], input-bounded [10], and unreliable channels [2, 3]. Global types (as well choreography automata [7]) can only express existentially 1-bounded, 1-synchronizable and half-duplex communication [39]. Key to this result is that sending and receiving a message is specified atomically in a global type -- a feature Dagnino et al. [19] waived for their deconfined global types. However, Dagnino et al. [19] use deconfined types to capture the behavior of a given system rather than projecting to obtain a system that generates specified behaviors. This work relies on reliable communication as is standard for MST frameworks. Work on fault-tolerant MST frameworks [8, 43] attempts to relax this restriction. In the setting of reliable communication, both context-free [25, 40] and parametric [15, 22] versions of session types have been proposed to capture more expressive protocols and entire protocol families respectively. Extending our approach to these generalizations is an interesting direction for future work. #### 4.0.1 Acknowledgements. This work is funded in part by the National Science Foundation under grant 1815633. Felix Stutz was supported by the Deutsche Forschungsgemeinschaft project 389792660 TRR 248--CPEC.
2306.16637
On Smirnov's approach to the ABC conjecture
We use algebraic geometry over pointed monoids to give an intrinsic interpretation for the compactification of the spectrum of the ring of integers of a number field $K$, for the projective line over algebraic extensions of $\mathbb{F}_1$ and for maps between them induced by elements of $K$, as introduced by Alexander Smirnov in his approach to the ABC conjecture.
Manoel Jarra
2023-06-29T02:22:30Z
http://arxiv.org/abs/2306.16637v2
# On Smirnov's approach to the ABC conjecture ###### Abstract We use algebraic geometry over pointed monoids to give an intrinsic interpretation for the compactification of the spectrum of the ring of integers of a number field \(K\), for the projective line over algebraic extensions of \(\mathbb{F}_{1}\) and for maps between them induced by elements of \(K\), as introduced by Alexander Smirnov in his approach to the ABC conjecture. ###### Contents * 1 Monoid schemes * 2 Strong congruence spaces * 3 The projective line over algebraic extensions of \(\mathbb{F}_{1}\) * 4 The compactification of \(\operatorname{Spec}\mathcal{O}_{K}\) * 5 Maps from \(\operatorname{\overline{Spec}\mathcal{O}_{K}}\) to \(\mathbb{\widehat{P}}^{1}_{\mathbb{F}_{1}}\) induced by elements of \(K\) ## Introduction In [10], Smirnov proposes an approach to the ABC conjecture based on the analogy between number fields and function fields of algebraic curves (see also [1]). The main idea is to consider the "compactification" \(\operatorname{\overline{Spec}\mathbb{Z}^{\operatorname{Smi}}}\) of \(\operatorname{Spec}\mathbb{Z}\) as a curve over "the field with one element" \(\mathbb{F}_{1}\). The curve \(\operatorname{\overline{Spec}\mathbb{Z}^{\operatorname{Smi}}}\) is defined as the set of non-trivial places of the "function field" \(\mathbb{Q}\), _i.e._, as the set \[\{[2],[3],[5],[7],[11],\dots\}\cup\{[\infty]\},\] where \([p]\) is the class of the \(p\)-adic valuation \[q\mapsto v_{p}(q)=n\quad\text{if}\quad q=p^{n}\frac{a}{b}\ \ \text{ with}\ \ a,b\in\mathbb{Z}\ \ \text{ and}\ \ p\ \big{\downarrow}ab,\] and \([\infty]\) is the class of the achimedean valuation \[q\mapsto v_{\infty}(q)=-\log(|q|).\] The degree of a point is given by \[\deg([p])=\log(p)\quad\text{and}\quad\deg([\infty])=1,\] which is the unique choice (up to common multiple) that satisfies \[\sum_{[x]}v_{x}(q)\deg([x])=0\quad\text{ for all }\quad q\in\mathbb{Q}^{*}.\] The set of non-zero elements of the "field of constants" of \(\mathbb{Q}\) is \[\{q\in\mathbb{Q}^{*}\mid v_{x}(q)=0\text{ for all }[x]\}=\{1,-1\}.\] The projective line \(\mathbb{P}^{1,\text{Smi}}_{\mathbb{F}_{1}}\) is defined as \(\big{\{}[n]\mid n\in\mathbb{N}\big{\}}\cup\{[\infty]\}\), with degree map given by \[\deg([0])=1=\deg([\infty])\quad\text{ and }\quad\deg([n])=\phi(n)\quad\text{ for }\quad 0<n<\infty,\] where \(\phi\) is the Euler function. Each "non-constant" \(q=\frac{a}{b}\in\mathbb{Q}^{*}\backslash\{1,-1\}\) with \(\gcd(a,b)=1\) defines a map \(\varphi_{q}\) from \(\overline{\operatorname{Spec}\mathbb{Z}}^{\text{Smi}}\) to \(\mathbb{P}^{1,\text{Smi}}_{\mathbb{F}_{1}}\), given by \[[x] \longmapsto \left\{\begin{array}{ll}[0]&\text{if }x=p\neq\infty\quad\text{and }p\mid a\\ [\infty]&\text{if }x=p\neq\infty\quad\text{and }p\mid b\\ [n]&\text{if }x=p\neq\infty\quad\text{and }p\nmid ab,\text{ where }n=\operatorname{ord}\bigl{(}\overline{a}\overline{b}^{-1}\bigr{)} \text{ in }\mathbb{F}_{p}^{*}\\ [0]&\text{if }x=\infty\quad\quad\text{and }a<b\\ [\infty]&\text{if }x=\infty\quad\quad\text{and }a>b.\end{array}\right.\] The ramification index of a point in \(\overline{\operatorname{Spec}\mathbb{Z}}^{\text{Smi}}\) is \[e_{[p]}:= \left\{\begin{array}{ll}\max\{k\mid p^{k}\text{ divides }a\}&\text{if } \varphi_{q}([p])=[0]\\ \max\{k\mid p^{k}\text{ divides }b\}&\text{if }\varphi_{q}([p])=[\infty]\\ \max\{k\mid p^{k}\text{ divides }a^{n}-b^{n}\}&\text{if }\varphi_{q}([p])=[n]\end{array}\right.\] if \(p\) is a prime number, and \(e_{[\infty]}=-\log(|q|)\). 
In analogy with the function field case, where the degree of a non-constant map between curves is the degree of its divisor of zeros, the degree of \(\varphi_{q}\) is \[\deg(\varphi_{q})=\sum_{v_{x}(q)>0}v_{x}(q)\deg([x]).\] The _arithmetic defect_ of a point \([x]\in\overline{\operatorname{Spec}\mathbb{Z}}^{\text{Smi}}\) is \[\delta_{[x]}:=\frac{(e_{[x]}-1)\deg([x])}{\deg(\varphi_{q})}.\] **Conjecture** ([15]).: _For each \(\epsilon>0\), there exists a constant \(C\) satisfying_ \[\sum_{[x]\in X(q)}\delta_{[x]}\leq 2+\epsilon+\frac{C}{\deg(\varphi_{q})},\] _where \(X(q)\) is the set \(\{[x]\in\overline{\operatorname{Spec}\mathbb{Z}}\mid\varphi_{q}([x])\text{ has degree }1\}\)._ **Theorem** ([15]).: _If the conjecture above holds, then the ABC conjecture is true._ **Remark**.: The conjecture above can be understood as analogous to the inequality \[\sum_{\mathfrak{P}\text{ prime of }L}\frac{\bigl(e(\mathfrak{P}/P)-1\bigr)\deg_{L}\mathfrak{P}}{[L:K]}\leq 2-2g_{K}+\frac{2g_{L}-2}{[L:K]},\] which follows from the well-known Riemann-Hurwitz formula for finite, separable, geometric extensions of function fields \(L/K\) (see [11, Thm. 7.16]). In this paper we link Smirnov's approach with strong congruence spaces of monoid schemes, which are topological spaces that locally characterize maps from the corresponding coordinate monoids into fields. The next result shows how to recover \(\mathbb{P}^{1,\operatorname{Smi}}_{\mathbb{F}_{1}}\) by using \(\operatorname{SCong}\mathbb{P}^{1}_{\mathbb{F}_{1}}\), the strong congruence space of the projective line over \(\mathbb{F}_{1}\). **Theorem A** (Corollary 3.4).: _There is a canonical inclusion \(\mathbb{P}^{1,\operatorname{Smi}}_{\mathbb{F}_{1}}\hookrightarrow\operatorname{SCong}\mathbb{P}^{1}_{\mathbb{F}_{1}}\) whose image consists of all non-generic points of \(\operatorname{SCong}\mathbb{P}^{1}_{\mathbb{F}_{1}}\)._ ### Compactification of \(\operatorname{Spec}\mathbb{Z}\) In Section 4, we construct the compactification of \(\operatorname{Spec}\mathbb{Z}\) as follows: by forgetting the addition on the structural sheaf of \(\operatorname{Spec}\mathbb{Z}\), one has the monoidal space \((\operatorname{Spec}\mathbb{Z})^{\bullet}\). As a topological space, \(\overline{\operatorname{Spec}\mathbb{Z}}\) is the disjoint union \((\operatorname{Spec}\mathbb{Z})\sqcup\{\infty\}\), with cofinite topology on \((\operatorname{Spec}\mathbb{Z}\setminus\{0\})\sqcup\{\infty\}\) and generic point \(\{0\}\). The sheaf \(\mathcal{O}_{\overline{\operatorname{Spec}\mathbb{Z}}}\) is given by \[(\overline{\operatorname{Spec}\mathbb{Z}})\backslash\{\infty\}\simeq(\operatorname{Spec}\mathbb{Z})^{\bullet}\quad\text{and}\quad\mathcal{O}_{\overline{\operatorname{Spec}\mathbb{Z}},\infty}:=[-1,1]\cap\mathbb{Q}.\] It comes with a canonical injection \(\overline{\operatorname{Spec}\mathbb{Z}}^{\operatorname{Smi}}\hookrightarrow\overline{\operatorname{Spec}\mathbb{Z}}\) whose image is the set of non-generic points of \(\overline{\operatorname{Spec}\mathbb{Z}}\). ### Maps from \(\overline{\operatorname{Spec}\mathbb{Z}}\) to \(\operatorname{SCong}\mathbb{P}^{1}_{\mathbb{F}_{1}}\) Given \(q\in\mathbb{Q}^{*}\backslash\{1,-1\}\), we consider the morphism of pointed monoids \[\begin{array}{ccc}\sigma_{q}:&\mathbb{F}_{1}[T]&\longrightarrow&(\mathbb{Z}[q])^{\bullet}\\ &T&\longmapsto&q,\end{array}\] where \((\mathbb{Z}[q])^{\bullet}\) is the multiplicative pointed monoid of \(\mathbb{Z}[q]\). 
It induces a continuous map \(\sigma_{q}^{*}:\operatorname{Spec}\mathbb{Z}[q]\to\operatorname{SCong}\mathbb{ F}_{1}[T]\) that sends a prime ideal \(\mathfrak{p}\) to the strong prime congruence \[\sigma^{*}(\mathfrak{p}):=\big{\{}(a,b)\in\mathbb{F}_{1}[T]\times\mathbb{F}_{1 }[T]\bigm{|}\overline{\sigma(a)}=\overline{\sigma(b)}\text{ in }\mathbb{Z}[q]/ \mathfrak{p}\big{\}}.\] The next result shows how the map \(\varphi_{q}\) fits in this context (see Theorem 5.2). **Theorem B**.: _There exists a continuous map \(\tilde{q}:\operatorname{\overline{Spec}\mathbb{Z}}\to\operatorname{SCong} \mathbb{P}^{1}_{\mathbb{F}_{1}}\) that extends \(\sigma_{q}^{*}\) and such that the diagram_ _commutes._ ### Projective line over algebraic extensions of \(\mathbb{F}_{1}\) In [10], Smirnov also introduces the projective line over certain "algebraic extensions of the field with one element". Let \(\mu_{\infty}\) be the group of complex roots of unity \(\{z\in\mathbb{C}\mid z^{n}=1\text{ for some }n\geq 1\}\) and define \(\mathbb{F}_{1^{\infty}}:=\mu_{\infty}\cup\{0\}\). The set of "geometric points" of \(\mathbb{P}^{1}\) is \[\mathbb{P}^{1,\operatorname{Smi}}(\mathbb{F}_{1^{\infty}}):=\mathbb{F}_{1^{ \infty}}\sqcup\{\infty\}.\] The Galois group \(G:=\operatorname{Gal}\bigl{(}\mathbb{Q}(\mu_{\infty})/\mathbb{Q}\bigr{)}\) acts on \(\mathbb{F}_{1^{\infty}}\), and hence on \(\mathbb{P}^{1,\operatorname{Smi}}(\mathbb{F}_{1^{\infty}})\), with trivial action on \(\infty\). For each subgroup \(\Gamma\) of \(G\), a "schematic point" in \(\mathbb{P}^{1,\operatorname{Smi}}_{\Gamma}\) is an orbit of the action \[\Gamma\times\mathbb{P}^{1,\operatorname{Smi}}(\mathbb{F}_{1^{\infty}}) \longrightarrow\mathbb{P}^{1,\operatorname{Smi}}(\mathbb{F}_{1^{\infty}}),\] _i.e._, Smirnov defines \(\mathbb{P}^{1,\operatorname{Smi}}_{\Gamma}:=\mathbb{P}^{1,\operatorname{Smi} }(\mathbb{F}_{1^{\infty}})/\Gamma\). **Proposition C** (Corollary 3.5).: _There is a canonical inclusion_ \[i_{\infty}:\mathbb{P}^{1,\operatorname{Smi}}(\mathbb{F}_{1^{\infty}}) \rightarrow\operatorname{\mathsf{SCong}}_{\mathbb{F}_{1^{\infty}}}\mathbb{P}^ {1}_{\mathbb{F}_{1^{\infty}}}\] _whose image consists of all non-generic points of \(\operatorname{\mathsf{SCong}}_{\mathbb{F}_{1^{\infty}}}\mathbb{P}^{1}_{ \mathbb{F}_{1^{\infty}}}\)._ Given a subgroup \(\Gamma\) of \(G\), we define \[\mathbb{F}_{\Gamma}:=\{\lambda\in\mathbb{F}_{1^{\infty}}\mid h\cdot\lambda= \lambda\text{ for all }h\in\Gamma\}.\] The inclusion \(\mathbb{F}_{\Gamma}\hookrightarrow\mathbb{F}_{1^{\infty}}\) induces a surjective continuous map \[\Phi_{\mathbb{F}_{\Gamma}}:\operatorname{\mathsf{SCong}}_{\mathbb{F}_{1^{ \infty}}}\mathbb{P}^{1}_{\mathbb{F}_{1^{\infty}}}\rightarrow\operatorname{ \mathsf{SCong}}_{\mathbb{F}_{\Gamma}}\mathbb{P}^{1}_{\mathbb{F}_{\Gamma}}.\] The next result, which follows from Theorem 3.8 and Remark 3.10, shows how we recover Smirnov's projective lines by using strong congruence spaces. 
**Theorem D**.: _If \(\Gamma\) is a closed subgroup of \(\operatorname{Gal}\bigl{(}\mathbb{Q}(\mu_{\infty})/\mathbb{Q}\bigr{)}\), then there exists an injective map \(j_{\Gamma}:\mathbb{P}^{1,\operatorname{Smi}}_{\Gamma}\rightarrow\operatorname{ \mathsf{SCong}}_{\mathbb{F}_{\Gamma}}\mathbb{P}^{1}_{\mathbb{F}_{\Gamma}}\) whose image consists of all non-generic points of \(\operatorname{\mathsf{SCong}}_{\mathbb{F}_{\Gamma}}\mathbb{P}^{1}_{\mathbb{F}_{ \Gamma}}\), and such that the diagram_ _commutes._ **Acknowledgements.** The author thanks Oliver Lorscheid for several conversations and for his help with preparing this text. The author also thanks Eduardo Vital for useful conversations. The present work was carried out with the support of CNPq, National Council for Scientific and Technological Development - Brazil. ## 1. Monoid schemes In this section we recall the theory of monoid schemes. For more details, see [10], [11] and [1]. ### Basic definitions A _pointed monoid_ is a set \(A\) equipped with an associative binary operation \[A\times A \to A\] \[(a,b) \mapsto a\cdot b,\] called _multiplication_ or _product_, such that \(A\) has an element \(0\), called _absorbing element_ or _zero_, and an element \(1\), called _one_, satisfying \(0\cdot a=a\cdot 0=0\) and \(1\cdot a=a\cdot 1=a\) for all \(a\) in \(A\). We sometimes use \(ab\) to denote \(a\cdot b\). A pointed monoid is _commutative_ if \(a\cdot b=b\cdot a\) for all \(a,b\) in \(A\). Throughout this text all monoids are commutative. A pointed monoid is _integral_ if \(ab=0\) implies \(a=0\) or \(b=0\). An _ideal_ of \(A\) is a set \(I\subseteq A\) such that \(0\in I\) and \(ax\in I\) for all \(a\in A\) and \(x\in I\). Every ideal \(I\) induces an equivalence relation \(\sim\) on \(A\) generated by \(\{(0,x)\in A\times A\mid x\in I\}\). The quotient set \(A/I:=A/\sim\) is a pointed monoid with operation given by \([a]\cdot[b]:=[ab]\), absorbing element \([0]\) and one \([1]\). The ideal \(I\) is _prime_ if \(A/I\) is integral. The _ideal generated_ by a subset \(E\subseteq A\) is the intersection \(\langle E\rangle\) of all ideals of \(A\) that contain \(E\), which is the smallest ideal containing \(E\). We use \(\langle a_{i}\mid i\in I\rangle\) to denote the ideal \(\langle\{a_{i}\}_{i\in I}\rangle\). A _morphism_ of pointed monoids is a multiplicative map preserving zero and one. We use \(\mathbb{F}_{1}\) to denote the initial object \(\{0,1\}\) of the category of pointed monoids. An element \(u\in A\) is _invertible_ if there exists \(y\in A\) such that \(uy=1\). We denote the set of invertible elements of \(A\) by \(A^{\times}\). The set \(A\backslash A^{\times}\) is a prime ideal and contains every proper ideal of \(A\). A morphism of pointed monoids \(f:A\to B\) is _local_ if \(f^{-1}(B^{\times})=A^{\times}\). A _pointed group_ is a pointed monoid \(A\) such that \(A^{\times}=A\backslash\{0\}\). If \(f:A\to B\) is a morphism and \(I\) is an ideal of \(B\), then the set \[f^{*}(I):=\{a\in A\mid f(a)\in I\}\] is an ideal of \(A\), called _pullback of \(I\) along \(f\)_. If \(I\) is prime, then \(f^{*}(I)\) is also prime. A _multiplicative subset_ of \(A\) is a multiplicatively closed subset \(S\subseteq A\) that contains \(1\). We define the _localization_\(S^{-1}A\) in the same way as in the case of rings. The _group of fractions_ of an integral pointed monoid \(A\) is the localization \(\operatorname{Frac}(A):=(A\backslash\{0\})^{-1}A\). It is a pointed group and the natural map \(A\to\operatorname{Frac}(A)\) is injective. 
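To illustrate these definitions, consider for instance the pointed monoid \(\mathbb{F}_{1}[T]:=\{0,1,T,T^{2},\dots\}\) with the obvious multiplication. It is integral, its invertible elements are \(\mathbb{F}_{1}[T]^{\times}=\{1\}\), and its non-zero proper ideals are the sets \(\langle T^{k}\rangle=\{0\}\cup\{T^{j}\mid j\geq k\}\) for \(k\geq 1\). Among them only \(\langle T\rangle=\mathbb{F}_{1}[T]\backslash\{1\}\) is prime, since for \(k\geq 2\) the quotient \(\mathbb{F}_{1}[T]/\langle T^{k}\rangle\) satisfies \(\overline{T}\cdot\overline{T}^{\,k-1}=0\) with both factors non-zero; the zero ideal \(\{0\}\) is also prime because \(\mathbb{F}_{1}[T]\) is integral. Its group of fractions is \(\operatorname{Frac}(\mathbb{F}_{1}[T])=\{0\}\cup\{T^{k}\mid k\in\mathbb{Z}\}=\mathbb{F}_{1}[T^{\pm}]\).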
### The geometry of monoids We use \(\operatorname{MSpec}A\) to denote the set of prime ideals of the pointed monoid \(A\). For \(h\in A\), we denote the set \(\{I\in\operatorname{MSpec}A\mid h\notin I\}\) by \(U_{h}\). We endow \(\operatorname{MSpec}A\) with the topology generated by the basis \(\{U_{a}\mid a\in A\}\) and with the sheaf of pointed monoids \(\mathcal{O}_{\operatorname{MSpec}A}\) given by \(\Gamma(\mathcal{O}_{\operatorname{MSpec}A},U_{h})=A[h^{-1}]\). A _monoidal space_ is a pair \((X,\mathcal{O}_{X})\) where \(X\) is a topological space and \(\mathcal{O}_{X}\) is a sheaf of pointed monoids on \(X\). A _morphism_ of monoidal spaces \((X,\mathcal{O}_{X})\to(Y,\mathcal{O}_{Y})\) is a pair \((\varphi,\varphi^{\#})\) where \(\varphi:X\to Y\) is continuous and \(\varphi^{\#}:\mathcal{O}_{Y}\to\varphi_{*}\mathcal{O}_{X}\) is a morphism of sheaves of pointed monoids such that the induced map between stalks \(\varphi_{x}^{\#}:\mathcal{O}_{Y,\varphi(x)}\to\mathcal{O}_{X,x}\) is local for every \(x\in X\). An _affine monoid scheme_ is a monoidal space isomorphic to \((\operatorname{MSpec}A,\mathcal{O}_{\operatorname{MSpec}A})\) for some pointed monoid \(A\). A _monoid scheme_ is a monoidal space that has an open covering by affine monoid schemes. ### Base extension to rings Throughout this text all rings are commutative and with unit. Let \(A\) be a pointed monoid. Define \(A_{\mathbb{Z}}\) as the ring \(\mathbb{Z}[A]/\langle 1_{\mathbb{Z}}.0_{A}\rangle\). The assignment \(A\mapsto A_{\mathbb{Z}}\) extends to a functor from the category of pointed monoids to rings, which is left-adjoint to the forgetful functor \(B\mapsto B^{\bullet}\) from rings to pointed monoids. For an affine monoid scheme \(U=\operatorname{MSpec}A\), we use \(U_{\mathbb{Z}}\) to denote the affine Grothendieck scheme \(\operatorname{Spec}(A_{\mathbb{Z}})\). We also define a functor \(Y\mapsto Y^{\bullet}\) from schemes to monoidal spaces by simply forgetting the additive structure of \(\mathcal{O}_{Y}\). Let \(X\) be a monoid scheme. Let \(\mathcal{U}_{X}\) be the category of all affine open subschemes of \(X\), with inclusions as morphisms. We define the _base extension of \(X\) to \(\mathbb{Z}\)_ (or \(\mathbb{Z}\)_-realization of \(X\)_) as the Grothendieck scheme \[X_{\mathbb{Z}}:=\operatorname*{colim}_{U\in\mathcal{U}_{X}}U_{\mathbb{Z}}.\] If \(\{U_{i}\}_{i\in I}\) is an affine covering of a monoid scheme \(X\), then \(\{U_{i,\mathbb{Z}}\}_{i\in I}\) is an affine covering of \(X_{\mathbb{Z}}\). If \(U,V\) are affine open subsets of \(X\), then \(U_{\mathbb{Z}}\cap V_{\mathbb{Z}}=(U\cap V)_{\mathbb{Z}}\) in \(X_{\mathbb{Z}}\) (_cf._[12, Cor. 3.3] and [15, Section 5]). **Proposition 1.1**.: _There exists a morphism of monoidal spaces \(p:X_{\mathbb{Z}}^{\bullet}\to X\), functorial in \(X\), such that if \(X=\operatorname{MSpec}A\) is affine, then_ \[\begin{array}{ccc}p:&(\operatorname{Spec}A_{\mathbb{Z}})^{\bullet}&\longrightarrow &\operatorname{MSpec}A\\ &Q&\longmapsto&Q\cap A\end{array}\] _and \(p^{\#}\) is induced by \(A\hookrightarrow A_{\mathbb{Z}}\)._ Proof.: See [11, Prop. 1.1]. ## 2. Strong congruence spaces We explain the construction of strong congruence spaces introduced by the author in [11], which is based on the congruence space of Lorscheid and Ray (see [14]). Let \(A\) be a pointed monoid. A _congruence_ on \(A\) is an equivalence relation \(\mathfrak{c}\subseteq A\times A\) such that \((ab,ac)\in\mathfrak{c}\) whenever \((b,c)\in\mathfrak{c}\) and \(a\in A\). 
The _null ideal_ of \(\mathfrak{c}\) is the set \(I_{\mathfrak{c}}:=\{a\in A\mid(a,0)\in\mathfrak{c}\}\). The _congruence generated_ by a subset \(E\subseteq A\times A\) is the intersection \(\langle E\rangle\) of all congruences on \(A\) that contain \(E\), which is the smallest congruence containing \(E\). The _trivial congruence_ is \(\langle\emptyset\rangle=\{(a,b)\in A\times A\mid a=b\}\). The quotient set \(A/\mathfrak{c}\) is a pointed monoid with product \([a]\cdot[b]:=[ab]\), absorbing element \([0]\) and one \([1]\). The congruence \(\mathfrak{c}\) is _prime_ if \(A/\mathfrak{c}\) is integral. If \(\mathfrak{c}\) is prime, then \(I_{\mathfrak{c}}\) is a prime ideal. Let \(f:A\to B\) be a morphism of pointed monoids and \(\mathfrak{d}\) a congruence on \(B\). The _pullback of \(\mathfrak{d}\) along \(f\)_ is the set \[f^{*}(\mathfrak{d}):=\{(x,y)\in A\times A\mid(f(x),f(y))\in\mathfrak{d}\},\] which is a congruence on \(A\). The _congruence kernel_ of \(f\) is \(\operatorname{congker}(f):=f^{*}(\mathfrak{d}_{\operatorname{triv}})\), where \(\mathfrak{d}_{\operatorname{triv}}\) is the trivial congruence on \(B\). **Notation.** If \(\mathfrak{c}\) is a congruence, we sometimes use \(a\sim_{\mathfrak{c}}b\) or \(a\sim b\) instead of \((a,b)\in\mathfrak{c}\). We also use \(\langle a_{i}\sim b_{i}\mid i\in I\rangle\) to denote the congruence \(\langle\{(a_{i},b_{i})\}_{i\in I}\rangle\). **Definition 2.1**.: A _domain_ is a pointed monoid \(A\) that is isomorphic to a pointed submonoid of \(K^{\bullet}\) for some field \(K\). A _strong_ prime congruence on \(A\) is a congruence \(\mathfrak{c}\) such that \(A/\mathfrak{c}\) is a domain. We denote the set of strong prime congruences on \(A\) by \(\operatorname{SCong}A\). **Proposition 2.2**.: _Let \(A\) be a pointed monoid. Then the following are equivalent:_ * \(A\) _is a domain;_ * \(A\) _is isomorphic to a pointed submonoid of_ \(K^{\bullet}\) _for some field_ \(K\) _of characteristic zero;_ * \(A\) _is integral and_ \(\#\{x\in A\mid x^{n}=\alpha\}\leq n\) _for every_ \(\alpha\) _in_ \(A\) _and_ \(n\geq 1\) Proof.: It follows from Rem. 3.2, Rem. 4.6 and Prop. 4.5 of [1]. Let \(k\) be a pointed monoid and \(A\) a \(k\)-algebra. A \(k\)_-congruence_ on \(A\) is a strong prime congruence \(\mathfrak{c}\) such that the natural map \(k\to A/\mathfrak{c}\) is injective. The set of \(k\)-congruences of \(A\), endowed with the topology generated by the sets \[U_{a,b}:=\{\mathfrak{c}\mid(a,b)\notin\mathfrak{c}\},\] is denoted by \(\operatorname{SCong}_{k}A\). Note that \(\operatorname{SCong}A=\operatorname{SCong}_{\mathbb{F}_{1}}A\), and \(\operatorname{SCong}_{k}A=\emptyset\) if \(k\) is not a domain or if the map \(k\to A\) is not injective. There exists a functor \(\operatorname{SCong}_{k}(-)\) from \(k\)-schemes to topological spaces such that: 1. If \(X=\operatorname{MSpec}A\) is affine, then \(\operatorname{SCong}_{k}X=\operatorname{SCong}_{k}A\); 2. If \(\varphi:\operatorname{MSpec}B\to\operatorname{MSpec}A\) is a morphism of affine \(k\)-schemes, then \(\operatorname{SCong}_{k}\varphi=\big{(}\varphi^{\#}(\operatorname{MSpec}A) \big{)}^{*}:\operatorname{SCong}_{k}B\to\operatorname{SCong}_{k}A\); 3. If \(\{U_{i}\}_{i\in I}\) is an open covering of \(X\), then \(\{\operatorname{SCong}_{k}U_{i}\}_{i\in I}\) is an open covering of \(\operatorname{SCong}_{k}X\). 
The space \(\operatorname{SCong}_{k}X\) comes with a continuous map \(\pi_{X}:\operatorname{SCong}_{k}X\to X\), defined as follows: given \(\tilde{x}\in\operatorname{SCong}_{k}X\), the image of \(\tilde{x}\) by \(\pi_{X}\) is \(\pi_{X}(\tilde{x})=I_{\mathfrak{c}}\) if \(U\subseteq X\) is an open affine and \(\mathfrak{c}\) is a \(k\)-congruence on \(\Gamma U\) such that \[\tilde{x}=\mathfrak{c}\in\operatorname{SCong}_{k}\Gamma U\subseteq \operatorname{SCong}_{k}X.\] We define the _residue field_ of \(\tilde{x}\) as the pointed group \(\kappa(\tilde{x}):=\operatorname{Frac}(\Gamma U/\mathfrak{c})\), which does not depend on the choice of \(U\). A morphism of \(k\)-schemes \(\varphi:X\to Y\) induces an injective morphism of \(k\)-algebras \[\kappa\big{(}\operatorname{SCong}_{k}(\varphi)(\tilde{x})\big{)}\hookrightarrow \kappa(\tilde{x})\] for each \(\tilde{x}\in\operatorname{SCong}_{k}X\). ### The map \(\gamma:X_{\mathbb{Z}}\to\operatorname{SCong}X\) Let \(A\) be a pointed monoid. The canonical inclusion \(i:A\to A_{\mathbb{Z}}^{\bullet}\) induces a continuous map \(i^{*}:\operatorname{Spec}(A_{\mathbb{Z}})\to\operatorname{SCong}A\) that sends a prime ideal \(\mathfrak{p}\) to the strong prime congruence \[i^{*}(\mathfrak{p}):=\operatorname{congker}(\pi_{\mathfrak{p}}\circ i)=\{(a,b )\in A\times A\mid\overline{1.a}=\overline{1.b}\text{ in }A_{\mathbb{Z}}/\mathfrak{p}\},\] where \(\pi_{\mathfrak{p}}:A_{\mathbb{Z}}\to A_{\mathbb{Z}}/\mathfrak{p}\) is the natural projection. **Theorem 2.3**.: _Let \(X\) be a monoid scheme. The underlying continuous map of the morphism of monoidal spaces \(p:X_{\mathbb{Z}}^{\bullet}\to X\) factorizes through a continuous map_ \[\gamma:X_{\mathbb{Z}}\to\operatorname{SCong}X\] _characterized by the property that if \(U\) is an open affine of \(X\), then \(\gamma(U_{\mathbb{Z}})\subseteq\operatorname{SCong}U\) and \(\gamma|_{U_{\mathbb{Z}}}:U_{\mathbb{Z}}\to\operatorname{SCong}U\) is induced by the inclusion \(\Gamma U\to(\Gamma U_{\mathbb{Z}})^{\bullet}\)._ _The map \(\gamma\) induces an injective morphism of pointed groups \(\gamma_{x}:\kappa(\gamma(x))\to\kappa(x)^{\bullet}\) for every \(x\in X_{\mathbb{Z}}\)._ Proof.: See [1, Thm. 4.1 and Prop. 4.2]. ## 3 The projective line over algebraic extensions of \(\mathbb{F}_{1}\) Fix a pointed submonoid of \(\mathbb{F}\subseteq\mathbb{F}_{1^{\infty}}\) and note that \(\mathbb{F}\) is a pointed group. 
The _projective line over \(\mathbb{F}\)_ is the monoid scheme \[\mathbb{P}_{\mathbb{F}}^{1}=D_{+}(Y)\cup D_{+}(X),\] where \(D_{+}(Y)=\operatorname{MSpec}\mathbb{F}[X/Y]\) and \(D_{+}(X)=\operatorname{MSpec}\mathbb{F}[Y/X]\), with the identification \[\operatorname{MSpec}\mathbb{F}[X/Y]\ \supset\ U_{X/Y}\ =\ \operatorname{MSpec}\mathbb{F}[(X/Y)^{\pm 1}]\ =\ \operatorname{MSpec}\mathbb{F}[(Y/X)^{\pm 1}]\ =\ U_{Y/X}\ \subset\ \operatorname{MSpec}\mathbb{F}[Y/X],\] where \(Y/X\) is identified with \((X/Y)^{-1}\). **Notation.** Let \(\widetilde{\mathbb{P}}^{1}_{\mathbb{F}}:=\mathsf{SCong}_{\mathbb{F}}\,\mathbb{P}^{1}_{\mathbb{F}}\). We introduce the following notation for the non-generic points of \(\widetilde{\mathbb{P}}^{1}_{\mathbb{F}}\): * \([n,\lambda]:=\langle(X/Y)^{n}\sim\lambda\rangle\in D_{+}(Y)\cap D_{+}(X)\) for \(n\geq 1\) and \(\lambda\in\mathbb{F}^{\times}\) such that there is no divisor \(d>1\) of \(n\) and \(\theta\in\mathbb{F}\) satisfying \(\theta^{d}=\lambda\); * \([0]:=\langle X/Y\sim 0\rangle\in D_{+}(Y)\); * \([\infty]:=\langle 0\sim Y/X\rangle\in D_{+}(X)\). Note that if \([n,\lambda]\in\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1^{\infty}}}\), then \(n=1\). In this case we use \([\lambda]\) instead of \([1,\lambda]\). As \(\mathbb{F}^{\times}_{1}=\{1\}\), if \([n,\lambda]\in\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1}}\), then \(\lambda=1\). In this case we use \([n]\) instead of \([n,1]\). 
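For instance, over \(\mathbb{F}_{1}\) the point \([2]=\langle(X/Y)^{2}\sim 1\rangle\) is indeed a strong prime congruence: the quotient of \(\mathbb{F}_{1}[X/Y]\) by it is \(\{0,1,\overline{X/Y}\}\) with \(\overline{X/Y}^{2}=1\), which embeds into \(\mathbb{Q}^{\bullet}\) by sending \(\overline{X/Y}\) to \(-1\). More generally, the quotient by \([n]\) is isomorphic to \(\mu_{n}\cup\{0\}\subseteq\mathbb{C}^{\bullet}\), so each \([n]\) is a point of \(\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1}}\).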
**Corollary 3.4**.: _The map \(i:\mathbb{P}^{1,\mathrm{Smi}}_{\mathbb{F}_{1}}\to\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1}}\), given by \(i([n])=[n]\), is an injection whose image is the set of non-generic points of \(\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1}}\)._ **Corollary 3.5**.: _The map \(i_{\infty}:\mathbb{P}^{1,\mathrm{Smi}}(\mathbb{F}_{1^{\infty}})\to\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1^{\infty}}}\), given by \(i_{\infty}(\lambda)=[\lambda]\), is an injection whose image is the set of non-generic points of \(\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1^{\infty}}}\)._ The inclusion \(\mathbb{F}\hookrightarrow\mathbb{F}_{1^{\infty}}\) induces a morphism \(\mathbb{P}^{1}_{\mathbb{F}_{1^{\infty}}}\to\mathbb{P}^{1}_{\mathbb{F}}\) and, consequently, a continuous map \(\mathsf{SCong}_{\mathbb{F}}\,\mathbb{P}^{1}_{\mathbb{F}_{1^{\infty}}}\to\widetilde{\mathbb{P}}^{1}_{\mathbb{F}}\). As \(\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1^{\infty}}}\subseteq\mathsf{SCong}_{\mathbb{F}}\,\mathbb{P}^{1}_{\mathbb{F}_{1^{\infty}}}\), by restriction one has \[\Phi_{\mathbb{F}}:\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1^{\infty}}}\to\widetilde{\mathbb{P}}^{1}_{\mathbb{F}}.\] For \(\zeta\) in \(\mathbb{F}_{1^{\infty}}\), let \(\mathrm{ord}_{\mathbb{F}}(\zeta):=\min\{w\geq 1\mid\zeta^{w}\in\mathbb{F}\}\). Note that \(\Phi_{\mathbb{F}}\) is surjective and \[\Phi_{\mathbb{F}}^{-1}([n,\lambda])=\{[\zeta]\in\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1^{\infty}}}\mid\mathrm{ord}_{\mathbb{F}}(\zeta)=n\text{ and }\zeta^{n}=\lambda\}.\] Analogously, the inclusion \(\mathbb{F}_{1}\hookrightarrow\mathbb{F}\) induces a surjective continuous map \[\Psi_{\mathbb{F}}:\widetilde{\mathbb{P}}^{1}_{\mathbb{F}}\to\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1}}\] such that \(\Psi_{\mathbb{F}}^{-1}([n])=\{[m,\lambda]\in\widetilde{\mathbb{P}}^{1}_{\mathbb{F}}\mid\mathrm{ord}(\lambda)=n/m\}\). **Remark 3.6**.: The map \(\Phi_{\mathbb{F}_{1}}=\Psi_{\mathbb{F}_{1^{\infty}}}:\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1^{\infty}}}\to\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1}}\) satisfies \(\#\Phi_{\mathbb{F}_{1}}^{-1}([n])=\phi(n)\), where \(\phi\) is the Euler function. **Remark 3.7**.: Note that the map \(\Psi_{\mathbb{F}_{1^{2}}}:\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1^{2}}}\to\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1}}\) is a homeomorphism. Let \(G\) be the profinite group \(\mathrm{Gal}(\mathbb{Q}(\mu_{\infty})/\mathbb{Q})\). The map \(\varphi\mapsto\varphi|_{\mathbb{F}_{1^{\infty}}}\) is an isomorphism \(G\stackrel{\sim}{\to}\mathrm{Aut}_{\mathscr{M}_{0}}(\mathbb{F}_{1^{\infty}})\) (_cf._ [11, Ex. IV.2.6]). 
The group \(G\) acts on \(\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1^{\infty}}}\) by \[G\times\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1^{\infty}}}\longrightarrow\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1^{\infty}}}\] \[(g,Q)\longmapsto\left\{\begin{array}{ll}[g\cdot\lambda]&\text{if }Q=[\lambda]\text{ for some }\lambda\in\mathbb{F}_{1^{\infty}};\\ {}[\infty]&\text{if }Q=[\infty];\\ Q&\text{if }Q\text{ is the generic point.}\end{array}\right.\] Given a subgroup \(\Gamma\) of \(G\), we define the pointed subgroup of \(\mathbb{F}_{1^{\infty}}\) \[\mathbb{F}_{\Gamma}:=\{\lambda\in\mathbb{F}_{1^{\infty}}\mid g\cdot\lambda=\lambda\text{ for all }g\in\Gamma\}.\] **Theorem 3.8**.: _If \(\Gamma\) is a closed subgroup of \(\mathrm{Gal}(\mathbb{Q}(\mu_{\infty})/\mathbb{Q})\), then \(\Phi_{\mathbb{F}_{\Gamma}}\) induces a continuous bijection \(b_{\Gamma}:\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1^{\infty}}}/\Gamma\to\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{\Gamma}}\)._ Proof.: Note that \(\Phi_{\mathbb{F}_{\Gamma}}([\zeta])=\Phi_{\mathbb{F}_{\Gamma}}(g\cdot[\zeta])\) for all \(\zeta\in(\mathbb{F}_{1^{\infty}})^{\times}\) and \(g\in\Gamma\), thus there exists a continuous map \(b_{\Gamma}:\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1^{\infty}}}/\Gamma\to\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{\Gamma}}\) such that the corresponding diagram commutes. As \(\Phi_{\mathbb{F}_{\Gamma}}\) is surjective, it only remains to show that \(b_{\Gamma}\) is injective. Let \(\zeta,\xi\in(\mathbb{F}_{1^{\infty}})^{\times}\) such that \(\Phi_{\mathbb{F}_{\Gamma}}([\zeta])=\Phi_{\mathbb{F}_{\Gamma}}([\xi])\). Then \(n:=\operatorname{ord}_{\mathbb{F}_{\Gamma}}(\zeta)=\operatorname{ord}_{\mathbb{F}_{\Gamma}}(\xi)\) and \(\lambda:=\zeta^{n}=\xi^{n}\). Note that \(\mathbb{Q}(\mathbb{F}_{\Gamma},\zeta)=\mathbb{Q}(\mathbb{F}_{\Gamma},\xi)\) and there exists \(\varphi\in\operatorname{Gal}(\mathbb{Q}(\mathbb{F}_{\Gamma},\zeta)/\mathbb{Q}(\mathbb{F}_{\Gamma}))\) such that \(\varphi(\zeta)=\xi\). Thus there exists \(\psi\in\operatorname{Gal}(\mathbb{Q}(\mu_{\infty})/\mathbb{Q}(\mathbb{F}_{\Gamma}))\) such that \(\psi|_{\mathbb{Q}(\mathbb{F}_{\Gamma},\zeta)}=\varphi\). As \(\Gamma\) is closed, \(\Gamma=\operatorname{Gal}(\mathbb{Q}(\mu_{\infty})/\mathbb{Q}(\mathbb{F}_{\Gamma}))\). Therefore \(b_{\Gamma}\) is injective. **Remark 3.9**.: For \(G=\operatorname{Gal}(\mathbb{Q}(\mu_{\infty})/\mathbb{Q})\), the set of non-generic points of \(\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1^{\infty}}}/G\) has the cofinite topology, thus it is a \(T_{1}\) space. Note that \(\mathbb{F}_{G}=\mathbb{F}_{1^{2}}\) and the set of non-generic points of \(\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1^{2}}}\) is not a \(T_{1}\) space, because the point \([9,1]\) is in the closure of \([3,1]\). We conclude that the map \(b_{\Gamma}:\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1^{\infty}}}/\Gamma\to\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{\Gamma}}\) is not a homeomorphism in general. **Remark 3.10**.: Note that \(i_{\infty}:\mathbb{P}^{1,\operatorname{Smi}}(\mathbb{F}_{1^{\infty}})\to\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1^{\infty}}}\) is a morphism of \(G\)-sets. Thus, given a subgroup \(\Gamma\) of \(G\), one has an injective map \(p_{\Gamma}:\mathbb{P}^{1,\operatorname{Smi}}_{\Gamma}\to\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1^{\infty}}}/\Gamma\), whose image consists of all non-generic points of \(\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1^{\infty}}}/\Gamma\). 
If \(\Gamma\) is closed, by Theorem 3.8, the map \(j_{\Gamma}:=b_{\Gamma}\circ p_{\Gamma}:\mathbb{P}^{1,\operatorname{Smi}}_{ \Gamma}\to\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{\Gamma}}\) is an injection whose image consists of all non-generic points of \(\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{\Gamma}}\). ## 4 The compactification of \(\operatorname{Spec}\mathcal{O}_{K}\) Let \(K\) be a number field and \(\mathcal{O}_{K}\) its ring of integers. We use \(|-|_{0}\) to denote the trivial absolute value on \(K\) and \([0]\) for the trivial place. As \(\mathcal{O}_{K}\) is a Dedekind domain, every non-zero fractional ideal \(I\subseteq K\) has a unique decomposition \[I=\prod P^{e_{P}},\] where \(P\in(\operatorname{Spec}\mathcal{O}_{K})\backslash\{0\}\) and \(e_{P}\in\mathbb{Z}\), with \(e_{P}\neq 0\) for only finitely many primes \(P\). Given a constant \(c>1\), each \(P\in\operatorname{Spec}\mathcal{O}_{K}\) induces a non-archimedean absolute value \[|-|_{P}: K \longrightarrow \mathbb{R}_{\geq 0}\] \[a \longmapsto \left\{\begin{array}{ll}c^{-e_{P}}&\text{if}\;\;a\neq 0\;\; \text{and}\;\;a\mathcal{O}_{K}=\prod P^{e_{P}}\\ 0&\text{if}\;\;a=0.\end{array}\right.\] We use \([P]\) to denote the equivalence class of \(|-|_{P}\). Every non-trivial non-archimedean absolute value on \(K\) is equivalent to \(|-|_{P}\) for some prime \(P\). Given a (possibly archimedean) place \(\mathfrak{a}\), the set \(\mathcal{O}_{\mathfrak{a}}:=\left\{a\in K^{\bullet}\;\big{|}\;|a|\leq 1\right\}\) is a pointed monoid with maximal ideal \(m_{\mathfrak{a}}:=\left\{a\in K^{\bullet}\;\big{|}\;|a|<1\right\}\). The _residue field_ of \(\mathfrak{a}\) is the pointed group \(\kappa(\mathfrak{a}):=\mathcal{O}_{\mathfrak{a}}/m_{\mathfrak{a}}\). We define the _compactification of \(\operatorname{Spec}\mathcal{O}_{K}\)_ as the set \[\overline{\operatorname{Spec}\mathcal{O}_{K}}:=\{\text{places of }K\}\] endowed with the topology where \([0]\) is the generic point and \(\overline{\operatorname{Spec}\mathcal{O}_{K}}\backslash\{[0]\}\) has the cofinite topology. As \(K\) has only a finite number of archimedean places, the subset \[(\overline{\operatorname{Spec}\mathcal{O}_{K}})_{\operatorname{nA}}:=\{\text{ non-archimedean places of }K\}\] is open. Note that for any \(f\in K\), the subset \[U_{f}:=\{\mathfrak{a}\in\overline{\operatorname{Spec}\mathcal{O}_{K}}\mid f \notin m_{\mathfrak{a}}\}\] is open. We define the _structure sheaf_ of \(\overline{\operatorname{Spec}\mathcal{O}_{K}}\) by \[\mathcal{O}_{\overline{\operatorname{Spec}\mathcal{O}_{K}}}(U):=\{\lambda\in K ^{\bullet}\mid\lambda\in\mathcal{O}_{\mathfrak{a}}\text{ for all }\mathfrak{a}\in U\}\] for an open subset \(U\subseteq\overline{\operatorname{Spec}\mathcal{O}_{K}}\). Note that \(\overline{\operatorname{Spec}\mathcal{O}_{K}}\) is a monoidal space and the map \(P\mapsto[P]\) induces an isomorphism \((\operatorname{Spec}\mathcal{O}_{K})^{\bullet}\simeq(\overline{ \operatorname{Spec}\mathcal{O}_{K}})_{\operatorname{nA}}\). 
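For \(K=\mathbb{Q}\) these absolute values are easy to compute explicitly. The following small Python sketch (an illustration of ours, with an arbitrary choice of the constant \(c\); names are illustrative) evaluates \(|a|_{P}\) for \(P=(p)\) and tests membership in the pointed monoid \(\mathcal{O}_{\mathfrak{a}}\).

```python
from fractions import Fraction

def abs_p(a: Fraction, p: int, c: float = 2.0) -> float:
    """Non-archimedean absolute value |a|_P on Q for P = (p):
    |a|_P = c^(-e_P), where e_P is the exponent of p in a."""
    if a == 0:
        return 0.0
    e, num, den = 0, a.numerator, a.denominator
    while num % p == 0:
        num //= p; e += 1
    while den % p == 0:
        den //= p; e -= 1
    return c ** (-e)

def in_O(a: Fraction, absolute_value) -> bool:
    """Membership in O_a = {x in K : |x| <= 1}."""
    return absolute_value(a) <= 1

q = Fraction(9, 10)
print(abs_p(q, 3), abs_p(q, 5), abs_p(q, 7))   # 0.25, 2.0, 1.0
print(in_O(q, lambda x: abs_p(x, 5)))           # False: 5 divides the denominator
```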
## 5 Maps from \(\overline{\operatorname{Spec}\mathcal{O}_{K}}\) to \(\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1}}\) induced by elements of \(K\) There is a map \(K^{*}\to\operatorname{Hom}_{\operatorname{Sch}}(\operatorname{Spec}\mathcal{O}_{K},\mathbb{P}^{1}_{\mathbb{Z}})\) that sends a non-zero \(f\) to the morphism \(\varphi_{f}:\operatorname{Spec}\mathcal{O}_{K}\to\mathbb{P}^{1}_{\mathbb{Z}}\) induced by the pair \[\begin{array}{ccccccccc}\varphi_{1}:&\mathbb{Z}[X/Y]&\longrightarrow&\mathcal{O}_{K}[f]&\text{ and }&\varphi_{2}:&\mathbb{Z}[Y/X]&\longrightarrow&\mathcal{O}_{K}[1/f]\\ &X/Y&\longmapsto&f&&&Y/X&\longmapsto&1/f.\end{array}\] By Theorem 2.3, every \(f\in K^{*}\) defines a continuous map \[\widetilde{f}=\gamma\circ\varphi_{f}^{\bullet}:(\operatorname{Spec}\mathcal{O}_{K})^{\bullet}\to\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1}},\] which induces an injective morphism of pointed groups \(\widetilde{f}_{P}:\kappa(\widetilde{f}(P))\to\kappa(P)^{\bullet}\) for every \(P\in\operatorname{Spec}\mathcal{O}_{K}\). If \(P\in\operatorname{Spec}(\mathcal{O}_{K}[f])\) is non-zero, then \(\widetilde{f}(P)=\operatorname{congker}(\pi_{P}^{\bullet}\circ\varphi_{1}^{\bullet}\circ\iota)\), where \[\mathbb{F}_{1}[X/Y]\stackrel{\iota}{\longrightarrow}\mathbb{Z}[X/Y]^{\bullet}\xrightarrow{\varphi_{1}^{\bullet}}\mathcal{O}_{K}[f]^{\bullet}\xrightarrow{\pi_{P}^{\bullet}}(\mathcal{O}_{K}[f]/P)^{\bullet},\] _i.e._, in \(\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1}}\) one has \[\widetilde{f}(P)=\left\{\begin{array}{ll}[0]&\text{if }f\in P\\ {}[n]&\text{if }f\notin P\text{ and }n\text{ is the order of }\overline{f}\text{ in }(\mathcal{O}_{K}[f]/P)^{\times}.\end{array}\right.\] The map \(\widetilde{f}_{P}:\kappa(\widetilde{f}(P))\hookrightarrow\kappa(P)^{\bullet}\) is the inclusion \(\mathbb{F}_{1}\hookrightarrow\kappa(P)^{\bullet}\) if \(\widetilde{f}(P)=[0]\), and the inclusion \[\begin{array}{ccc}\mathbb{F}_{1}[T]/\langle T^{n}\sim 1\rangle&\hookrightarrow&\kappa(P)^{\bullet}\\ \overline{T}&\longmapsto&\overline{f}\end{array}\] if \(\widetilde{f}(P)=[n]\), where \(T:=X/Y\). Analogously, if \(P\) is a non-zero prime ideal of \(\mathcal{O}_{K}[1/f]\), then \[\widetilde{f}(P)=\left\{\begin{array}{ll}[\infty]&\text{if }1/f\in P\\ {}[\operatorname{ord}(\overline{1/f})]&\text{if }1/f\notin P.\end{array}\right.\] The map \(\widetilde{f}_{P}:\kappa(\widetilde{f}(P))\hookrightarrow\kappa(P)^{\bullet}\) is the inclusion \(\mathbb{F}_{1}\hookrightarrow\kappa(P)^{\bullet}\) if \(\widetilde{f}(P)=[\infty]\), and the inclusion \[\begin{array}{ccc}\mathbb{F}_{1}[T^{-1}]/\langle T^{-m}\sim 1\rangle&\hookrightarrow&\kappa(P)^{\bullet}\\ \overline{(T^{-1})}&\longmapsto&\overline{1/f}\end{array}\] if \(\widetilde{f}(P)\neq[\infty]\), where \(m\) is the order of \(\overline{1/f}\) in \((\mathcal{O}_{K}[1/f]/P)^{\times}\). If \(P\) is the generic point of \(\operatorname{Spec}\mathcal{O}_{K}\), then \(\widetilde{f}(P)\) is the generic point of \(\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1}}\) and \(\widetilde{f}_{P}\) is the map \[\begin{array}{rcl}\mathbb{F}_{1}[T^{\pm}]&\longrightarrow&K^{\bullet}\\ T&\longmapsto&f.\end{array}\] **Definition 5.1** ([11]).: An element \(\lambda\in K\) is called an _exceptional number_ if there exists an archimedean absolute value \(|-|\) such that \(|\lambda|=1\). 
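For instance, in \(K=\mathbb{Q}\) the exceptional numbers are exactly \(1\) and \(-1\), while in \(K=\mathbb{Q}(i)\) the unit \(i\) is exceptional, since \(|i|=1\) for the unique archimedean absolute value of \(\mathbb{Q}(i)\). Theorem 5.2 below excludes such elements.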
**Theorem 5.2**.: _If \(f\in K^{*}\) is not exceptional, then there exists a morphism of monoidal spaces \(\psi:\overline{\operatorname{Spec}\mathcal{O}_{K}}\to\mathbb{P}^{1}_{\mathbb{F}_{1}}\) that extends_ \[(\operatorname{Spec}\mathcal{O}_{K})^{\bullet}\stackrel{\varphi^{\bullet}_{f}}{\longrightarrow}(\mathbb{P}^{1}_{\mathbb{Z}})^{\bullet}\stackrel{p}{\longrightarrow}\mathbb{P}^{1}_{\mathbb{F}_{1}},\] _and a continuous map \(\breve{f}:\overline{\operatorname{Spec}\mathcal{O}_{K}}\to\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1}}\) that factorizes \(\psi\) and extends_ \[\widetilde{f}:(\operatorname{Spec}\mathcal{O}_{K})^{\bullet}\to\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1}},\] _as summarized in the following commutative diagram._ _The map \(\breve{f}\) induces an injective morphism \(\breve{f}_{\mathfrak{a}}:\kappa(\breve{f}(\mathfrak{a}))\hookrightarrow\kappa(\mathfrak{a})\) for each \(\mathfrak{a}\in\overline{\operatorname{Spec}\mathcal{O}_{K}}\), such that \(\breve{f}_{[P]}=\widetilde{f}_{P}\) for all \(P\in\operatorname{Spec}\mathcal{O}_{K}\)._ Proof.: If \(\mathfrak{a}\) is an archimedean place, as \(f\) is non-exceptional, \(f\in m_{\mathfrak{a}}\) or \(1/f\in m_{\mathfrak{a}}\). In this case, define \[\psi(\mathfrak{a}):=\left\{\begin{array}{ll}\langle X/Y\rangle\in D_{+}(Y)&\text{if }f\in m_{\mathfrak{a}}\\ \langle Y/X\rangle\in D_{+}(X)&\text{if }1/f\in m_{\mathfrak{a}}.\end{array}\right.\] For \(Q\in(\operatorname{Spec}\mathcal{O}_{K})^{\bullet}\), define \(\psi([Q]):=p\circ\varphi^{\bullet}_{f}(Q)\). Note that \(\psi:\overline{\operatorname{Spec}\mathcal{O}_{K}}\to\mathbb{P}^{1}_{\mathbb{F}_{1}}\) is a continuous map. Let \(T:=X/Y\) and define \[\begin{array}{rcl}f^{\#}:\ \ \mathbb{F}_{1}[T^{\pm}]&\longrightarrow&K^{\bullet}\\ T&\longmapsto&f.\end{array}\] For \(U\subseteq\mathbb{P}^{1}_{\mathbb{F}_{1}}\) open, note that \(f^{\#}\big(\Gamma(\mathcal{O}_{\mathbb{P}^{1}_{\mathbb{F}_{1}}},U)\big)\subseteq\Gamma\big(\mathcal{O}_{\overline{\operatorname{Spec}\mathcal{O}_{K}}},\psi^{-1}(U)\big)\) and define \[\begin{array}{rcl}\psi^{\#}(U):&\Gamma(\mathcal{O}_{\mathbb{P}^{1}_{\mathbb{F}_{1}}},U)&\longrightarrow&\Gamma\big(\mathcal{O}_{\overline{\operatorname{Spec}\mathcal{O}_{K}}},\psi^{-1}(U)\big)\\ &z&\longmapsto&f^{\#}(z).\end{array}\] Note that \((\psi,\psi^{\#})\) is a morphism of monoidal spaces \(\overline{\operatorname{Spec}\mathcal{O}_{K}}\to\mathbb{P}^{1}_{\mathbb{F}_{1}}\) that extends \(p\circ\varphi^{\bullet}_{f}:(\operatorname{Spec}\mathcal{O}_{K})^{\bullet}\to\mathbb{P}^{1}_{\mathbb{F}_{1}}\). Next we construct \(\breve{f}\). If \(\mathfrak{a}\) is an archimedean place, define \[\breve{f}(\mathfrak{a}):=\left\{\begin{array}{ll}[0]&\text{if }f\in m_{\mathfrak{a}}\\ {}[\infty]&\text{if }1/f\in m_{\mathfrak{a}}.\end{array}\right.\] For \(P\in\operatorname{Spec}\mathcal{O}_{K}\), define \(\breve{f}([P]):=\widetilde{f}(P)\). Note that \(\breve{f}:\overline{\operatorname{Spec}\mathcal{O}_{K}}\to\widetilde{\mathbb{P}}^{1}_{\mathbb{F}_{1}}\) is continuous. For \(\mathfrak{a}\in\overline{\operatorname{Spec}\mathcal{O}_{K}}\), we define the injective morphism \(\breve{f}_{\mathfrak{a}}:\kappa(\breve{f}(\mathfrak{a}))\hookrightarrow\kappa(\mathfrak{a})\) as follows: if \(\mathfrak{a}=[P]\) for some \(P\in\operatorname{Spec}\mathcal{O}_{K}\), define \(\breve{f}_{\mathfrak{a}}:=\widetilde{f}_{P}\). If \(\mathfrak{a}\) is archimedean, then \(\kappa(\breve{f}(\mathfrak{a}))\simeq\mathbb{F}_{1}\). 
In this case, define \(\breve{f}_{\mathfrak{a}}\) as the inclusion \(\mathbb{F}_{1}\hookrightarrow\kappa(\mathfrak{a})\). **Remark 5.3**.: In Theorem 5.2, if \(\breve{f}(\mathfrak{a})\in\operatorname{SCong}D_{+}(Y)\), the corresponding diagram commutes, and similarly if \(\breve{f}(\mathfrak{a})\in\operatorname{SCong}D_{+}(X)\). Thus the maps \(\breve{f}_{\mathfrak{a}}\) are induced by \(f^{\#}\).
2304.14659
MultiZenoTravel: a Tunable Benchmark for Multi-Objective Planning with Known Pareto Front
Multi-objective AI planning suffers from a lack of benchmarks exhibiting known Pareto Fronts. In this work, we propose a tunable benchmark generator, together with a dedicated solver that provably computes the true Pareto front of the resulting instances. First, we prove a proposition allowing us to characterize the optimal plans for a constrained version of the problem, and then show how to reduce the general problem to the constrained one. Second, we provide a constructive way to find all the Pareto-optimal plans and discuss the complexity of the algorithm. We provide an implementation that allows the solver to handle realistic instances in a reasonable time. Finally, as a practical demonstration, we used this solver to find all Pareto-optimal plans between the two largest airports in the world, considering the routes between the 50 largest airports, spherical distances between airports and a made-up risk.
Alexandre Quemy, Marc Schoenauer, Johann Dreo
2023-04-28T07:09:23Z
http://arxiv.org/abs/2304.14659v1
# MultiZenoTravel: a Tunable Benchmark for Multi-Objective Planning with Known Pareto Front ###### Abstract Multi-objective AI planning suffers from a lack of benchmarks exhibiting known Pareto Fronts. In this work, we propose a tunable benchmark generator, together with a dedicated solver that provably computes the true Pareto front of the resulting instances. First, we prove a proposition allowing us to characterize the optimal plans for a constrained version of the problem, and then show how to reduce the general problem to the constrained one. Second, we provide a constructive way to find all the Pareto-optimal plans and discuss the complexity of the algorithm. We provide an implementation that allows the solver to handle realistic instances in a reasonable time. Finally, as a practical demonstration, we used this solver to find all Pareto-optimal plans between the two largest airports in the world, considering the routes between the 50 largest airports, spherical distances between airports and a made-up risk. ## 1 Introduction The progress of algorithmics, the availability of more and more data and the dramatic increase of computational power drive a fast-paced evolution of the artificial intelligence (AI) field. As part of this change, the need to assess the performance of computational methods and to compare their merits is crucial. Many of the core fields of AI have set up standard benchmarks and competitions, in order to complement expert knowledge and analysis. In that regard, the automated planning community is at the forefront, with the well-known International Planning Competition (IPC) hosting a large benchmark suite and using a common definition language. A deterministic planning problem consists in selecting a sequence of actions (a plan) having an effect on a state, so that applying the plan on an initial state allows one to reach a goal (partial) state, while optimizing a value function of the plan. This function is generally the total duration to reach the goal (the _makespan_), each action having a duration. However, the value function may very well represent another aspect of the problem, such as the cost of the plan, the energy it requires or the uncertainty produced by the actions. In realistic problems, it is very often the case that several such _objective functions_ exist and are conflicting. For example, a short plan may be costly, while a cheap plan may take a long time. In such a setting, using a linear combination of those objective functions amounts to introducing a bias about the preferences of the operational user. However, preferences cannot always be modelled easily in practice, e.g., the user may decide based on political considerations. In addition, the knowledge of the feasible compromises between the objectives may, in fact, influence the decision maker simply because it gives additional information about the problem itself. For instance, if the two objectives are a cost and a risk, the decision maker might revise their risk appetite upon learning that increasing the risk by \(1\%\) beyond their initially acceptable threshold can lead to a \(50\%\) cost decrease. This calls for the use of multi-objective optimization, where the problem is actually modelled with several objective functions, and the output of the solver is a set of solutions that are not dominated by any other solution with respect to the objectives. 
The weak dominance of a \(d\)-dimensional point \(\mathbf{a}\) over a point \(\mathbf{b}\) is defined as \(\mathbf{a}\leq\mathbf{b}\iff\mathbf{a}^{i}\leq\mathbf{b}^{i}\ \forall i\in\{1,\dots,d\}\). A set of points \(P\subseteq X\) is then defined as Pareto-optimal if every point of \(X\) is weakly dominated by some point of \(P\), while no point of \(P\) is dominated by another point of \(P\): \(\forall\mathbf{x}\in X,\ \exists\mathbf{p}\in P,\ \mathbf{p}\leq\mathbf{x}\), and \(\nexists\,\mathbf{p},\mathbf{q}\in P,\ \mathbf{q}\neq\mathbf{p},\ \mathbf{q}\leq\mathbf{p}\). The output of solving such a problem is thus a set of solutions ordered as a "Pareto front". That is, for a problem with two objective functions, a set of points ordered along a monotone curve. Operationally, the user is still in charge of taking the final decision, but the complexity of their decision has been drastically reduced to a \(d-1\) dimensional problem. While real-world problems are often multi-objective in nature, few works actually consider them in the automated planning area. The well-known IPC does not propose benchmarks for such problems [20, 21] to this date. In fact, PDDL 3.0 explicitly offered hooks for several objectives [15], but the only competition tracks ever organized concerned aggregated objectives, and these tracks were canceled in 2011. Despite various works on benchmark generation [14, 13, 15] and extensions to other planning problems [22], no truly multi-objective planning problem has been proposed apart from our line of work. ### Previous Works We have previously proposed a problem instance generator for such multi-objective planning problems [16, 17, 18], extending the ZenoTravel [23] problem with an additional objective. Such problem sets are crucial for benchmarking optimization algorithms, especially when the optimum is known, as they allow for a rigorous comparison of solver performances. The original ZenoTravel problem [23] involves planes moving passengers between cities, while taking care of their fuel. Actions such as flying, boarding, deplaning or refueling take varying amounts of time to complete, and a plane cannot fly without fuel. The objective is to minimize the makespan, while honoring passengers' destinations. The proof-of-concept of the MultiZenoTravel problem is based on a simplified ZenoTravel model [16]. In this model, there are five connected cities (see Figure 1), planes may transport only one passenger at a time and there is only one flying speed. The main addition to the problem is that an additional objective is attached to all actions. This second objective is either a _cost_, which is additive (each plane has to pay the corresponding tax every time it lands in a city), or a _risk_ (for which the maximal value encountered during the complete execution of a plan is to be minimized). In this first instance, three passengers can be moved across the cities. The second version of the MultiZenoTravel problem [16] builds on the proof-of-concept and allows for 3, 6 and 9 passengers with a 3-to-2 passengers-to-planes ratio, which makes for very small instances, due to the combinatorial explosion of the solution space. In [18], we introduce an algorithm to compute the true Pareto fronts of very large instances in a reasonable time, and the first version of the ZenoSolver software is described (see Section 4 for further details). The article also provides a few typical instances that exhibit very different shapes of Pareto Fronts, for different levels of complexity. Unfortunately, this work suffers from two unrealistic assumptions. 
First, the distances are assumed to be symmetric around the central cities, that is to say \(\forall i,\ d_{i}=\bar{d}_{i}\). Second, the proof of the proposition which allows us to find Pareto-optimal plans relies on the following unrealistic assumption: \[\forall(i,j)\in[1,n]^{2},d_{i}+d_{j}<d_{ij}\] In other words, none of the instances generated under this assumption would respect the triangular inequality. Even if a benchmark does not necessarily have to be realistic by nature, this particular assumption drastically restricts the extrapolation of solver performances observed on the benchmark to real-life problems. ### Our Contribution In this work, we introduce a constructive way to find the Pareto-optimal solutions for the MultiZenoTravel problem. Our first contribution is to generalize from the original symmetric clique problem introduced in [10] to a non-symmetric clique version and to a version with no particular assumption on the graph. As a second contribution, we provide a way to characterize the Pareto-optimal plans for all versions of the problem, leading to a constructive algorithm to find the Pareto-optimal solutions for any instance. In particular, we would like to insist on the fact that this paper is not an extension of [10] to a more generic case. In fact, the aforementioned unrealistic assumption made in [10] contradicts the triangular inequality assumption made in this paper. In addition, even the technical implementation of the algorithm has been completely revised. The only intersection between the papers is contained in Section 3 and concerns how we define and count the potential and admissible Pareto Optimal Plans. In addition, we present the ZenoSolver, a C++ implementation of the algorithm to solve MultiZenoTravel. It can output the instance definition in PDDL such that the generated instance can easily be used by other solvers. We demonstrate how to generate instances with different behavior by tuning the input parameters. We provide a theoretical and empirical study of the performances of ZenoSolver. Although we think that the primary utility of ZenoSolver is to generate benchmarks with known Pareto Front to study other solvers' properties and behaviors, we provide a demonstration of the ZenoSolver on a problem using real data. This is made possible by the extension of the solver's capabilities from the symmetric clique problem to arbitrary graphs. ### Outline The plan of this paper is as follows: in Section 2, we define three versions of the MultiZenoTravel problem. Section 2.2 is dedicated to solving the symmetric MultiZenoTravel problem, followed by Section 2.3, which focuses on the non-symmetric version of the problem. In Section 2.4, we show how any instance of the general MultiZenoTravel problem can be reduced to a non-symmetric version via a polynomial-time reduction. Figure 1: A schematic view of a non-symmetric clique MultiZenoTravel problem. In Section 3, we detail how to construct Pareto optimal plans for the non-symmetric problem. In Section 4, we introduce the ZenoSolver, a C++ implementation of the algorithm described in the previous sections. Finally, in Section 5, we study an (almost) realistic application for MultiZenoTravel and ZenoSolver using the Openflight database to find the optimal routes between the two largest airports in the world. 
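As a minimal illustration of the dominance relation defined in the introduction (a sketch of ours, not part of ZenoSolver; the names are illustrative), the following Python snippet checks weak dominance between objective vectors and extracts the non-dominated subset of a list of bi-objective values such as (makespan, cost).

```python
from typing import List, Tuple

Point = Tuple[float, ...]

def weakly_dominates(a: Point, b: Point) -> bool:
    """a <= b componentwise (weak Pareto dominance, all objectives minimized)."""
    return all(ai <= bi for ai, bi in zip(a, b))

def pareto_front(points: List[Point]) -> List[Point]:
    """Keep the points that are not weakly dominated by any other distinct point."""
    front = []
    for p in points:
        dominated = any(weakly_dominates(q, p) and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# two objectives, e.g. (makespan, cost)
plans = [(10.0, 4.0), (12.0, 3.0), (11.0, 5.0), (9.0, 6.0)]
print(pareto_front(plans))   # [(10.0, 4.0), (12.0, 3.0), (9.0, 6.0)]
```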
## 2 MultiZenoTravel problems In this section, we introduce three versions of the MultiZenoTravel problem: the symmetric clique MultiZenoTravel, the non-symmetric clique MultiZenoTravel and the general MultiZenoTravel. First, we prove a proposition characterizing the non-optimal plans for the symmetric clique MultiZenoTravel problem, which therefore helps in building optimal plans. Then, we show that for the non-symmetric version, the proposition still holds. Finally, we show that we can reduce the general MultiZenoTravel to a clique version, and thus still build optimal plans. ### Instances Let us introduce some notations related to the planning problem briefly presented in the introduction: a non-symmetric clique MultiZenoTravel instance (Figure 1) is defined by the following elements: * \(n\) central cities, organized as a clique in which every node is connected to \(C_{I}\) and \(C_{G}\), respectively the initial city and the goal city. * \(c\in(\mathbb{R}^{+})^{n}\), where \(c_{i}\) is the cost for landing in \(C_{i}\). * \(D\in(\mathbb{R}^{+})^{n\times n}\), where \(d_{ij}\) is the flying time between \(C_{i}\) and \(C_{j}\). * \(d\in(\mathbb{R}^{+})^{n}\), where \(d_{i}\) is the flying time between \(C_{I}\) and \(C_{i}\). * \(\bar{d}\in(\mathbb{R}^{+})^{n}\), where \(\bar{d}_{i}\) is the flying time between \(C_{i}\) and \(C_{G}\). * \(p\) planes, initially in \(C_{I}\), each with a capacity of a single person. * \(t\) persons, initially in \(C_{I}\). The goal of MultiZenoTravel is to carry all \(t\) persons, initially in \(C_{I}\), to \(C_{G}\) using \(p\) planes, minimizing both the makespan and the cost of the plan. Without loss of generality, all pairs \((d_{i},c_{i})\) are assumed to be pairwise distinct. Otherwise, the 2 cities can be "merged" and the resulting \(n-1\) cities problem is equivalent to the original \(n\) cities problem, as there are no city capacity constraints. Finally, we only consider cases where \(t\geq p\), as the problem is otherwise trivial. An instance of the symmetric clique MultiZenoTravel problem is an instance such that \(d=\bar{d}\). A general MultiZenoTravel instance is an instance such that the \(n\) central cities are organized as an arbitrary graph. \[\begin{array}{ccccccc}p_{1}:&C_{I}&\stackrel{t_{1}}{\rightarrow}C_{4}&\stackrel{t_{1}}{\rightarrow}C_{G}&\to C_{2}&\stackrel{t_{2}}{\rightarrow}C_{G}&\\ p_{2}:&C_{I}&\stackrel{t_{2}}{\rightarrow}C_{2}&\to C_{I}&\stackrel{t_{3}}{\rightarrow}C_{3}&\stackrel{t_{3}}{\rightarrow}C_{4}&\stackrel{t_{3}}{\rightarrow}C_{G}\end{array}\] Figure 2: Example of an admissible plan to transport 3 travelers via 2 planes. This representation indicates the successive actions for each plane. We denote by \(\stackrel{t_{i}}{\rightarrow}\) a flight carrying traveler \(t_{i}\). Figure 2 illustrates an admissible solution. Note that the makespan for a plan is not necessarily the largest sum of the flights' durations, as some planes might have to wait for others. This could be the case for \(p_{1}\) waiting for \(t_{2}\) in \(C_{2}\). ### Symmetric MultiZenoTravel The method to find the Pareto Front of any general MultiZenoTravel instance consists of three steps. First, we provide an efficient algorithm to find the Pareto Front for any symmetric clique MultiZenoTravel instance. Then, we show that there exists one particular case in which the algorithm does not work for the non-symmetric version of the problem. However, we provide a way to easily transform the instance such that a slightly modified version of the algorithm can find the Pareto Front. 
Last, we provide an algorithm to transform any instance of the general MultiZenoTravel problem into a non-symmetric clique version. The following proposition is the cornerstone of the method to identify and construct the Pareto Set of any instance: **Proposition I**: Pareto-optimal plans are plans where exactly \(2t-p\) (possibly identical) central cities are used by a plane. In particular, Proposition I will be proven only for the symmetric case, as we will show that there is one corner case in the proof for the non-symmetric version. We will overcome this corner case by a slight modification of the solver's main algorithm. For the rest of this paper, we make the following triangular inequality assumption: **Assumption**: \(\forall(i,j,k)\in([1,n]\cup\{I,G\})^{3},\ d_{ij}+d_{jk}\geq d_{ik}\) (A\(\Delta\)) The goal of this section is to establish the proof of Proposition I for the symmetric case. For that purpose, we will first determine a couple of properties, mostly deduced from (A\(\Delta\)), to restrict the movements of the planes in Pareto Optimal plans to four different patterns. **Property I**: 1. A plane flies from \(C_{I}\) with a passenger and to \(C_{I}\) empty. 2. A plane flies from \(C_{G}\) empty and to \(C_{G}\) with a passenger. 3. A plane does not fly twice in a row between central cities, whether carrying a passenger or not. **Proof**: Straightforward consequences of (A\(\Delta\)). \(\Box\) **Corollary I**: All the planes finish their respective sequences in \(C_{G}\). **Proof**: If a plane finishes in \(C_{I}\), it cannot arrive full from a central city due to Property I.1, nor empty, because the last move would be useless. If the plane finishes in \(C_{i}\) and came empty, it cannot come from \(C_{I}\) due to Property I.1. It cannot come from another central city \(C_{j}\) or from \(C_{G}\), because the movement would be useless. If the plane finishes in \(C_{i}\) and came full, another plane will have to carry the passenger to \(C_{G}\), and thus it would be at least as fast to go directly to \(C_{G}\) with the initial plane. \(\Box\) From those observations, we deduce the only four possible patterns that a plane can perform, denoted by \(A\), \(\bar{A}\), \(B\), \(\bar{B}\). More precisely, if a plane performs a pattern other than these, the plan is dominated by the plan corrected so as to respect Property I, because the corrected plan uses fewer cities (the makespan might be the same, but it is never longer, as ensured by the triangular inequality). We denote by \(|X|\) the number of occurrences of pattern \(X\in\{A,\bar{A},B,\bar{B}\}\) in a given plan, and call _multiplicity_ of a specific pattern execution the number \(\theta\) of central cities it visits beyond the first one (a pattern with null multiplicity goes through exactly one central city). Depending on the pattern, the number of cities involved is either even or odd, so as to respect Property I. Using Property I and the triangular inequality, we deduce the following property on the cardinality of each pattern in potential Pareto plans: **Property II**: If a plan does not respect the following constraints, it is dominated: 1. \(|A|+|B|=t\) 2. \(|A|+|\bar{B}|=t\) 3. \(|B|=|\bar{B}|\) 4. \(|A|=|\bar{A}|+p\) **Proof**: The patterns \(A\) and \(B\) are the only ones that take a passenger out of \(C_{I}\), so a feasible plan contains at least \(t\) of those patterns. Once all the passengers are out, (A\(\Delta\)) ensures that there is no reason to come back to \(C_{I}\). Apply the same reasoning to \(C_{G}\) with \(A\) and \(\bar{B}\) to prove the second point. The third point is a simple subtraction. 
To go from \(C_{I}\) to \(C_{G}\), a plane needs to perform a pattern \(A\), and as all the planes finish in \(C_{G}\), there are at least \(p\) patterns \(A\). Each pattern \(\bar{A}\) must be compensated by an additional pattern \(A\), thus proving the fourth point. \(\Box\) **Corollary II**: If a plan does not perform exactly \(2t-p\) patterns, it is dominated. **Proof**: Straightforward consequence of Property II. \(\Box\) Using (A\(\Delta\)), given any plan in which a passenger crosses more than one city using two planes, it is easy to find a reorganization of the plan that uses only one city and dominates the previous one. However, if a passenger lands in more than one city using at least three planes (one flying full between two central cities), it is not clear whether such a reorganization is possible. This case is illustrated in Figure 4. Such a sequence always starts with a pattern \(A\) or \(B\) and ends with a pattern \(\bar{A}\) or \(\bar{B}\). As the method to reorganize a plan is independent of the original number of cities a passenger goes through, without loss of generality, we will consider the case where a passenger goes through two cities using three planes. Figure 3: At the top, patterns \(A\) and \(\bar{A}\); at the bottom, \(B\) and \(\bar{B}\). The dots above the arrows indicate a flight with a passenger. Fixing \(|A|\) fully determines the cardinality of all patterns, and as \(|A|\in[p,t]\) we can parameterize the pattern distribution by a single integer \(k\in[0,t-p]\). For a given \(k\), \(\Psi(k)\) denotes the set of elements indicating, for each pattern, the list of cities it goes through. We characterize such a partition w.r.t. a given \(k\) by \[\Psi(k)=\{\psi(k)\}\quad\text{s.t.}\quad\psi(k)\text{ is of the form}\] \[\psi(k)=\left\{\begin{array}{lcl}a&:=(a_{1},\ \...,\ \ a_{p+k})\\ \bar{a}&:=(\bar{a}_{1},\ \...,\ \ \bar{a}_{k})\\ b&:=(b_{1},\ \...,\ \ b_{t-p-k})\\ \bar{b}&:=(\bar{b}_{1},\ \...,\ \ \ \bar{b}_{t-p-k})\end{array}\right.\] such that any element \(e\) of any of the four tuples describes a pattern execution, i.e. \(e\in\{1,...,n\}^{|e|}\) with \(|e|\) the number of cities involved in the pattern. For the sake of readability, we denote \(\psi(k)\) by \((k,\psi)\) where, implicitly, \(\psi\in\Psi(k)\). For each couple \((k,\psi)\) we denote by \(\mathcal{P}(k,\psi)\) the set of all feasible plans respecting the induced conditions. For a given instance of MultiZenoTravel, it is easy to see that \(\bigcup\limits_{(k,\psi)}\mathcal{P}(k,\psi)\) is a partition of the set of feasible plans respecting Property II. In other words, for any feasible plan \(p\), there exists an element \((k,\psi)\) such that \(p\in\mathcal{P}(k,\psi)\). For any \(k\), \(\Psi_{0}(k)\) denotes the subset of \(\Psi(k)\) such that each pattern has a null multiplicity. The elements of the union of \(\mathcal{P}(k,\psi_{0})\) for any \(k\) and \(\psi_{0}\in\Psi_{0}(k)\) are the feasible plans using only \(2t-p\) cities. Notice that \(\mathcal{P}(k,\psi_{0})\) may be empty, for instance if a city \(b_{i}\) in \(b\) is not present in \(\bar{b}\). In such a case, it is impossible to create a feasible plan respecting the induced constraints. The idea of the proof is to show how we can transform any \((k,\psi)\) into a \((k,\psi_{0})\) such that there exists \(p\in\mathcal{P}(k,\psi_{0})\) such that for any \(p^{\prime}\in\mathcal{P}(k,\psi)\), \(p\succeq p^{\prime}\). The idea is to arbitrarily choose one city for each pattern so that each pattern has a null multiplicity. 
The only problem is with the patterns \(B\) and \(\bar{B}\) that may not have a joint city, i.e. one plane will carry a passenger to a city and no other plane will come to bring it to \(C_{G}\). Then, we show it is always possible to _repair_ such a plan, such that the new plan has a lower cost and makespan by construction and uses only \(2t-p\) cities.

**Assumption**: (Symmetry) \(\forall i\in[1,n],\ d_{i}=\bar{d}_{i}\) (S)

Figure 4: The only case where a plan rearrangement is not trivial. One passenger travels through two cities \(c_{i_{1}}\) and \(c_{i_{2}}\) using three planes, respectively \(p_{1}\), \(p_{2}\) and \(p_{3}\). The path taken by the passenger is in bold. By Property II, \(p_{1}\) transports the passenger by performing a pattern \(X_{1}\in\{A,B\}\) and \(p_{3}\) a pattern \(X_{3}\in\{\bar{A},\bar{B}\}\). We do not assume the pattern performed by \(p_{2}\).

**Definition**: (Pattern Reduction) For any pattern with a multiplicity \(\theta>0\) that goes through the cities with indices \((i_{1},...,i_{K})\) (with the first and the last city being \(I\) or \(G\) depending on the considered pattern), the pattern reduction operation consists in selecting a single city among \((i_{2},...,i_{K-1})\) to create a new _reduced_ pattern of the same type but with multiplicity \(0\). The reduced pattern obviously has a lower cost, as it requires fewer landings, and its duration is lower due to Assumption (A\(\Delta\)).

**Definition**: (\((B\bar{B})\)-pairing) Consider a pattern \(B\) and a pattern \(\bar{B}\) in a list of patterns, both of multiplicity \(0\) and using respectively the central cities \(C_{i}\) and \(C_{j}\). Pairing them consists in using the central city \(C^{*}=\text{argmin }(d_{i}+\bar{d_{i}},d_{j}+\bar{d_{j}})\). In the symmetric case, the condition becomes \(C^{*}=\text{argmin }(d_{i},d_{j})\). Under Assumption (S), the duration of each pattern involved in the pairing is lower than or equal to its duration before the pairing.

Let us consider a plan \(p_{1}\) such that there is at least one pattern (of any type) with a multiplicity higher than \(0\). We can always apply the following transformation on its list of patterns:

1. for each pattern with a non-zero multiplicity, do a pattern reduction;
2. for each couple of patterns \(B\) and \(\bar{B}\), do a \((B\bar{B})\)-pairing.

It is always possible to pair the patterns since we consider only plans respecting Property II and in particular the following pattern constraint: \(|B|=|\bar{B}|\). Those steps transform any \((k,\psi)\) into a \((k,\psi_{0})\). Indeed, the cardinality of each type of pattern defined by \(k\) is conserved, but each single pattern now has a multiplicity of zero. On top of that, each city that appears in \(\psi_{0}\) also appears in \(\psi\), which implies a lower cost for all feasible plans on \((k,\psi_{0})\). The set \(\mathcal{P}(k,\psi_{0})\) is not empty since we constructed a valid plan. The duration of each pattern remains the same or is lower than in the original plan due to the reduction. The \((B\bar{B})\)-pairing also leaves the duration unchanged for one of the patterns and lowers it for the second one involved in the couple, due to the symmetry assumption (S). As a result, the transformed plan dominates the original one while using only \(2t-p\) cities, thus proving Proposition I under Assumption (A\(\Delta\)) and Assumption (S).

In conclusion, we proved that for the Symmetric MultiZenoTravel problem, Pareto-optimal plans are plans using exactly \(2t-p\) central cities.
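To make the two steps of this transformation concrete, the following sketch (an illustration of ours, not the code of the solver) represents each pattern as a plain list of central-city indices and applies a pattern reduction followed by a \((B\bar{B})\)-pairing in the symmetric case; the helper names, the choice of the reduced city and the way \(B\) and \(\bar{B}\) patterns are matched (here simply zipped) are all illustrative assumptions.

```python
# Illustrative sketch: turn a (k, psi) into a (k, psi_0) in the symmetric case.
# Patterns are lists of central-city indices; d[i] is the symmetric distance
# between C_i and either C_I or C_G.

def reduce_pattern(cities, d):
    """Pattern reduction: keep a single central city (one possible choice)."""
    return [min(cities, key=lambda i: d[i])]

def pair_b_patterns(b_patterns, bbar_patterns, d):
    """(B, Bbar)-pairing, symmetric case: both patterns of a couple use the
    city with the smaller d among the two candidates."""
    paired = []
    for b, bbar in zip(b_patterns, bbar_patterns):
        c_star = min(b[0], bbar[0], key=lambda i: d[i])
        paired.append(([c_star], [c_star]))
    return paired

def transform(a, abar, b, bbar, d):
    """Reduce every pattern, then pair the B / Bbar couples."""
    a0 = [reduce_pattern(p, d) for p in a]
    abar0 = [reduce_pattern(p, d) for p in abar]
    b0 = [reduce_pattern(p, d) for p in b]
    bbar0 = [reduce_pattern(p, d) for p in bbar]
    couples = pair_b_patterns(b0, bbar0, d)
    b0 = [c[0] for c in couples]
    bbar0 = [c[1] for c in couples]
    # The transformed structure uses exactly 2t - p central cities (with repetitions):
    t, p = len(a) + len(b), len(a) - len(abar)
    assert len(a0) + len(abar0) + len(b0) + len(bbar0) == 2 * t - p
    return a0, abar0, b0, bbar0
```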
This result will allow a constructive algorithm to drastically reduce the search space of feasible and Pareto-optimal plans.

### Non-Symmetric Clique MultiZenoTravel

In this section, we relax Assumption (S). As a result, the previous method does not work in the particular case where a \((B\bar{B})\)-pairing is not possible. More precisely, this happens when there is no choice of cities to perform a \((B\bar{B})\)-pairing because, for two cities \(C_{i}\) and \(C_{j}\) (resp. for \(B\) and \(\bar{B}\)), we have \(d_{i}<d_{j}\) and \(\bar{d}_{j}<\bar{d}_{i}\). In such a case, the \((B\bar{B})\)-pairing on \(C_{i}\) (resp. \(C_{j}\)) would increase the duration of the pattern \(\bar{B}\) (resp. \(B\)) by \(2(\bar{d}_{i}-\bar{d}_{j})\) (resp. \(2(d_{j}-d_{i})\)). As a result, without any other change, the transformed plan has a larger total flight duration for at least one plane, which may result in a larger makespan. A reorganization of the plan is not trivial; instead, we will further characterize such situations and propose in Section 3.3 a transformation to get rid of them.

We will show that for a plan not to be dominated, the three patterns executed by \(p_{1}\), \(p_{2}\) and \(p_{3}\) as illustrated by Figure 4 must be \(B\), \(\bar{A}\) and \(\bar{B}\). We call this a \(B\bar{A}\bar{B}\) _situation_ and, in particular, \(\bar{A}\) has a multiplicity exactly equal to 1.

For each plan, denote by \(\mathcal{O}\) (for _out_) and \(\mathcal{I}\) (for _in_) the sets of patterns used, respectively, to take a passenger from \(C_{I}\) and to bring a passenger to \(C_{G}\). The set \(\mathcal{O}\) contains all patterns \(A\) and \(B\), while \(\mathcal{I}\) contains all patterns \(A\) but also \(\bar{B}\). As proven previously, in a non-dominated plan, a passenger needs to travel through exactly one pattern from \(\mathcal{O}\) and one pattern from \(\mathcal{I}\). Therefore, for each particular passenger, there is the choice between the following couples of patterns:

\[\begin{array}{ccc}C1&=&\text{a single $A$ with $\theta=0$}\\ C2&=&(A,A)\\ C3&=&(A,\bar{B})\\ C4&=&(B,A)\\ C5&=&(B,\bar{B})\end{array}\]

The multiplicity for each pattern in each couple is defined by

\[\begin{array}{ccc}M1&=&(0)\\ M2&=&(\theta>0,\theta>0)\\ M3&=&(\theta>0,\theta\geq 0)\\ M4&=&(\theta\geq 0,\theta>0)\\ M5&=&(\theta\geq 0,\theta\geq 0)\end{array}\]

Therefore, for each couple of patterns, we can calculate the number of passengers transported from \(C_{I}\) or to \(C_{G}\), on top of the passengers defined by the couple itself:

\[\begin{array}{ccc}T1&=(0,0)\\ T2&=(1,1)\\ T3&=(0,1)\\ T4&=(1,0)\\ T5&=(0,0)\end{array}\]

For instance, \(C3\) implies that another passenger moved from a central city to the destination. Therefore, it implies that we need to select another couple such that the pattern in \(\mathcal{I}\) is \(A\), i.e. \(C2\) or \(C4\). We can deduce that:

\[\begin{array}{ccc}C3&\implies&C2\lor C4\\ C4&\implies&C2\lor C3\\ C2&\implies&(C2\lor C4)\land(C2\lor C3)\end{array}\]

As there is a finite number of patterns to pick in a feasible and optimal plan, it is impossible to select \(C2\), \(C3\) or \(C4\). In other words, we can use only \(C1\) and \(C5\). The only pattern in \(C1\) has a null multiplicity by definition, such that, if a passenger travels through two central cities, it implies that not only does she do it through \(B\) and \(\bar{B}\), but also that there exists a central pattern with multiplicity greater than zero. This pattern cannot be \(A\). Assume such a case, and that the \((B\bar{B})\)-pairing is not possible.
If the central pattern is \(B\), then another passenger has been moved from \(C_{I}\). Therefore, there is another \(B\) somewhere in the plan. In total, the situation implies four patterns \(B_{1}\), \(\bar{B}_{1}\), \(B_{2}\) and \(\bar{B}_{2}\), and we assumed \(B_{1}\) and \(\bar{B}_{1}\) could not be paired with any other pattern. However, by construction, \(B_{2}\) is pairable with \(\bar{B}_{1}\). Therefore, the central pattern cannot be \(B\). By a symmetric reasoning, we conclude that the central pattern cannot be \(\bar{B}\) and must be \(\bar{A}\) with a multiplicity greater than 0, as illustrated by Figure 7.

In conclusion, we proved that the proof of the previous section holds except when a \((B\bar{B})\)-pairing is not possible. In this case, we proved that non-dominated plans must use a conjunction of three patterns \(B\bar{A}\bar{B}\) altogether, with \(\bar{A}\) having a multiplicity exactly equal to 1. Therefore, a constructive algorithm can still focus on plans with \(2t-p\) central cities, with additional care for the particular situation where a \(B\bar{B}\)-pairing is not possible.

### General MultiZenoTravel

We now consider a connected weighted graph \(U=(V,E)\) such that \(|V|=n+2\) and two arbitrary vertices named \(I\) and \(G\) (with weight 0), respectively for the initial and the goal cities. A MultiZenoTravel instance \(\Pi\) is defined by the triplet \((U,I,G)\). We denote by \(\Lambda\) the set of paths over \(U\), and for any path \(p\in\Lambda\), \(|p|\) is the number of cities in the path. We define the functions \(\phi\) (resp. \(\omega\)) as follows:

\[\forall p\in\Lambda,\ \phi(p)=\sum_{1\leq i\leq|p|-1}d_{p_{i},p_{i+1}} \tag{1}\]
\[\forall p\in\Lambda,\ \omega(p)=\sum_{2\leq i\leq|p|}c_{p_{i}} \tag{2}\]

The function \(\phi\) provides the duration, while \(\omega\) provides the landing cost of the path. Notice that the first city in a path does not appear in \(\omega\) because, in the initial state, the planes are already in the initial city. The following algorithm \(f\) allows transforming the general MultiZenoTravel transport problem defined by \((U,I,G)\) into the original non-symmetric clique problem.

Figure 7: Illustration of a \(B\bar{A}\bar{B}\) situation. The passenger is carried to and from a central city by a \(B\) or \(\bar{B}\) (dashed) and is transported by an \(\bar{A}\) between central cities (bold).

* For each vertex \(i\), find all the paths from \(I\to i\) (resp. from \(i\to G\)) and denote this set \(\Lambda_{I\to i}\) (resp. \(\Lambda_{i\to G}\)).
* Construct a new graph \(\bar{U}=(\bar{V},\bar{E})\) such that, for each \((w_{i},e_{i})\in\Lambda_{I\to i}\times\Lambda_{i\to G}\), we create a vertex of weight \(\omega(e_{i})+\omega(w_{i})\)1 and an edge \(I\to i\) (resp. \(i\to G\)) of weight \(\phi(w_{i})\) (resp. \(\phi(e_{i})\)). Footnote 1: The cost of landing in city \(C_{i}\) is counted only once, in the west path.
* For all couples of cities \((i,j)\in\bar{V}^{2}\), assign \(d_{i,j}=+\infty\).

**Proposition:** A solution to the transformed problem is a solution to the generic problem.

**Proof:** Let \(\Pi\) be an instance of the generic problem and \(\Pi^{*}\) the clique instance obtained by the reduction function \(f\), i.e. \(\Pi^{*}=f(\Pi)\). We need to show that \(p\in P_{s}(\Pi)\Leftrightarrow p^{*}=f(p)\in P_{s}(\Pi^{*})\). The function \(f\) is surjective: \(\forall p^{*}\in\mathcal{P}(\Pi^{*})\), \(f^{-1}(p^{*})\) is the (unique) plan such that we expand every vertex by the associated path in \(\Lambda_{I\to i}\times\Lambda_{i\to G}\).
For a given \(p\in\mathcal{P}(\Pi)\), there exist as many \(p^{*}\in\mathcal{P}(\Pi^{*})\) as there are ways of splitting a sequence of cities into two. The application \(f^{-1}\) is obviously injective. By construction, \(\forall p\in\mathcal{P}(\Pi),\forall p_{1}^{*},p_{2}^{*}\in(\mathrm{Im}_{f}(p))^{2},\ M(p_{1}^{*})=M(p_{2}^{*})\) and \(C(p_{1}^{*})=C(p_{2}^{*})\). Furthermore, \(\forall p\in\mathcal{P}(\Pi),\forall p^{*}\in\mathrm{Im}_{f}(p),\ M(p)=M(p^{*})\) and \(C(p)=C(p^{*})\) by construction of \(f\). Therefore, \(f^{-1}\) defines an equivalence relation whose classes are uniquely identified by a \(p\in\mathcal{P}(\Pi)\)2. As a result, \(f\) defines a bijection from \(\mathcal{P}(\Pi)\) to \(\mathcal{P}(\Pi^{*})/f^{-1}\). As for all \(p^{*}\in\mathcal{P}(\Pi^{*})/f^{-1}\) the objective vectors of \(p^{*}\) and \(f^{-1}(p^{*})\) are the same, \(p\in P_{s}(\Pi)\implies p^{*}=f(p)\in P_{s}(\Pi^{*})\). \(\square\)

Footnote 2: The classes are identified by \(p\) and not by \(M\) and \(C\), since there might exist two plans in \(\mathcal{P}(\Pi^{*})\) with the same objective vector but a different image by \(f^{-1}\).

By construction, for any generic instance \(\Pi\), the reduced instance \(\Pi^{*}\) satisfies Assumption (A\(\Delta\)), such that the method described in Section 3.2 to identify the Pareto Front applies directly to any instance.

**On the complexity:** In general, the number of cities in \(\Pi^{*}\) is not polynomial as a function of \(n\), the number of cities in \(\Pi\), and thus solving \(\Pi\) through \(\Pi^{*}\) might be challenging w.r.t. the initial complexity of the problem and our algorithm. On top of that, the number of paths between two vertices can be up to super-exponential, as illustrated in Figure 8, and computing the cardinality of the set of paths is already a \(\sharp P\)-complete problem [20].

We can notice that, in our case, every sub-path of a path in a Pareto-optimal plan is a non-dominated path itself. Consider a plan \(p\in\mathcal{P}(\Pi)\) such that there exist two cities \(i\) and \(j\) such that the path between those two cities is dominated by another path. It is then obvious that the plan \(p^{\prime}\), based on \(p\) but using the non-dominated path, is at least as good as \(p\), because there cannot be any \(B\bar{A}\bar{B}\) situation by construction. Therefore, in non-dominated plans, all paths are non-dominated, and every sub-path of a non-dominated path is non-dominated. As a result, it is unnecessary to turn the dominated paths from \(\Pi\) into cities of \(\Pi^{*}\)3.

Footnote 3: It is possible to lose some elements of the Pareto Set, i.e. in the decision space, but the Pareto Frontier will be entirely found.

However, even in this case, the number of central cities in \(\Pi^{*}\) is a function of the cardinality of the Pareto Set of non-dominated paths for each couple of cities in \(\Pi\). In [14], Hansen proposed some pathological instances of bicriterion graphs such that the set of non-dominated paths between two extreme nodes is exponential in the number of nodes \(n\). We slightly modified the instance to fit our problem, as illustrated by Figure 9. In this instance, all the paths between \(C_{I}\) and \(C_{G}\) are non-dominated and there are exactly \(2^{\frac{n-1}{2}}\) paths. For computational purposes, we show that if an instance respects an extended version of the triangular inequality, then the number of non-dominated paths is bounded by a polynomial in \(n\).
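The pruning argument above can be illustrated with a short sketch: enumerate the simple paths between two vertices of a small weighted graph, evaluate each path with \(\phi\) (duration) and \(\omega\) (landing cost), and keep only the non-dominated ones. The graph encoding and the helper names below are our own assumptions, not the actual implementation.

```python
# Illustrative sketch: keep only non-dominated (duration, cost) paths between two nodes.
# 'edges' maps a node to a list of (neighbour, flight duration); 'cost' gives landing costs.

def simple_paths(edges, src, dst, path=None):
    path = [src] if path is None else path
    if src == dst:
        yield list(path)
        return
    for nxt, _ in edges.get(src, []):
        if nxt not in path:                       # simple paths only
            yield from simple_paths(edges, nxt, dst, path + [nxt])

def phi_omega(path, edges, cost):
    dur = sum(next(d for n, d in edges[u] if n == v) for u, v in zip(path, path[1:]))
    land = sum(cost[c] for c in path[1:])         # the first city is not counted
    return dur, land

def non_dominated(paths, edges, cost):
    scored = [(p, phi_omega(p, edges, cost)) for p in paths]
    keep = []
    for p, (dur, land) in scored:
        dominated = any(d2 <= dur and c2 <= land and (d2, c2) != (dur, land)
                        for _, (d2, c2) in scored)
        if not dominated:
            keep.append(p)
    return keep
```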
The extended triangular inequality assumption holds, for instance, for planar straight-line graphs where nodes are associated to points in a Euclidean plane.

**Proposition IV:** If for a graph \(U\) the number of non-dominated paths is bounded by a polynomial in \(n\), then the reduction is polynomial in \(n\) and the number of cities in \(\Pi^{*}\) is polynomial in \(n\).

**Proof:** Since any sub-path of a non-dominated path must be non-dominated, and the number of non-dominated paths is bounded by a polynomial in \(n\), finding the set of non-dominated paths can be done in \(n^{k}\) for a certain \(k\). For each non-dominated path from \(C_{I}\) to \(C_{G}\), the reduction consists in splitting the path at each of its cities to create a new city. Let us denote by \(D\) the number of non-dominated paths from \(C_{I}\) to \(C_{G}\) and by \(L\) the maximal number of central cities of a non-dominated path from \(C_{I}\) to \(C_{G}\). For any non-dominated path from \(C_{I}\) to \(C_{G}\), there are at most \(L\) new cities to create. Therefore, the total complexity is bounded by \(n^{k}DL\). \(\square\)

In conclusion, we now have a method to reduce any general MultiZenoTravel instance to a non-symmetric clique MultiZenoTravel instance. Therefore, any solver for the clique MultiZenoTravel problem can solve the general MultiZenoTravel problem after transformation. We leave for future work the classification of graphs for which the assumption of Proposition IV holds.

Figure 8: An example of a graph with a super-exponential number of s-t paths depending on the number of nodes. The graph is made of \(\sqrt{n}\) layers of \(\sqrt{n}\) nodes. Each unlabelled edge has a unit weight. The number of \(s\)-\(t\) paths is then \((\sqrt{n})^{\sqrt{n}}\).

Figure 9: A modified instance of the Hansen graph for which the set of non-dominated paths between \(C_{I}\) and \(C_{G}\) is exponential in \(n\).

## 3 Pareto Optimal Plans

We now focus on finding the Pareto-optimal plans from any list of \(2t-p\) cities, that is to say from the elements \((k,\psi_{0})\), for \(k\in[0,t-p]\) and \(\psi_{0}\in\Psi_{0}(k)\), for both the symmetric and the **non-symmetric** case; for the latter, we assume for now that there are no \(B\bar{A}\bar{B}\) situations.

### Definitions

**PPPs and Admissible PPPs**: A Possibly Pareto-optimal Plan (PPP) is defined by 3 tuples, namely \(a\in\{1,...,n\}^{k+p}\) for cities involved in a pattern \(A\), \(\bar{a}\in\{1,...,n\}^{k}\) for cities involved in a pattern \(\bar{A}\), and \(b\in\{1,...,n\}^{t-p-k}\) for the cities involved in \(B\) and \(\bar{B}\). Nevertheless, \(a\), \(\bar{a}\) and \(b\) do not hold any information about which plane will land in a particular city. This is the reason why there exist many feasible schedules, i.e., schedules that actually are feasible plans for \(p\) planes4 using the corresponding \(4t-2p\) edges. There are at most \(n^{(2t-p)}\) possible PPPs, but it is clear that the set of PPPs contains many redundancies that can easily be removed by ordering the indices.

Footnote 4: Most of them are probably not Pareto-optimal but, using Proposition I as a hypothesis, any schedule resulting from a larger tuple \(a\), \(\bar{a}\) or \(b\) would be Pareto-dominated.

**Definition**: (Admissible PPP) An _admissible PPP_ is an element of \(A\times\bar{A}\times B\), where \(A=\{a\in[1,n]^{k+p};\forall i\in[1,k+p-1],d_{a_{i}}\geq d_{a_{i+1}}\}\), \(\bar{A}=\{\bar{a}\in[1,n]^{k};\forall i\in[1,k-1],d_{\bar{a}_{i}}\geq d_{\bar{a}_{i+1}}\}\) and \(B=\{b\in[1,n]^{t-p-k};\forall i\in[1,t-p-k-1],d_{b_{i}}\geq d_{b_{i+1}}\}\).
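Admissible PPPs can be generated directly as non-increasing tuples, which is how the redundancies mentioned above are avoided in practice. The sketch below is only an illustration of this enumeration; the data layout and function name are ours, not the solver's.

```python
from itertools import combinations_with_replacement

def admissible_ppps(n, t, p, k, d):
    """Enumerate admissible PPPs for a fixed k: tuples a, abar, b of city
    indices (1..n) listed by non-increasing distance d."""
    order = sorted(range(1, n + 1), key=lambda i: -d[i])        # cities by decreasing d
    abars = list(combinations_with_replacement(order, k))
    bs = list(combinations_with_replacement(order, t - p - k))
    for a in combinations_with_replacement(order, k + p):
        for abar in abars:
            for b in bs:
                yield a, abar, b
```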
**Number of admissible PPPs**: Let \(K_{k}^{m}\) be the set of \(k\)-multicombinations (or multi-subsets of size \(k\)) with elements in a set of size \(m\). The cardinality of \(K_{k}^{m}\) is \(\Gamma_{k}^{m}={m+k-1\choose k}\). As \(A\) is in bijection with \(K_{k+p}^{n}\), \(\bar{A}\) with \(K_{k}^{n}\) and \(B\) with \(K_{t-p-k}^{n}\), the number of PPPs is \((t-p)\Gamma_{k+p}^{n}\Gamma_{k}^{n}\Gamma_{t-p-k}^{n}\), i.e., \((t-p){n+k+p-1\choose k+p}{n+k-1\choose k}{n+t-p-k-1\choose t-p-k}\).

**Cost of a PPP**: Given the PPP \(\psi_{0}=(a,\bar{a},b)\in A\times\bar{A}\times B\), the cost of **any** plan using only the cities in \(a\), \(\bar{a}\) and \(b\) is uniquely defined by \(\text{Cost}(\psi_{0})=\sum\limits_{a_{i}\in a}c_{a_{i}}+\sum\limits_{\bar{a}_{i}\in\bar{a}}c_{\bar{a}_{i}}+2\sum\limits_{b_{i}\in b}c_{b_{i}}\).

**Makespan of a PPP**: The makespan of a PPP is thus that of the shortest schedule that uses its \(4t-2p\) edges in a feasible way. Trivial upper and lower bounds for the shortest makespan of a PPP \(\psi_{0}\) are, respectively, \(M_{S}(\psi_{0})\), the makespan of the sequential plan (i.e., that of the plan for a single plane that would carry all persons one by one), and \(M_{L}(\psi_{0})\), the makespan of the perfect plan where none of the \(p\) planes would ever stay idle and the load can be perfectly shared between the planes. These bounds are useful to prune the set of PPPs:

\[M_{S}(\psi_{0})=\sum\limits_{a_{i}\in a}(d_{a_{i}}+\bar{d}_{a_{i}})+\sum\limits_{\bar{a}_{i}\in\bar{a}}(\bar{d}_{\bar{a}_{i}}+d_{\bar{a}_{i}})+2\sum\limits_{b_{i}\in b}(d_{b_{i}}+\bar{d}_{b_{i}})\]
\[M_{L}(\psi_{0})=\frac{M_{S}(\psi_{0})}{p}\]

\(\Psi\)**-domination**: Given two PPPs \((k,\psi_{0})\) and \((k^{\prime},\psi_{0}^{\prime})\), \(\psi_{0}\) \(\Psi\)_-dominates_ \(\psi_{0}^{\prime}\) if \(M_{S}(\psi_{0})\leq M_{L}(\psi_{0}^{\prime})\) and \(Cost(\psi_{0})\leq Cost(\psi_{0}^{\prime})\). \(\Psi\)-domination is different from the standard domination as it occurs in a different space: we do not compare plans but the PPP structures, which can individually lead to several plans. Note that if \(\psi_{0}\stackrel{{\Psi}}{{\succeq}}\psi^{\prime}_{0}\), there is no need to compute the shortest makespan for \(\psi^{\prime}_{0}\), because any plan in \(\mathcal{P}(k^{\prime},\psi^{\prime}_{0})\) is dominated by any plan in \(\mathcal{P}(k,\psi_{0})\).

### Computing the Shortest Makespan and Constructing the Plan

The method to compute the optimal makespan for a particular PPP \(\psi_{0}\) is broken down into four steps. All the steps are performed greedily, allowing a resolution per PPP in time linear in the size of \(\psi_{0}\). After detailing these steps, we will give a constructive proof that the obtained makespan is optimal.

If a non-symmetric instance is highly imbalanced in durations, while performing a \(B\bar{B}\), there is a chance that the plane performing \(\bar{B}\) will have to wait. This implies that its makespan is not the sum of the durations of its patterns, but the moment the passenger arrives in the central city plus the remaining duration of its track. For this reason, we denote by \(T_{i}\) the moment passenger \(i\) arrives in the central city she goes through.

1. For each city \(i\) in \(b\), greedily distribute, by descending order of \(d_{i}+\bar{d}_{i}\), the duration \(2\max\ (d_{i},\bar{d}_{i})\) among the planes. If \(d_{i}>\bar{d}_{i}\), add \(C_{G}\to C_{i}\to C_{G}\) to the sequence of the plane, otherwise add \(C_{I}\to C_{i}\to C_{I}\).
If there is already a sequence from \(C_{I}\) (resp. \(C_{G}\)), add the new one to the right (resp. left) of the existing ones.

2. Greedily distribute the \(p\) largest elements of \(a\) among the \(p\) planes. Note that each plane must receive one duration, due to the fact that each plane should finish in \(C_{G}\). For each plane, the sub-sequence \(C_{I}\to C_{i}\to C_{G}\) is to be added between the western and eastern parts of the pattern induced by \(b\) distributed in the first step.

3. While there remain some elements in \(a\) and \(\bar{a}\), select the plane with the minimal duration and add the largest element of \(a\) or \(\bar{a}\), depending on the previous element it received (\(a\) if \(\bar{a}\), and vice versa). For an element of \(a\) (resp. \(\bar{a}\)), add \(C_{I}\to C_{i}\to C_{G}\) (resp. \(C_{G}\to C_{i}\to C_{I}\)) right before the sequence added during the second step.

4. For each city \(i\) in \(b\), greedily distribute, by descending order of \(d_{i}+\bar{d}_{i}\), the duration \(2\min\ (d_{i},\bar{d}_{i})\) among the planes. The rules to add the sub-sequences are the same as in the first step. If \(d_{i}<\bar{d}_{i}\), assign to the plane the makespan \(\max(D(p),T_{i})+\bar{d}_{i}\), where \(D(p)\) is the duration of the partial track up to the moment the plane arrives in \(C_{i}\).

The optimal makespan for the given PPP is the longest duration among the \(p\) planes.

**Proposition**: For a given PPP \(\psi_{0}\) and a given pairing set \(\beta\), the algorithm returns the optimal makespan.

**Proof for the symmetric case**: The incompressible time to transport all passengers according to a given PPP is \(T(\psi_{0})=2\underset{i\in b}{\sum}(d_{i}+\bar{d}_{i})+\underset{i\in\{a,\bar{a}\}}{\sum}(d_{i}+\bar{d}_{i})\). A theoretical optimal plan with this pattern repartition is a plan without any waiting point for any plane. The above algorithm gives the optimal distribution of this set of durations among the \(p\) planes. Then, if a plan can be constructed with such a makespan, it is optimal for the PPP. As the algorithm constructs such a plan, we can conclude that it is optimal for the PPP, thus proving the proposition. \(\Box\)

**Proof for the non-symmetric case**: The above algorithm gives the optimal distribution of the set of durations among the \(p\) planes and provides a plan that minimizes the waiting time by starting the plan with the patterns that could lead to a waiting time. \(\Box\)

**Complexity**: For a given instance, the size of any PPP is \(2t-p\). Finding the best makespan for a PPP is linear in \(2t-p\). As a result, the complexity to solve an instance is given by \((t-p)\binom{n+k+p-1}{k+p}\binom{n+k-1}{k}\binom{n+t-p-k-1}{t-p-k}\mathcal{O}(2t-p)\).

### Adapting the method to \(B\bar{A}\bar{B}\) situations

A PPP cannot be a structure in which there are \(B\bar{A}\bar{B}\) situations. Otherwise, there might exist plans using the cities of such a PPP, with an additional city for pattern \(A\), such that the plan is non-dominated by any plan using only the \(2t-p\) cities of the PPP. However, the algorithm is still optimal even if the duration of flights does not depend only on the city but also on the type of pattern. For instance, we could arbitrarily decide that a pattern \(A\) going through \(C_{i}\) has a larger duration than the counterpart \(\bar{A}\) using the same city. In other words, there would be \(d_{i}^{X}\) and \(\bar{d}_{i}^{X}\) for any \(X\in\{A,\bar{A},B,\bar{B}\}\).
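Before describing that transformation, here is a small illustrative sketch of the greedy makespan computation of Section 3.2, restricted to the symmetric case where no waiting times can occur; it only balances the pattern durations over the \(p\) planes in the greedy order described above, and the data layout, function name and tie-breaking choices are our own simplifying assumptions (in particular, the strict alternation of \(a\) and \(\bar{a}\) elements in step 3 and the non-symmetric waiting-time handling of step 4 are omitted).

```python
# Illustrative sketch (symmetric case, d_i = dbar_i): greedily distribute the
# pattern durations of a PPP psi_0 = (a, abar, b) over p planes and return the
# longest total duration among the planes.

def greedy_makespan(a, abar, b, d, p):
    loads = [0.0] * p                                  # current duration of each plane

    def assign(duration):
        i = loads.index(min(loads))                    # least-loaded plane first
        loads[i] += duration

    # Step 1: for each city of b, the longer of its two legs (here 2*d[i]).
    for city in sorted(b, key=lambda i: -d[i]):
        assign(2 * d[city])
    # Step 2: the p largest A patterns, one per plane, largest to least-loaded plane.
    a_sorted = sorted(a, key=lambda i: -d[i])
    for plane in sorted(range(p), key=lambda i: loads[i]):
        loads[plane] += 2 * d[a_sorted.pop(0)]         # A: C_I -> C_i -> C_G
    # Step 3: remaining A and Abar patterns, largest first.
    for city in sorted(a_sorted + list(abar), key=lambda i: -d[i]):
        assign(2 * d[city])
    # Step 4: the remaining legs of the b cities.
    for city in sorted(b, key=lambda i: -d[i]):
        assign(2 * d[city])

    return max(loads)
```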
Using this idea of pattern-dependent durations, we can transform any instance with some \(B\bar{A}\bar{B}\) situations into an instance without any, such that the algorithm A1 is optimal. Consider an instance \(\Pi\) of the non-symmetric MultiZenoTravel problem. For each potential \(B\bar{A}\bar{B}\) situation going through \(C_{i}\) and \(C_{j}\), add a new city \(C_{k}\) such that:

\[d_{k}^{B}=2d_{i} \tag{3}\]
\[d_{k}^{\bar{B}}=2\bar{d}_{j} \tag{4}\]
\[d_{k}^{A}=\min(d_{i}+\bar{d}_{i},d_{j}+\bar{d}_{j}) \tag{5}\]
\[d_{k}^{\bar{A}}=\bar{d}_{i}+d_{ij}+d_{j} \tag{6}\]

This transformation is illustrated by Figure 10. The transformation has \(\mathcal{O}(n^{2})\) complexity and the resulting instance \(\Pi^{*}\) is a non-symmetric MultiZenoTravel instance without any \(B\bar{A}\bar{B}\) situation. More precisely, if there is a plan using a \(B\bar{A}\bar{B}\) that is optimal in \(\Pi\), then it is also optimal in \(\Pi^{*}\), but there exists a plan with the same makespan and cost without any \(B\bar{A}\bar{B}\) situation in \(\Pi^{*}\).

## 4 ZenoSolver

ZenoSolver is a C++ software dedicated to generating and exactly solving MultiZenoTravel instances. ZenoSolver computes the true Pareto Front using the algorithm described in Section 3. It outputs the corresponding PDDL file5, which can be directly used by most AI planners.

Footnote 5: Planning Domain Definition Language [11], almost universally used in AI Planning to describe domains and instances.

We implemented two versions of the algorithm to iterate over the set of PPPs, namely the _classic_ version, by reference to our previous work [10], and the _no-duplicate_ version. Algorithms 1 and 2 present a high-level view of both versions, respectively for classic and no-duplicate.

Figure 10: The \(B\bar{A}\bar{B}\) situation is transformed into a regular \(B\bar{B}\)-pairing through a new city and an \(\bar{A}\) with a null multiplicity. The cost of the new city \(C_{k}\) is \(c_{i}\) for \(B\), \(c_{j}\) for \(\bar{B}\) and \(c_{i}+c_{j}\) for \(\bar{A}\) or \(A\).

### Classic and no-duplicate

In the **classic version** (shown in Algorithm 1), we iterate over the set of \(t\)-tuples that represent the cities involved in patterns going eastward, and then over the set of \((t-p)\)-tuples representing the cities involved in patterns going westward. For each couple of tuples \((e,w)\), we compute the powerset of the intersection. This gives all the possibilities for a \(B\bar{B}\)-pairing based on the PPP. Finally, we iterate over this powerset and compute the lowest makespan for each triplet \((e\setminus\beta,w\setminus\beta,\beta)\). This version is simple to understand and generates PPPs in an approximately increasing order of cost, which allows for efficient pruning. However, the method still generates a set of duplicates that grows exponentially with the number of passengers. The duplicates appear both in the space of PPPs and in the set of plans (because some patterns \(A\) and \(\bar{A}\) can sometimes be swapped).

```
procedure Solver(n, t, p, d, dbar, D, c)
    for e in K_t^n do
        for w in K_{t-p}^n do
            C <- cost(e, w)
            B <- powerset(e ∩ w)
            for beta in B do
                M <- lowestMakespan(e \ beta, w \ beta, beta)
            end for
        end for
    end for
end procedure
```
**Algorithm 1** Classic version of ZenoSolver
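A compact transcription of the classic loop in Python is given below for illustration; the cost and lowestMakespan routines are assumed to be supplied by the caller (e.g., a makespan procedure such as the one sketched in Section 3.2), and the multiset handling via counters is our own choice, not the actual C++ implementation.

```python
from itertools import combinations_with_replacement, chain, combinations
from collections import Counter

def powerset(items):
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def classic_solver(n, t, p, cost, lowest_makespan):
    """Sketch of Algorithm 1: iterate over eastward/westward multicombinations,
    then over every candidate (B,Bbar)-pairing beta taken from their intersection."""
    cities = range(1, n + 1)
    results = []
    for e in combinations_with_replacement(cities, t):          # eastward patterns
        for w in combinations_with_replacement(cities, t - p):  # westward patterns
            c = cost(e, w)
            common = list((Counter(e) & Counter(w)).elements())  # multiset intersection
            for beta in powerset(common):
                a = list((Counter(e) - Counter(beta)).elements())
                abar = list((Counter(w) - Counter(beta)).elements())
                m = lowest_makespan(a, abar, list(beta))
                results.append((c, m))
    return results
```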
The **no-duplicate version** adopts a different view on the construction of PPPs. The main loop iterates over the set of tuples \(u\) of size \(2t-p\). Then, we generate all possible subsets of \(p\) elements, without duplicates. This implements the constraint that each of the \(p\) planes will perform a pattern \(A\). Let us denote this set by \(P\). For each \(m\in P\), there remains a tuple \(v=u\setminus m\) of size \(2(t-p)\). With \(v\), we apply the same method as in the classic algorithm: we compute the set of cities possibly involved in a \(B\bar{B}\)-pairing and generate its powerset. Iterating over the powerset, we compute the lowest makespan for each triplet \((m,v,\beta)\).

At first sight, the two algorithms are similar, except that the no-duplicate version "blocks" the first \(p\) occurrences of pattern \(A\). However, this difference decreases the computation time by an exponential factor in several ways: 1) there is no possible duplicate, which decreases the computation time with \(n\) by comparison to the classic version; 2) the powerset for the possible \(B\bar{B}\)-pairing is computed on a smaller set; 3) increasing \(p\) decreases the cardinality of \(v\). Also, as there is only one main loop, it makes it easier to implement efficient parallelism. The drawback is that the PPPs are no longer generated in an approximately increasing order of cost.

The _classic_ version seems more efficient in terms of effective computational time on problems with a small number of passengers, while the _no-duplicate_ one is faster with a growing number of passengers or a compromise between cities and passengers. Also, all other things being equal, the no-duplicate version becomes faster when \(p\) increases, while the computational time for the classic version remains unchanged. See Section 4.3 for further details.

Both algorithms are based on two costly operations: 1) generating all multicombinations of \(k\) elements among \(n\) elements with repetitions, and 2) generating the powerset of a given set of elements. For 1), the implementation follows the one proposed in [10]. Regarding 2), we used the Same Number of One Bits (Soob) technique [1], which operates bitwise by noticing that subsets of equal cardinality have the same number of one bits.

Using the \(\Psi\)-domination, ZenoSolver implements a pruning method that checks if the current PPP is dominated by any other PPP already stored. As noted, the optimal makespan is lower than or equal to the upper bound \(M_{S}\), leading to efficient pruning. Indeed, as PPPs are generated in an approximately increasing order [14], this avoids iterating over the whole set to check the domination criterion. Determining if the current PPP is dominated has complexity \(O(h)\), where \(h\) is the number of different total achievable costs. An obvious upper bound for \(h\) is given by \((2t-p)(\max_{i}(c_{i})-\min_{i}(c_{i}))\). However, in practice, \(S\) seems to have the same order of magnitude as the exact Pareto Front. In addition, \(S\) is the only structure kept in memory; thus, from this point of view, ZenoSolver turns out to be near-optimal regarding memory usage (see Table 1).

### Handling the non-symmetric instances

To handle non-symmetric instances, we need the following two additional steps:

1. We modified the algorithm such that we could specify a different duration and cost for each pattern based on a city. In practice, it does not change anything to the optimality of the algorithm, because we only consider the total duration of a pattern rather than individual flights within the pattern.

2. We added an additional preprocessing step prior to using the algorithm.
The preprocessing step consists in determining the possible \(B\bar{A}\bar{B}\) situations in the given instance. For each \(B\bar{A}\bar{B}\) situation, we add a new city as described in Section 3.3.

### Empirical Performances

All experiments have been performed using a VM running Ubuntu, equipped with a 12-core i7-9750H CPU @ 2.60 GHz, 64 GB of RAM and an NVMe SSD. In Figure 11, we report the time to solve an instance with \(d_{i}=\bar{d}_{i}=c_{i}=i\). On the left, we fixed \(n=3\) and, on the right, we fixed \(t=3\). In both cases, the number of planes has been fixed to 2. Three versions are displayed: original, no-duplicate and non-symmetric, the latter being a no-duplicate version that takes into account the \(B\bar{A}\bar{B}\) situations. There is no pruning for the last version because the \(\Psi\)-domination introduced in Section 3.1 does not hold for the non-symmetric version in case of \(B\bar{A}\bar{B}\) situations, due to the possibility of waiting times.

As expected, all curves are exponential in their respective parameters. The no-duplicate version allows solving similar problems about twice as fast as the classic version when the number of passengers increases. Conversely, the classic version provides a similar speed-up when the number of cities increases. However, when both \(n\) and \(t\) grow together, the no-duplicate version is clearly better, even with the overhead implied by dealing with the \(B\bar{A}\bar{B}\) situations. This is clear by looking at Tables 1 and 2, which report several metrics, for both versions, when \(n\) and \(t\) increase simultaneously. The number of generated elements is always one to three orders of magnitude lower compared to the classic version. Similarly, the number of calls to the costly routine lowestMakespan is one order of magnitude lower. Interestingly, the intrinsic cost of the no-duplicate version, with the overhead of handling \(B\bar{A}\bar{B}\) situations, is compensated for from \(n=t=10\) onward.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline n & t & p & Iterations & lowestMakespan calls & \(S\) Size & Front Size & Time \\ \hline 3 & 3 & 2 & 30 & 33 & 9 & 5 & 0ms \\ 4 & 4 & 2 & 350 & 408 & 19 & 10 & 0ms \\ 5 & 5 & 2 & 4410 & 6387 & 33 & 17 & 3ms \\ 6 & 6 & 2 & \(58\times 10^{3}\) & \(10\times 10^{4}\) & 51 & 26 & 79ms \\ 7 & 7 & 2 & \(79\times 10^{4}\) & \(19\times 10^{5}\) & 73 & 37 & 1657ms \\ 8 & 8 & 2 & \(11\times 10^{6}\) & \(34\times 10^{6}\) & 99 & 50 & 31.968s \\ 9 & 9 & 2 & \(15\times 10^{7}\) & \(63\times 10^{7}\) & 129 & 65 & 703.141s \\ 10 & 10 & 2 & \(22\times 10^{8}\) & \(11\times 10^{9}\) & 163 & 82 & 4:33h \\ \hline \end{tabular} \end{table} Table 1: Increasing simultaneously \(n\) and \(t\) with \(d_{i}=\bar{d}_{i}=c_{i}=i\) for the classic version.

### Examples of Instances

In Figure 12, we display some examples of the variety of Pareto Fronts that can be obtained by modifying the functions used to generate \(d\), \(\bar{d}\) and \(c\). The top left picture shows regular patterns with a uniform disposition of points. This instance is obtained with a cost \(c_{i}=log(i+1)\). The top right figure is obtained by using \(\bar{d}_{i}=i\) and also displays some patterns, but with a non-uniform point distribution over the front.
By using slightly more complex combinations of generators, it is possible to obtain non-regular fronts with non-uniform distributions, such as in the bottom two figures: on the left, \(\bar{d}_{i}=\sqrt{i}\) and \(c_{i}=log(i+1)\), while on the right, \(d_{i}=\sqrt{i}\), \(\bar{d}_{i}=log(i+1)\) and \(c_{i}=\frac{5}{3}i+i\bmod 2\).

## 5 Application: OpenFlight Data

To generate benchmarks with real data, we used the list of airports and routes from the OpenFlight database6. We filtered it to keep only the 50 largest airports with regard to the number of passengers per year. We selected as initial and goal airports the largest and second largest airports, namely Hartsfield Jackson Atlanta International Airport (ATL) and Beijing Capital International Airport (PEK). We then filtered the routes to keep only the existing routes between the remaining airports.

Footnote 6: [https://openflights.org/data.html](https://openflights.org/data.html)

For each route, we calculated the spherical distance between the two airports using the Haversine formula. This distance is used for the makespan. The cost of landing in an airport has been defined as follows: for a given airport \(C_{i}\), 1) compute the spherical distances \(d_{\text{ATL},i}\) and \(d_{i,\text{PEK}}\) between the airport and, respectively, ATL and PEK; 2) assign the inverse of the average distance, i.e. \(c_{i}=\frac{2}{d_{\text{ATL},i}+d_{i,\text{PEK}}}\).

Then, we generated all simple paths from ATL to PEK between the remaining airports using the existing routes, with a maximal path length of 4 cities. We filtered them to keep only the Pareto-efficient paths. To generate a symmetric version of MultiZenoTravel, we used the reduction presented in Section 2.4 on the remaining simple paths between ATL and PEK.

Figure 12: Different instances with \(n=7\), \(t=8\) and \(p=3\) and various \(d\), \(\bar{d}\) and \(c\).

The final symmetric instance has 15 central cities that correspond to 15 different non-dominated paths from ATL to PEK using a total of 12 airports. Some paths use only one intermediate airport (e.g. ATL -> DXB -> PEK), while some use two (e.g. ATL -> LAS -> SFO -> PEK) or three (ATL -> DFW -> LAS -> SFO -> PEK).

Figure 13: OpenFlight instance with \(n=15\), \(t=6\) and \(p\) from 2 to 5.

The instance with 6 travelers and 2 planes has a Pareto front made of 29 distinct objective vectors. The two extreme points (Cost, Makespan) are \((111505,11070)\) and \((88410,16410)\). These two points are obtained by using exclusively one airport, either Dubai (DXB) or Seattle (SEA), and do not perform any \(B\bar{A}\bar{B}\) pattern:

C. 88410 Mk. 16410 (5,5,5,5,5,5,5,)(5,5,){
ATL -> SEA -> PEK -> SEA -> ATL -> SEA -> PEK -> SEA -> ATL -> SEA -> PEK
ATL -> SEA -> PEK -> SEA -> PEK

C. 111505 Mk. 11070 (0,0,0,0,0,0,0,){
ATL -> DXB -> PEK -> DXB -> ATL -> DXB -> PEK -> DXB -> ATL -> DXB -> PEK
ATL -> DXB -> PEK -> DXB -> PEK

However, there are non-dominated plans using other airports, notably San Francisco (SFO), and performing \(B\bar{A}\bar{B}\) patterns:

C. 97212 Mk. 14238 (0,4,)(4,5,){0,4,4,}
ATL -> DXB -> ATL -> SEA -> PEK -> SFO -> PEK -> DXB -> PEK -> SFO -> PEK
ATL -> SFO -> ATL -> SFO -> ATL -> SFO -> ATL -> SFO -> PEK -> DXB -> PEK -> SFO -> PEK

Finally, for some Pareto-optimal plans, planes do not always perform the same number of flights:
C. 101854 Mk. 13645 (0,4,)(5,5,){0,0,5,}
ATL -> DXB -> ATL -> SEA -> PEK -> SFO -> PEK -> DXB -> PEK
ATL -> DXB -> ATL -> SEA -> PEK -> SEA -> PEK -> DXB -> PEK -> SEA -> PEK

Of course, in this example, the cost was set up arbitrarily and does not reflect any real cost, but one can imagine that the cost is defined to represent some tax or a sort of risk, e.g. linked to a certain infectious disease or an ongoing conflict. As expected, the front is composed of fewer points when the number of planes increases, as reported in Figure 13.

## 6 Conclusion and Perspectives

In this article, we extended our preliminary work [14] by relaxing its unrealistic assumptions. In particular, in this work, we only assume the triangular inequality to hold for the durations. First, we defined three types of MultiZenoTravel problems: the symmetric clique, the non-symmetric clique and the general version. The algorithm to identify and build Pareto-optimal plans relies on a single proposition. Due to the increasing complexity of these problems, we first proved the proposition for the symmetric problem and then extended the proof to the non-symmetric version. For the general version, we showed that any instance can be reduced to a clique instance.

From an implementation point of view, we presented an optimized version of ZenoSolver which allows tackling problems twice as big as the original version. We also implemented the non-symmetric version of the algorithm and demonstrated its performance and the effect of pruning. We demonstrated the diversity of Pareto fronts which can be obtained by changing the instance parameters. Finally, we provided a concrete application using real-life data. Using the OpenFlight database, we used ZenoSolver to find all the Pareto-optimal plans between the two largest airports in the world.

Besides the direct interest for route and schedule multi-objective optimization for air transport, we believe that the work presented in this paper can be useful in many regards, and in particular for the benchmarking and comparison of algorithms, and for exploratory landscape analysis. Moreover, existing multi-objective planning instances are, as far as we know, not offering the exact Pareto front, which only allows the comparison between solvers but does not allow characterizing how hard an instance is and how far from an optimal solution the solvers are. On the contrary, ZenoSolver is capable of returning the Pareto Front, with at least one plan for each point of the Pareto Front in the objective space. Future work should focus on returning the entire Pareto set, i.e. all feasible plans for any Pareto-optimal objective vector. Another possible improvement would consist in characterizing the graphs for which Proposition IV holds, because this would allow knowing the _general_ MultiZenoTravel instances for which the reduction to a _clique_ MultiZenoTravel can be done in polynomial time.
2302.01864
Machine Learning-based Early Attack Detection Using Open RAN Intelligent Controller
We design and demonstrate a method for early detection of Denial-of-Service attacks. The proposed approach takes advantage of the OpenRAN framework to collect measurements from the air interface (for attack detection) and to dynamically control the operation of the Radio Access Network (RAN). For that purpose, we developed our near-Real Time (RT) RAN Intelligent Controller (RIC) interface. We apply and analyze a wide range of Machine Learning algorithms to data traffic analysis that satisfy the accuracy and latency requirements set by the near-RT RIC. Our results show that the proposed framework is able to correctly classify genuine vs. malicious traffic with high accuracy (i.e., 95%) in a realistic testbed environment, allowing us to detect attacks already at the Distributed Unit (DU), before malicious traffic even enters the Centralized Unit (CU).
Bruno Missi Xavier, Merim Dzaferagic, Diarmuid Collins, Giovanni Comarela, Magnos Martinello, Marco Ruffini
2023-02-03T17:08:45Z
http://arxiv.org/abs/2302.01864v1
# Machine Learning-based Early Attack Detection Using Open RAN Intelligent Controller ###### Abstract We design and demonstrate a method for early detection of Denial-of-Service attacks. The proposed approach takes advantage of the OpenRAN framework to collect measurements from the air interface (for attack detection) and to dynamically control the operation of the Radio Access Network (RAN). For that purpose, we developed our near-Real Time (RT) RAN Intelligent Controller (RIC) interface. We apply and analyze a wide range of Machine Learning algorithms to data traffic analysis that satisfy the accuracy and latency requirements set by the near-RT RIC. Our results show that the proposed framework is able to correctly classify genuine vs. malicious traffic with high accuracy (i.e., \(95\%\)) in a realistic testbed environment, allowing us to detect attacks already at the Distributed Unit (DU), before malicious traffic even enters the Centralized Unit (CU). machine learning; mobile networks; 4G and 5G ## I Introduction The growing number of services that run on top of cellular networks pose new challenges in ensuring service availability. The common approach to service availability improvement involves network planning, resource allocation optimization, and network densification. However, the source of service outages is often not related to the network configuration but originates from different types of malicious service attacks [1, 2, 3, 4]. These security threats affect user satisfaction and result in financial losses. As highlighted by the authors of [5], security and privacy in cellular networks can be achieved at the air interface, the operator's internal network, and the inter-operator links. Depending on the type of attack (e.g., passive in which the attacker only listens to the traffic, active in which traffic is being modified or injected into the communication), detecting the threat can be a challenge. Passive attacks often result in privacy breaches [6], while active attacks result in service disruptions [7]. The focus of our work is on the early detection of active attacks in cellular networks. Recent publications have taken advantage of the processing power and programmability available in the new generation of switches to introduce new Intrusion Detection System (IDS) and Deep Packet Inspection (DPI) paradigms. Both approaches heavily rely on the use of Machine Learning (ML) techniques to allow the identification of the traffic [8, 9, 10, 11], to detect and to mitigate the volumetric Distributed Denial-of-Service (DDoS) attacks [12, 13, 14] inside communication networks. In cellular networks, it is very important to identify the malicious flow as early as possible to prevent it from reaching the Software-Defined Networking (SDN) architecture [15] or interrupting services on the edge of the network (e.g., mobile base station). As highlighted by the authors of [16], the volumetric DDoS attacks cannot be handled by the victim alone, and require help from the network. In terms of cellular networks, the traditional approach relies on the core network to deal with the malicious flows [17]. However, the fifth (5G) and sixth (6G) generations of cellular networks will move from inflexible and monolithic networks to agile and disaggregated architectures based on softwarization [18]. 
These changes provide a range of new and efficient methods for early detection and localization of the source of an attack, allowing the operator to cut off the malicious flow before it spreads through the network. Besides the softwarization and disaggregation, the OpenRAN Alliance [18] introduces openness and, more importantly, intelligence that reaches the edge of the network. The research in this area mainly focuses on optimizing radio resources and the dynamic reconfiguration of network entities to match the communication requirements [18, 19]. However, security implications will play a major role in making the architecture a future-proof alternative to traditional Radio Access Network (RAN) deployments [19]. Instead of the common approach of using the core network for DPI, we are carrying out early active attack detection in the RAN. For that purpose, we leverage the new OpenRAN architecture to collect air interface measurements and ML to perform traffic classification (i.e., attack detection). This allows us to detect the attack early on and stop it before it progresses through the network. Our main contributions include: * We developed a near-Real Time (RT) RAN Intelligent Controller (RIC) that allows us to collect measurements from open-source base stations (i.e., _srsRAN_) and dynamically adjust their configuration; * We identify the most important features on the physical and Medium Access Control (MAC) layer for detecting different types of Denial-of-Service (DoS) attacks; * We designed a ML model that allows us to classify different types of DoS attacks with high accuracy based on the air interface measurements collected by the near-RT RIC (i.e., physical and MAC layer measurements). ## II Architecture / Framework Our traffic classification framework was built around two main constraints: (1) latency requirement imposed by the near-RT RIC; and (2) low resolution of the features from the physical and MAC layers needed to describe the network traffic. Both constraints limit the choice of the ML classifiers, i.e., the goal is to train a model that can accurately classify the traffic within the defined latency requirements based on a limited set of features available in the physical and MAC layers. We have developed our own control plane (i.e., near-RT RIC) that allows us to collect measurements and to control the _srsRAN_ virtual Base Station (BS). The framework is implemented as an _xApp_ running on the near-RT RIC. Fig. 1 shows the required steps to deploy an online ML classifier within the proposed architecture. The first step in the process is to identify the frequency of the measurement collection and the type of measurements needed to perform the traffic classification. These steps are implemented in the _Learning Plane_. Additionally, the _Learning Plane_ is responsible for storing the data from the _Operational Plane_. The training of the initial ML model is done offline once enough data is collected from the _Operational Plane_. The first step in the training process is feature extraction. The _Feature Extractor_ component identifies the useful measurements and performs the data pre-processing before the training starts. Then, the _ML Model Training_ component trains the new model on the same time scale that was chosen for the measurement collection. The _ML Model Training_ component also ensures that the new model conforms to the processing requirements set by the near-RT RIC. 
Once the model is trained, and the target accuracy is achieved, the model can be deployed on the _Prediction Plane_. The _Prediction Plane_ performs multiple tasks: (i) pulls measurements from the _Operational Plane_; (ii) shares the collected measurements with the _Learning Plane_ for further model refinement; (iii) feeds data to the _Online ML Classifier_ for traffic classification; and (iv) depending on the traffic class sends commands to the BS in order to continue or terminate the service. The communication between the various components in our architecture is done through the _Databus_ of our control plane. The _Databus_ is implemented as a _ZMQ_ broker1. Footnote 1: [https://zeromq.org](https://zeromq.org) The _Operational Plane_ consists of the _srsRAN_ BS, the core network and our custom built RIC agent. The RIC agent allows us to share data and read commands with/from the _Databus_ of the near-RT RIC. Once the User Equipment (UE) connects to the BS, the RIC agent starts collecting measurements and sending them as asynchronous messages via the _Databus_ to the _Prediction Plane_. The _Prediction Plane_ performs the traffic classification and, if needed, sends control messages to the RIC agent. The RIC agent interprets these control messages and controls the operation of the BS accordingly (e.g., forward, prioritize, or drop packets from the UE). As highlighted earlier, only the initial model is trained offline. The _Learning Plane_ continuously collects data during the prediction phase and refines the initial model. In terms of ML, attacks are rare events, and as such they are hard to predict. Therefore, as we will present in Section IV, besides inference time, we mainly rely on the \(F1\)-Score to express the model performance. As a reminder the \(F1\)-Score is computed as: \[F1=\frac{2RP}{R+P} \tag{1}\] where \(R\) represents recall and \(P\) represents precision. This allows us to ensure that the model is penalised equally for being biased towards positive predictions as well as missing predictions. Once the \(F1\)-Score improves significantly, the new model gets deployed to the _Prediction Plane_. ### _Feature Extraction_ The traditional approach to identify malicious data flows is to extract features from packet headers. In cellular networks, this is usually performed in the core network. However, our goal is to identify and stop these data flows at the source, i.e., the edge of the network. This introduces additional challenges related to the features being less expressive (i.e., measurements collected from the physical and MAC layers don't contain packet headers). During the initial ML model training phase, the data collected from the BS gets labeled, and the feature extraction process starts. Table I describes the features used in the classification process. All presented features are used to describe the quality of the signal and the traffic volume. Fig. 1: Data collection and ML model training and inference framework. The channel quality is measured by the UE and reported back to the BS as the Channel Quality Indicator (CQI). The CQI index is a scalar value from \(0\) to \(15\) measured on the physical layer. The Modulation and Coding Scheme (MCS) is also a scalar that is determined based on the reported CQI. However, the mapping also depends on the amount of information that is being sent between the UE and BS. The range of values for the _MCS_ is between \(0\) and \(28\), meaning that there is no direct mapping between the _CQI_ and _MCS_. 
The Physical Uplink Shared Channel (PUSCH) Signal-to-Interference-plus-Noise Ratio (SINR) and Physical Uplink Control Channel (PUCCH) SINR metrics indicate how much stronger the desired signal is compared to the noise and interference in the two channels, i.e., PUSCH and PUCCH. These metrics provide information about the relationship between the channel conditions and the achieved throughput. Unlike the physical layer measurements mentioned above, the MAC layer statistics contain the amount of data and packets exchanged between the UE and BS, providing information about the traffic patterns of the services exchanging information through the network. These measurements include the data rate (_brate_), the number of delivered packets (_pkts ok_) and the number of dropped packets (_pkts nok_).

### _ML Model Training_

Once a large enough dataset is collected and the features extracted, we start the ML model training phase. Unlike the work presented in [8, 10, 11], in which the authors focus on traffic classification with limited computational resources, we do not struggle with those limitations. However, unlike the models presented in those papers, our model does not have access to intrinsic features (i.e., packet headers) and extrinsic features (i.e., inter-packet arrival time). This restricts us to working under the limitations highlighted in Section II (i.e., low latency and low resolution of the available features). The near-RT RIC imposes a \(1s\) latency threshold. In other words, the overall latency, including measurement collection, network delay between the BS and the controller, and the processing delay on the _databus_ and the _xApp_, has to be lower than \(1s\). Please notice that the latency requirement is for the whole control loop, i.e., both ways: measurements traveling from the BS to the RIC and control messages traveling in the other direction. Even though the scope of the paper does not include a detailed analysis of the best traffic classification models, in Section IV we included an analysis of the accuracy and performance of the most popular ML models that provide a suitable choice for online classification under the mentioned restrictions.

### _Online ML Classifier_

The traffic classification starts once the _ML Model Training_ component converges to a state which achieves the required accuracy within the defined latency bounds. The _Online ML Classifier_ pulls measurements from the _Databus_ and performs the traffic classification for each UE connected to the BS. Once the traffic class is identified, a command is sent to the BS based on predefined policies (e.g., forward, prioritize, or drop packets from the UE). Additionally, it shares the measurements with the _Learning Plane_ for further model refinement.

## III Proposed Approach

This section describes the implementation decisions and the interaction of the control components with the _srsRAN_ architecture (Section III-A). Sections III-B and III-C provide details about the traffic categories and the dataset collection.

### _Experimental Setup_

As shown in Fig. 2, our experimental setup consists of three main components: (1) RAN and core network; (2) near-RT RIC; and (3) non-RT RIC. The system is implemented in the OpenIreland research testbed infrastructure [20].

#### III-A1 **RAN and Core Network**

For the purpose of our experiments, we used Ettus Research Universal Software Radio Peripherals (USRPs) and the _srsRAN_ open-source 4G and 5G software radio library.
The _srsRAN_ software was modified, and we developed our own library, i.e., the _RIC Agent_, which is compiled with the _srsRAN_ software, allowing us to share measurements with the _Databus_ in the control plane and read/execute commands coming from the _xApps_. Our experimental network consists of \(3\) USRPs - \(2\) acting as UEs and \(1\) acting as a BS. These USRPs are connected to Virtual Machines (VMs) running on a Dell PowerEdge R440 server. One additional VM acts as the core network.

#### III-A2 **Near-RT RIC**

We developed a near-RT RIC that communicates with the RAN, allowing us to read measurements from the BSs and send commands based on the algorithms implemented in the _xApps_. The implementation consists of three main components: (1) the previously mentioned _RIC Agent_ that is compiled with the _srsRAN_ software; (2) a _Databus_ that is implemented as a _ZMQ_ broker, allowing us to exchange measurements and commands between the _xApps_ and BSs; and (3) the _xApps_ that allow us to implement control mechanisms that take measurements as inputs and make decisions based on optimization algorithms, ML models or predefined policies.

#### III-A3 **Non-RT RIC**

Unlike the near-RT RIC, we did not develop a non-RT RIC. For the purpose of our experiments, the non-RT aspects were performed through offline learning and hyperparameter tuning. Once the accuracy and classification delay of the ML model fell within the requirements set by the near-RT RIC, we deployed the model as an _xApp_.

Fig. 2: Experimental setup architecture describing the interfaces and communication between all components.

A closer look at Fig. 2 reveals that the total delay of the control loop \(T_{d}\) depends on the round trip transmission delay (i.e., network delay) between all nodes hosting different parts of the architecture \(t_{n}\), the processing time at the _Databus_ \(\delta_{d}\), and the inference time on the near-RT RIC \(\delta_{i}\). This can be expressed with Equation 2:

\[T_{d}=t_{n}+2\delta_{d}+\delta_{i} \tag{2}\]

The round trip network delay \(t_{n}\) can be expressed as:

\[t_{n}=2\delta_{bd}+2\delta_{dr} \tag{3}\]

where \(\delta_{bd}\) is the transmission delay between the BS and the _Databus_, and \(\delta_{dr}\) is the transmission delay between the _Databus_ and the near-RT RIC. To understand the control loop delay margins required to satisfy the near-RT control requirements, we provide delays for all involved components. The delay of the _Databus_ is \(\delta_{d}=45\mu s\) and the round trip network delay of our setup is \(t_{n}=670\mu s\). This results in a total control loop delay \(T_{d}=760\mu s+\delta_{i}\). Please notice that our round trip network delay is very low (i.e., \(<1ms\)) due to the setup being physically collocated. In Section IV, we will further investigate the inference time \(\delta_{i}\) to compute the margin for the overall round trip network delay.

### _Traffic Categories_

In order to collect a realistic dataset, we used well known tools to generate traffic in our network. We considered two traffic categories, i.e. _Benign_ and _Attack_. Each traffic category was represented by the following network applications:

* _Benign_: _Web Browsing_ and _Voice over IP (VoIP)_;
* _Attack_: _DDoS Ripper_, _DoS Hulk_ and _Slowloris_.

#### III-B1 **Web Browsing**

Web Browsing is characterized by the variable amount of data transmitted to the web server and received by the client.
Since the majority of internet applications use the Transmission Control Protocol (TCP), we generate _Web Browsing_ traffic by accessing the most visited websites online2 and navigating through them randomly.

Footnote 2: [https://en.wikipedia.org/wiki/List_of_most_visited_websites](https://en.wikipedia.org/wiki/List_of_most_visited_websites)

#### III-B2 **VoIP**

VoIP traffic has strict Quality of Service (QoS) requirements. It relies on the User Datagram Protocol (UDP) because voice traffic is less tolerant of delays than of packet drops. A typical voice call requires between \(20\) kbps and \(170\) kbps of guaranteed priority bandwidth. To simulate the voice application, we rely on _SIPp_3, which is an open-source traffic generator widely used to test VoIP services in real environments.

Footnote 3: [http://sipp.sourceforge.net/](http://sipp.sourceforge.net/)

#### III-B3 **DDoS Ripper**

_DDoS Ripper_ exploits webserver vulnerabilities by flooding the target with requests, cutting off the target or its surrounding infrastructure. It opens as many connections with the server as possible and keeps them alive, sending a high volume of trash headers. This packet flooding causes an unexpected traffic jam preventing legitimate traffic from reaching its destination. Our code is based on the _PyPI DDoS Ripper_4.

Footnote 4: [https://pypi.org/project/ddos-ripper/](https://pypi.org/project/ddos-ripper/)

#### III-B4 **DoS Hulk**

This attack relies on obfuscating the traffic as benign requests. However, it sends an unbearable load of TCP SYN messages and a flood of HTTP GET requests through a multi-threaded agent. Therefore, this attack requires considerable throughput. Our DoS Hulk script relies on _PyPI_5.

Footnote 5: [https://pypi.org/project/hulk/](https://pypi.org/project/hulk/)

#### III-B5 **Slowloris**

_Slowloris_ is a type of attack that opens multiple connections to the webserver and keeps them open as long as possible. It requires very low bandwidth to periodically send subsequent HTTP headers for each opened connection. This traffic pattern resembles benign communication by using legitimate packet headers to keep connections alive. We based our _Slowloris_ Python script on the _PySlowloris_6 development.

Footnote 6: [https://pypi.org/project/pyslowloris/](https://pypi.org/project/pyslowloris/)

To ensure that the network traffic behaves according to real-world patterns, our approach is based on the techniques recommended by [21].

### _Dataset Collection_

In order to train and validate our traffic classification _xApp_, we collected two datasets. The first was collected by running the traffic generator scripts on one UE, randomly switching between the different types of traffic patterns. These experiments required only \(2\) USRPs, i.e., one UE and one BS. The dataset was collected by the near-RT RIC. The second dataset was collected by randomly switching between different types of traffic on two different UEs. This setup involves \(3\) USRPs, i.e., two UEs and one BS. Again, the near-RT RIC was used to collect the measurements on the BS. In this case, the dataset includes the BS measurements, the traffic label and the label predicted by the _Online Classifier_. Table II shows the number of samples collected for the above-mentioned scenarios. Section IV provides details about fitting and evaluating the ML models in the training phase, and details about the performance of the _Online ML Classifier_.
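As an illustration of the dataset collection procedure described above, the following Python sketch randomly switches between the traffic generators on a UE while the near-RT RIC records measurements in the background; the script names, target address and timing used here are hypothetical placeholders and would need to be adapted to the local installation of the tools described in Section III-B.

```python
import random
import subprocess
import time

# Hypothetical wrapper commands for each traffic generator; the actual script
# names, paths and arguments depend on the local installation of the tools.
GENERATORS = {
    "web_browsing": ["python3", "browse_random_sites.py"],
    "voip":         ["sipp", "-sn", "uac", "10.45.0.1"],
    "ddos_ripper":  ["python3", "ddos_ripper.py", "-s", "10.45.0.1"],
    "dos_hulk":     ["python3", "hulk.py", "http://10.45.0.1"],
    "slowloris":    ["python3", "pyslowloris.py", "10.45.0.1"],
}

def run_random_traffic(rounds=50, duration_s=60):
    """Randomly switch between traffic patterns, one pattern per round."""
    for _ in range(rounds):
        label, cmd = random.choice(list(GENERATORS.items()))
        print(f"round label: {label}")   # ground-truth label for the dataset
        proc = subprocess.Popen(cmd)     # start the selected generator
        time.sleep(duration_s)           # let the near-RT RIC collect samples
        proc.terminate()                 # stop before switching traffic class
        proc.wait()

if __name__ == "__main__":
    run_random_traffic()
```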
## IV Experimental Results

In this section, we investigate the possibility of performing attack detection based on the limited features available in the RAN. The performance of the _xApps_ is evaluated based on the classification accuracy and the inference delay for the most popular ML algorithms used for traffic classification. As previously highlighted, the _xApp_ has to identify the traffic class within the acceptable delay constraints defined by the near-RT RIC, while still maintaining a high level of accuracy. Our evaluation considers six different ML classifiers, i.e., Support Vector Machine (SVM), k-Nearest Neighbors (k-NN), Decision Tree, Random Forest, Adaptive Boosting (AdaBoost) and Multilayer Perceptron (MLP). We use the dataset collected by connecting one UE to the BS for training. Therefore, the first column of Table II shows the number of samples used for the training of the classifiers for each traffic class. We use the second dataset (i.e., the second column in Table II) for validation and testing purposes. Due to the nature of real network traffic, different traffic classes can exhibit similar properties during short periods of time. Our analysis showed that in such scenarios, traffic classification models based on decision rules have an advantage due to their higher tolerance to outliers. Additionally, they have a lower inference delay \(\delta_{i}\) due to lower computational requirements. Table III shows the classification accuracy achieved by different ML algorithms. The Random Forest and Decision Tree algorithms outperformed the others with an accuracy of \(0.95\) and \(0.93\), respectively. As shown in Table III, the accuracy of the other algorithms is considerably lower, with the minimum being \(0.56\) for MLP. A closer inspection of Table III reveals that the inference time \(\delta_{i}\) of all tested classifiers falls within the acceptable bound set by the near-RT RIC. It ranges from \(1.25ms\) for the Decision Tree to \(3.05ms\) for the AdaBoost classifier. Due to the high accuracy discussed above and the fact that the inference time of all tested classifiers is acceptable, the Random Forest is the best choice for our use-case. We argue that the Decision Tree classifier would be the second choice considering that its inference time is much lower, with the accuracy being lower by \(2\%\). Considering the analysis presented above, we deployed the Random Forest classifier as an _xApp_ to the near-RT RIC. The chosen model consists of \(100\) decision trees with a maximum depth of \(15\). The minimum number of samples required to split an internal node is \(5\), and the minimum number of samples required to reach a leaf node is \(1\). The experiments were executed in our testbed with the setup described in Section III-A. Even though the real testbed setup exhibits variations in the channel quality, and therefore the achievable throughput, the deployed classifier was able to achieve the same classification accuracy that was achieved during the offline training and testing phase, i.e., \(0.95\). Table IV provides details about the precision, recall and \(F1\)-Score for all traffic classes used in our experiments. These metrics show that the proposed model can predict different traffic classes based on measurements taken from the physical and MAC layer. Fig. 3 provides further evidence of the network traffic classification quality. For the individual traffic classes the model achieves a \(0.93\) \(F1\)-Score (Fig. 3, left).
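For reference, a minimal scikit-learn sketch of the Random Forest configuration described above is shown below; the feature and label arrays are placeholders for the physical- and MAC-layer measurements and traffic labels collected in Section III-C, and loading them is deployment specific.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# Hyperparameters reported in the text: 100 trees, maximum depth of 15,
# at least 5 samples to split an internal node and 1 sample per leaf.
clf = RandomForestClassifier(
    n_estimators=100,
    max_depth=15,
    min_samples_split=5,
    min_samples_leaf=1,
)

def fit_and_report(X_train, y_train, X_test, y_test):
    """Fit on the single-UE dataset and report per-class precision/recall/F1."""
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))
```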
So far, we have focused on identifying the individual traffic classes. However, in many applications, the goal is to distinguish between _Benign_ traffic and an _Attack_. The right-hand side of Fig. 3 shows the confusion matrix for these two categories. It clearly shows that our classifier correctly identifies \(96\%\) of the _Attacks_ while achieving a very high \(0.96\) \(F1\)-Score.

Fig. 3: Confusion matrices for the individual classes (left), and the binary problem (right).

Due to the lack of comparable solutions in the RAN, we compare these results to IDS solutions deployed in the core network. Even though IDS solutions have an advantage due to the availability of packet headers, our Open-RAN approach achieves results comparable to the work in [8] and [11], in which the authors report a 97% \(F1\)-Score and 93% accuracy, respectively. The authors of [8] report that their IDS solution introduces an additional \(3.7\mu s\) of delay per packet. In contrast, the near-RT RIC does not interfere with the traffic flow, meaning that no additional delay is introduced to the network. Fig. 4 shows that after \(500ms\), more than \(90\%\) of the service executions will be classified correctly. This delay is due to the time needed for the traffic to normalize its behavior, i.e., all analyzed traffic patterns exhibit an initial transition phase in which they cannot be correctly classified. Please note that in our experiments we collect measurements every \(100ms\). This means that after \(5\) measurements and a negligible network delay \(t_{n}\) and _Databus_ processing delay \(\delta_{d}\) (see Section III-A), we correctly classify more than \(90\%\) of the traffic executions. As highlighted throughout the paper, real-time traffic classification is important for on-demand resource allocation, dynamic network reconfiguration and, in the case of the detection of potential network attacks, it is useful for IDSs. Motivated by the idea of real-time traffic classification, we measured the time needed to correctly label each traffic type. Table III shows that the inference time is \(\delta_{i}=2.86ms\). Once this value is added to the previously computed control loop delay, we get a total of \(T_{d}=760\mu s+2.86ms=3.62ms\), meaning that there is a high margin for the network delay (i.e., \(996ms\)) that still allows us to operate within the \(1s\) control loop delay as defined by the Open-RAN Alliance. In terms of detecting network attacks, this allows us to terminate the service for the malicious UE before any damage is done. Based on the proposed architecture and the availability of measurements, the detection of the attacks can be done at the Distributed Unit (DU), while the attack could be stopped at the Centralized Unit (CU). The BS can initiate the _Radio Resource Control (RRC) Connection Release_ procedure towards the UE, which effectively terminates the service for the malicious user by moving the UE from the _RRC_Connected_ to the _RRC_Idle_ state.
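A minimal sketch of this delay budget, based on Equation (2) and the values reported above, is shown below; it simply reproduces the quoted total control-loop delay and the remaining margin for the round trip network delay.

```python
# Control-loop delay budget from Equation (2) and Section III-A (in seconds).
T_MAX   = 1.0       # near-RT RIC control-loop limit
delta_d = 45e-6     # Databus processing delay
t_n     = 670e-6    # measured round trip network delay (collocated setup)
delta_i = 2.86e-3   # Random Forest inference time from Table III

T_d = t_n + 2 * delta_d + delta_i   # Equation (2)
network_margin = T_MAX - T_d        # headroom left for additional network delay

print(f"total control-loop delay: {T_d * 1e3:.2f} ms")            # ~3.62 ms
print(f"margin for network delay: {network_margin * 1e3:.0f} ms")  # ~996 ms
```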
In our experiments, we also extract the most important air interface measurements/features required to correctly classify the analysed network traffic. It is important to notice that these measurements don't provide details about the packet headers. For the purpose of testing the proposed architecture, we collected a realistic dataset in our testbed. The real setup allowed us to validate the performance of our ML model in an environment with varying channel conditions. Our results show that the proposed model generalizes well and it was able to correctly classify the traffic with high accuracy. The results indicate that early DoS attack detection is possible at the edge of the network, allowing us to isolate the malicious users and stop the attack before it does any damage to the rest of the network. Future work on this topic would include the implementation of the Real Time RIC, which operates on even lower delay requirements (i.e. up to \(10ms\)). This would also affect the type of ML model that could perform the inference. ## Acknowledgment Financial support from Science Foundation Ireland 17/C-DA/4760, 18/RI/5721 and 13/RC/2077_p2 is acknowledged. Financial support from Brazilian agencies: CNPq, CAPES, FAPESP/MCTI/CGLbr (PORVIR-5G 20/05182-3, and SAWI 20/05174-0), FAPES (94/2017, 281/2019, 515/2021, 284/2021, 1026/2022). CNPq fellows Dr. Martinello 306225/2020-4.
2304.03325
ChatGPT-Crawler: Find out if ChatGPT really knows what it's talking about
Large language models have gained considerable interest for their impressive performance on various tasks. Among these models, ChatGPT developed by OpenAI has become extremely popular among early adopters who even regard it as a disruptive technology in many fields like customer service, education, healthcare, and finance. It is essential to comprehend the opinions of these initial users as it can provide valuable insights into the potential strengths, weaknesses, and success or failure of the technology in different areas. This research examines the responses generated by ChatGPT from different Conversational QA corpora. The study employed BERT similarity scores to compare these responses with correct answers and obtain Natural Language Inference(NLI) labels. Evaluation scores were also computed and compared to determine the overall performance of GPT-3 \& GPT-4. Additionally, the study identified instances where ChatGPT provided incorrect answers to questions, providing insights into areas where the model may be prone to error.
Aman Rangapur, Haoran Wang
2023-04-06T18:42:47Z
http://arxiv.org/abs/2304.03325v1
# ChatGPT-Crawler: Find out if ChatGPT really knows what it's talking about

###### Abstract

Large language models have gained considerable interest for their impressive performance on various tasks. Among these models, ChatGPT, developed by OpenAI, has become extremely popular among early adopters, who even regard it as a disruptive technology in many fields like customer service, education, healthcare, and finance. It is essential to comprehend the opinions of these initial users, as it can provide valuable insights into the potential strengths, weaknesses, and success or failure of the technology in different areas. This research examines the responses generated by ChatGPT from different Conversational QA corpora. The study employed BERT similarity scores to compare these responses with correct answers and obtain Natural Language Inference (NLI) labels. Evaluation scores were also computed and compared to determine the overall performance of GPT-3 & GPT-4. Additionally, the study identified instances where ChatGPT provided incorrect answers to questions, providing insights into areas where the model may be prone to error.

_Keywords:_ ChatGPT, NLI, BERT, Corpus, ConversationalQA

## 1 Introduction

In recent years, large language models have revolutionized the field of natural language processing. These intricate neural network models can produce text with specific tones and content. They are trained on vast amounts of data to anticipate the most fitting text to continue a given prompt, resulting in a natural-sounding output. Among these models, ChatGPT, developed by OpenAI, has gained immense popularity due to its remarkable performance on various language tasks. ChatGPT is a large pre-trained language model that uses deep learning techniques to generate responses to natural language queries[1]. Its ability to understand and generate coherent responses has made it a valuable tool for a wide range of applications, including chatbots, language translation, and question-answering systems. ChatGPT differs from conventional chatbots in several ways: it has the ability to recall a previous conversation with users, decline unsuitable requests, and correct inaccurate responses. ChatGPT can provide detailed answers, suggestions, and explanations to complex queries, such as coding, optimization, and layout issues[2]. Owing to its superior capabilities, ChatGPT garnered more than one million users within the first week of its launch, surpassing other well-known online platforms. ChatGPT has been pre-trained on a massive corpus of text data and has shown a remarkable ability to generate human-like text in response to natural language inputs[3]. The pre-training process of ChatGPT involves three stages: unsupervised pre-training, supervised fine-tuning, and having a "human-in-the-loop" to fine-tune the model's ability to better understand human instructions. In the unsupervised pre-training stage, ChatGPT is trained on a massive dataset of text to learn the patterns and structure of natural language[4]. This process involves training the model on a diverse range of language tasks such as language modeling, masked language modeling, and next-sentence prediction. ChatGPT is a groundbreaking technology that has the potential to transform the way we interact with machines. It can be used for a variety of applications, including chatbots, language translation, and text summarization[5].
Numerous industries have already adopted the technology, including e-commerce, customer service, and healthcare, to provide personalized and efficient customer support[6]. This research paper aims to explore the performance of ChatGPT and its potential use in various domains. The study analyzes the accuracy and consistency of ChatGPT's responses on different datasets and investigates the areas where the model may be prone to error. We aim to analyze the reliability of ChatGPT's output for conversational QA tasks[7; 8; 9; 10]. To achieve this, we developed a pipeline that generates large-scale responses and conducted a thorough comparison between ChatGPT's responses and existing QA corpora. We calculated Jaccard[11], BLEU[12], ROUGE[13], NIST[14], METEOR[15], BART[16] and TER[17] scores of ChatGPT's responses to assess the agreement with the gold answers and the fluency of its output.

## 2 Background Study

For challenges involving natural language processing, foundation models are now a common research and application paradigm. As foundation models are trained on massive amounts of data, they significantly outperform earlier models on a variety of downstream tasks like sentiment analysis, question answering, automated diagnosis, logical reasoning, and sequence tagging. Earlier studies assessed ChatGPT in various ways[18]. A multi-task, multi-modal, and multilingual assessment of ChatGPT is proposed by [19]. They demonstrated that while ChatGPT performs adequately on the majority of tasks, it struggles on low-resource tasks. [20] provide comparable empirical assessments in 2023. [21] specifically conducted a number of assessments and found that ChatGPT performs poorly on fine-grained downstream tasks like sequence tagging. According to [22] and [23], ChatGPT and other large models are double-edged swords and should be monitored. The study of ethics is carried out in [24]. Applications in human-computer interaction (HCI) [25], education [26; 27; 28; 29], medicine [30], and writing [31] are discussed and reflected upon in [32]. To the best of our knowledge, there has not been much research done on Conversational QA corpora. Because conversational QA corpora aim to mimic human conversation, they often include a variety of conversational elements, such as small talk, humor, and emotion. This makes it more challenging for chatbots to reply, as they need to be able to understand not only the literal meaning of the words being spoken, but also the context, tone, and intent behind them. Furthermore, conversational QA often involves a level of ambiguity and uncertainty that is not present in standard QA. For example, in a conversation, one person may ask for clarification or further information, or they may express uncertainty or confusion. Chatbots need to be able to handle these types of situations and respond appropriately. The findings of this paper concern what ChatGPT does and does not do well on conversational QA corpora.

**Strengths:**

* _Ability to understand context_: ChatGPT is able to understand the context in which a question is being asked, and can generate responses that are appropriate to that context.
* _Handling of natural language_: ChatGPT is capable of understanding and generating responses in natural language, which makes the conversation more natural and engaging.
* _Flexibility_: ChatGPT can handle a wide variety of topics and questions, and can generate responses that are both informative and engaging.
**Weaknesses:**

* _Lack of specific knowledge_: While ChatGPT has a vast amount of knowledge, it may not have specific knowledge of certain topics, which can lead to inaccurate responses.
* _Lack of common sense_: ChatGPT does not have the same level of common sense as a human, which can lead to responses that are technically correct but do not make sense in the context of the conversation.
* _Difficulty with ambiguity_: ChatGPT has difficulty understanding and responding to ambiguous or unclear questions or statements, which can result in inaccurate or nonsensical responses.

## 3 Methodology

To conduct our study, we designed and implemented a pipeline that utilizes ChatGPT to generate large-scale responses. The pipeline consists of two main modules, namely the question generation module and the response generation module. The question generation module generates a diverse set of questions that represent typical conversational QA tasks. To ensure wide coverage of topics, we employed various techniques such as paraphrasing, augmentation, and sampling from existing QA corpora. The generated questions were then used to query ChatGPT to generate responses. The response generation module utilizes ChatGPT's language generation capabilities to generate responses to the questions posed by the question generation module. To ensure the quality of the responses, we employed various techniques such as beam search and top-k sampling. The generated responses were then evaluated for their relevance and specificity to the questions. To assess the effectiveness of our pipeline, we evaluated it on four widely used datasets, namely CoQA [7], DialFact [8], FaVIQ [9], and CoDAH [10]. These datasets are popular benchmarks for conversational QA tasks, covering a broad range of topics and domains. To evaluate the quality of ChatGPT's responses, we employed various metrics such as BLEU, METEOR, BART, NIST, Jaccard, ROUGE-L, and TER scores. These metrics measure the accuracy, fluency, and coherence of the generated responses. We also compared the performance of our pipeline against existing state-of-the-art models on these datasets. Our pipeline also demonstrated a high level of scalability and flexibility, enabling it to handle a wide range of conversational QA tasks. Overall, our study demonstrates the effectiveness of utilizing ChatGPT to generate large-scale responses for conversational QA tasks. Our pipeline provides a robust and scalable solution for generating high-quality responses, which can be utilized in a wide range of applications such as virtual assistants, customer service chatbots, and conversational agents.

## 4 Datasets

CoQA (Conversational Question Answering) is a dataset for developing and evaluating conversational question-answering systems[7]. The CoQA corpus consists of over 127,000 questions and answers and includes conversations between a human and a machine about a given passage of text. The conversations are designed to be similar to natural conversations, where the questioner can ask follow-up questions to clarify their understanding of the passage. DialFact is a natural language processing (NLP) dataset that was introduced in 2020[8]. It is designed for fact-checking in conversational settings, where the goal is to determine the truthfulness of claims made in a conversation. The dataset consists of 10,221 conversations, each of which includes a claim made by one participant and a response from the other participant indicating whether the claim is true, false, or unknown.
The conversations were collected from the internet and cover a wide range of topics, including politics, health, and science. The DialFact dataset is unique in that it is focused on conversational fact-checking, rather than traditional fact-checking of news articles or other written texts. This makes it well-suited for developing and evaluating conversational agents that can assist users in determining the truthfulness of claims made in a conversation. FaVIQ (Fact Verification in the Implicit Query) is a natural language processing (NLP) corpus designed for fact verification in conversational settings[9]. The dataset consists of 3,772 dialogues, each containing one or more implicit claims that need to be verified. The claims are related to a variety of topics, including science, politics, and entertainment. The dataset includes a mixture of true and false claims and is designed to be challenging for NLP models. CoDAH (COmmonsense Data for Automatic Humor recognition) is a natural language processing (NLP) corpus designed for developing and evaluating humor recognition models[10]. It was introduced in 2018 and is unique in that it focuses on commonsense humor recognition, rather than more straightforward forms of humor. The CODAH dataset consists of 38,269 short texts, each containing a humorous sentence or phrase. The dataset is split into two parts: a training set of 28,269 texts and a test set of 10,000 texts. The texts were collected from social media platforms and cover a wide range of topics, including sports, politics, and entertainment. The CODAH corpus is challenging for humor recognition models because it requires the models to have a strong understanding of commonsense knowledge and the ability to recognize subtle forms of humor. Table 1 shows the number of responses obtained from ChatGPT.

| CoQA | DialFact | FaVIQ | CoDAH |
| --- | --- | --- | --- |
| 991 | 3147 | 58 | 1887 |

Table 1: Number of responses obtained on each corpus.

## 5 Results

Our study evaluated the potential of ChatGPT for conversational QA tasks and identified its limitations. Our results showed that ChatGPT-3 generates high-quality responses, with an average BLEU score of 0.79 and an average ROUGE-L score of 0.53. However, we also observed that ChatGPT-3's responses could be generic and irrelevant, reducing their usefulness for practical applications. To address these limitations, we investigated the effectiveness of the newly released GPT-4 model in generating more relevant and specific responses. Our evaluation results showed that GPT-4 outperformed ChatGPT-3 in terms of accuracy, relevance, and consistency. The GPT-4 model showed significant improvements in generating more coherent and contextually relevant responses, making it a promising candidate for conversational QA tasks. Tables 2 and 3 show the evaluation scores of ChatGPT-3 and GPT-4 on various conversational QA corpora, including CoQA, DialFact, FaVIQ, and CoDAH. Our results indicate that GPT-4 achieves higher scores across all metrics, including BLEU, ROUGE-L, and METEOR, demonstrating its superiority in generating high-quality responses. Since these metrics are not perfect and do not always align with human judgments of text similarity or accuracy, human evaluation scores (1 - similar meaning, 0 - dissimilar meaning) were also calculated to obtain a more comprehensive assessment of the generated responses.
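As an illustration of how such automatic scores can be computed for a single response, consider the following minimal Python sketch; the paper does not specify which implementations were used, so the nltk, rouge_score and sentence-transformers packages and the chosen BERT model are assumptions made here for concreteness.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer
from sentence_transformers import SentenceTransformer, util

bert = SentenceTransformer("bert-base-nli-mean-tokens")   # assumed BERT encoder
rouge = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def score_response(reference: str, response: str) -> dict:
    """Compare a ChatGPT response with the gold answer."""
    bleu = sentence_bleu([reference.split()], response.split(),
                         smoothing_function=SmoothingFunction().method1)
    rouge_l = rouge.score(reference, response)["rougeL"].fmeasure
    emb = bert.encode([reference, response], convert_to_tensor=True)
    bert_sim = util.cos_sim(emb[0], emb[1]).item()
    return {"bleu": bleu, "rougeL": rouge_l, "bert_similarity": bert_sim}

# Example usage on a CoQA-style answer pair.
print(score_response("Cotton is a little white kitten.",
                     "Cotton is a small white kitten who lives in a barn."))
```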
Table 2: Evaluation scores of each Conversational QA corpus.

Table 3: Average BERT similarity score.

Despite the promising results of ChatGPT-3 and GPT-4, we also observed some limitations of these models. In particular, we noticed that ChatGPT-3's responses could be inconsistent and sometimes misleading, especially when answering the same question based on the same context. This inconsistency could reduce the reliability of the model for practical applications, where accurate and consistent answers are crucial. However, we found that GPT-4 addresses this issue and generates more consistent and reliable responses. Fig. 1 illustrates the improvement of GPT-4. In summary, our study demonstrates the potential of ChatGPT and its successor, GPT-4, in generating large-scale responses for conversational QA tasks. While ChatGPT-3 showed promising results, we identified limitations that need to be addressed. Our evaluation of GPT-4 showed significant improvements in generating more relevant, specific, and consistent responses, making it a promising candidate for conversational QA tasks. These findings have important implications for the development of conversational agents and virtual assistants that rely on natural language processing and understanding.

### Case Study

The development of natural language processing (NLP) systems has been accelerated by the emergence of large-scale pre-trained language models such as GPT-3 and GPT-4. These models are capable of generating fluent and coherent texts across various domains and tasks. However, evaluating the quality and performance of these models is not a trivial task. Different evaluation metrics have been proposed to measure various aspects of text generation, such as fluency, coherence, relevance, informativeness, diversity, and factual consistency. However, there is no consensus on which metrics are the most reliable and valid for comparing different models or settings. In this case study, the same context was given to GPT-3 and GPT-4 to illustrate how GPT-4 has been enhanced.

_Once upon a time, in a barn near a farmhouse, there lived a little white kitten named Cotton. Cotton lived high up in a nice warm place above the barn where all of the farmer's horses slept. But Cotton wasn't alone in her little home above the barn, oh no. She shared her hay bed with her mommy and 5 other sisters. All of her sisters were cute and fluffy, like Cotton. But she was the only white one in the bunch. The rest of her sisters were all orange with beautiful white tiger stripes like Cotton's mommy._

_Being different made Cotton quite sad. She often wished she looked like the rest of her family. So one day, when Cotton found a can of the old farmer's orange paint, she used it to paint herself like them. When her mommy and sisters found her they started laughing._

_"What are you doing, Cotton?!" "I only wanted to be more like you". Cotton's mommy rubbed her face on Cotton's and said "Oh Cotton, but your fur is so pretty and special, like you. We would never want you to be any other way". And with that, Cotton's mommy picked her up and dropped her into a big bucket of water._

_When Cotton came out she was herself again. Her sisters licked her face until Cotton's fur was all dry._

_"Don't ever do that again, Cotton!" they all cried. "Next time you might mess up that pretty white fur of yours and we wouldn't want that!" Then Cotton thought, "I change my mind.
I like being special"._

Table 4 shows the responses to queries while ChatGPT was manipulated.

## 6 Discussion

The findings of our study indicate that ChatGPT shows promise in the field of conversational QA, but also reveal the need for enhancements to increase the accuracy and specificity of its responses. To achieve this, future research could consider incorporating external knowledge sources, such as knowledge bases, and proposing methods for fact-checking ChatGPT-generated text. It may also be beneficial to investigate alternative approaches for fine-tuning ChatGPT for conversational QA tasks, as this could lead to favorable outcomes.

## 7 Conclusion

In this study, we conducted a thorough comparison between ChatGPT's responses and existing QA corpora to analyze the reliability and suitability of its output for conversational QA tasks. We developed a pipeline that generates large-scale responses and calculated BLEU, ROUGE, and TER scores of ChatGPT's responses. Our results suggest that ChatGPT has great potential for conversational QA tasks, but also highlight the improvements in the latest GPT-4 model. We hope that our study will contribute to the development of more effective and reliable conversational QA systems based on large-scale language models like ChatGPT.

Figure 1: Comparison of GPT-3 and GPT-4 performance on different corpora.
2303.10498
A structured input-output approach to characterizing optimal perturbations in wall-bounded shear flows
This work builds upon recent work exploiting the notion of structured singular values to capture nonlinear interactions in the analysis of wall-bounded shear flows. In this context, the structured uncertainty can be interpreted in terms of the flow structures most likely to be amplified (the optimal perturbations). Here we further analyze these perturbations through a problem reformulation that decomposes this uncertainty into three components associated with the streamwise, wall-normal and spanwise velocity correlations. We then demonstrate that the structural features of these correlations are consistent with nonlinear optimal perturbations and results from secondary stability analysis associated with streamwise streaks. These results indicate the potential of structured input-output analysis for gaining insight into both linear and nonlinear behavior that can be used to inform sensing and control strategies for transitional wall-bounded shear flows.
Chang Liu, Yu Shuai, Aishwarya Rath, Dennice F. Gayme
2023-03-18T20:53:26Z
http://arxiv.org/abs/2303.10498v1
A structured input-output approach to characterizing optimal perturbations in wall-bounded shear flows ###### Abstract This work builds upon recent work exploiting the notion of structured singular values to capture nonlinear interactions in the analysis of wall-bounded shear flows. In this context, the structured uncertainty can be interpreted in terms of the flow structures most likely to be amplified (the optimal perturbations). Here we further analyze these perturbations through a problem reformulation that decomposes this uncertainty into three components associated with the streamwise, wall-normal and spanwise velocity correlations. We then demonstrate that the structural features of these correlations are consistent with nonlinear optimal perturbations and results from secondary stability analysis associated with streamwise streaks. These results indicate the potential of structured input-output analysis for gaining insight into both linear and nonlinear behavior that can be used to inform sensing and control strategies for transitional wall-bounded shear flows. ## I Introduction Input-output analysis (e.g., \(\mathcal{H}_{2}\) and \(\mathcal{H}_{\infty}\) analysis of the spatio-temporal frequency response operator associated with the linearized Navier Stokes equations) has been widely employed to characterize energy amplification and the dominant structural features in both transitional and turbulent wall-bounded shear flows, see e.g. the review articles [1], [2], [3], [4]. A substantial benefit of these approaches in studying transitional flows versus eigenvalue analysis is their ability to capture transient growth mechanisms that have been shown to play a role in subcritical transition [5], [6], [7], [8]. For example, these methods have been used to highlight the importance of streamwise elongated structures, as well as to identify the spacing of the streamwise vortices and streaks, which play an important role in the dynamics of the flow, see e.g., [9], [6], [10], [11]. Jovanovic and Bamieh [12] used a decomposition of the input-output pairs mapping body forcing applied to each of the three velocity components to the componentwise energy density that allowed them to characterize the importance of cross-stream forcing in redistributing background mean shear across the channel height [12]. The associated analysis of streamwise streak formation provided new insight into the 'lift-up mechanism' that was already known to be important in energy growth and organization of the flow [13], [14], [3]. However, comparisons with studies based on direct numerical simulations (DNS) of the full nonlinear Navier-Stokes equations [15], experiments [16], and nonlinear methods [17], [18] have indicated that linear analysis tends to over-emphasize this lift-up effect [19], [14] and thereby obscure the contributions of streamwise varying structures. In order to provide a more complete characterization of the flow, a number of researchers have sought to include nonlinear effects in the input-output approach; e.g., through harmonic balance methods [20] that build upon techniques for analyzing systems with spatio-temporal periodic coefficients [21], [22], [23], [24], [25]. Nonlinearity has also been included in stability analysis using quadratic constraints within a linear matrix inequality formulations [26], [27], [28], [29], [30]. 
Liu & Gayme [31] proposed an alternative approach that employs an input-output model of the nonlinearity placed within a feedback interconnection with the linearized dynamics (in the spirit of a Lure decomposition [32], [33] of the problem [34], [2]). They then reformulate the spatio-temporal response operator to isolate the unknown input-output gain associated with the model of the nonlinearity as a structured singular value [35], [36], [37]. They refer to this approach as structured input-output analysis (SIOA), as the model of the nonlinearity restricts the'structured uncertainty' (gain operator) to be block diagonal to mirror the form of the nonlinearity in the Navier-Stokes equations. SIOA has been shown to provide the required weakening of the lift-up mechanism [31, SS3.3] that enables the analysis to recover the streamwise varying structures shown most likely to trigger transition in experiments, DNS and nonlinear optimal perturbation (NLOP) analysis [16], [17], [15], [18]. It has also been used to identify both the horizontal length scales and inclination angles associated with the oblique laminar-turbulent patterns observed in transitioning channel flow [31], [38], stably stratified plane Couette flows [39] and spanwise rotating plane Couette flow [40]. SIOA has also shown promise in reproducing characteristic time scales in channel flows [38], as well as in extracting dominant forcing and response modes [41] that can be used to inform control strategies. This work further develops the SIOA approach to provide additional physical insight regarding the block diagonal'structured uncertainty' operator, which describes the perturbations most likely to destabilize the feedback interconnection in the input-output sense [37]. In order to interrogate this operator, we first reformulate the feedback interconnection originally proposed in [31] to isolate the three velocity components associated with the structured uncertainty operator. We interpret the resulting full-block components as velocity correlations between two wall-normal locations associated with the largest structured response; a number of studies e.g., [42], [27], [43], [44] have shown the benefits of analyzing similar correlations in different contexts. We then study the component-wise velocity profiles and the wall-normal variation of these structures. We refer to these velocity fields as "optimal perturbations" in analogy with NLOP and linear analysis aimed at identifying perturbations leading to the largest energy growth [45]. The results demonstrate that the velocity correlations and the profile of the autocorrelation for plane Couette flow show a maximum magnitude near the channel center, which is consistent with the behavior of the eigenfunction associated with the secondary instability of streamwise streaks [15]. The real parts of these autocorrelations also show the same sign reversal near the channel center as the NLOP [46]. In plane Poiseuille flow, the destabilizing velocity autocorrelation instead vanishes near the channel center, which is consistent with both the behavior of the NLOP [47] and results analyzing optimal secondary energy growth [48]. In contrast, linear optimal perturbations of plane Poiseuille flow peak near the channel center; see e.g. the comparison in [47]. The agreement of these results with those from various nonlinear analysis approaches provides further evidence that behavior associated with nonlinear effects can be captured using SIOA-based approaches. 
In what follows, § II describes the feedback interconnection used to compute the components of the "optimal perturbations". The structured response and associated structured uncertainty are analyzed in §§ III and IV, respectively. We then conclude the paper and discuss future work in § V.

## II Structured input-output analysis

In this section, we describe the spatio-temporal frequency response operator of the linearized Navier-Stokes equations and formulate the model of the nonlinearity. We then place them in a feedback interconnection structure that enables us to analyze the perturbations that are most likely to induce transition using the structured singular value formalism [36], [37]. We consider two types of flow between two infinite horizontal parallel plates: plane Couette flow, which is driven by the relative motion of the plates, and plane Poiseuille flow, which is pressure driven. The coordinates \(x\), \(y\) and \(z\) respectively define the streamwise, wall-normal and spanwise directions. The three components of the velocity vector field \(\boldsymbol{u}_{T}(x,y,z,t)=\begin{bmatrix}u_{T}&v_{T}&w_{T}\end{bmatrix}^{\rm T}\) are respectively associated with the \(x\), \(y\) and \(z\) directions. For each flow configuration, we decompose the velocity field \(\boldsymbol{u}_{T}\) into a laminar base flow \((U,V,W)\), where \(V=W=0\) and \(U(y)=y\) for plane Couette flow and \(U(y)=1-y^{2}\) for plane Poiseuille flow, and fluctuations \(\boldsymbol{u}=(u,v,w)\) about the base flow. The dynamics of the velocity fluctuations \(\boldsymbol{u}\) are governed by the non-dimensional Navier-Stokes equations:

\[\partial_{t}\boldsymbol{u}+U\partial_{x}\boldsymbol{u}+v\,U^{\prime}\boldsymbol{e}_{x}+\boldsymbol{\nabla}p-\frac{1}{Re}\nabla^{2}\boldsymbol{u}=-\boldsymbol{u}\cdot\boldsymbol{\nabla}\boldsymbol{u}, \tag{1}\]

and \(\boldsymbol{\nabla}\cdot\boldsymbol{u}=0\). Here, \(U^{\prime}:=dU(y)/dy\) and \(p(x,y,z,t)\) denotes the pressure fluctuations associated with the decomposed pressure field \(p_{T}=P+p\). The Reynolds number is \(Re=U_{n}h/\nu\), where \(\nu\) is the kinematic viscosity and \(h\) is the channel half-height. Here, \(\pm U_{n}\) is the velocity at the channel walls for plane Couette flow and \(U_{n}\) is the channel centerline velocity for plane Poiseuille flow. We impose no-slip and no-penetration boundary conditions; i.e., \(\boldsymbol{u}(y=\pm 1)=\boldsymbol{0}\). In order to build the feedback interconnection of interest, we model the nonlinear terms (nonlinearity) \(-\boldsymbol{u}\cdot\boldsymbol{\nabla}\boldsymbol{u}\) in (1) as a forcing:

\[\boldsymbol{f}_{\xi}:=-\boldsymbol{u}_{\xi}\cdot\boldsymbol{\nabla}\boldsymbol{u}=\begin{bmatrix}-\boldsymbol{u}_{\xi}\cdot\boldsymbol{\nabla}u\\ -\boldsymbol{u}_{\xi}\cdot\boldsymbol{\nabla}v\\ -\boldsymbol{u}_{\xi}\cdot\boldsymbol{\nabla}w\end{bmatrix}=:\begin{bmatrix}f_{x,\xi}\\ f_{y,\xi}\\ f_{z,\xi}\end{bmatrix}. \tag{2a}\]

In this input-output model of the nonlinearity, \(-\boldsymbol{u}_{\xi}=-[u_{\xi},\,v_{\xi},\,w_{\xi}]^{\rm T}\) acts as a gain mapping the gradient of each velocity component to the corresponding component of the modeled forcing driving the linearized dynamics. We use the standard practice of transforming the forced linearized dynamics to a divergence-free reference frame, where the new states are the wall-normal velocity \(v\) and wall-normal vorticity \(\omega_{y}:=\partial_{z}u-\partial_{x}w\); see details in e.g., [9].
We then exploit the shift-invariance of these flows in the \((x,z,t)\) directions to perform a triple Fourier transform of \(v\), \(\omega_{y}\) and the components of the forcing model \(f_{x,\xi}\), \(f_{y,\xi}\), and \(f_{z,\xi}\), where for a variable \(\psi\),

\[\widehat{\psi}(y;k_{x},k_{z},\omega):=\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}\psi(x,y,z,t)e^{-{\rm i}(k_{x}x+k_{z}z+\omega t)}\,dx\,dz\,dt.\]

Here \({\rm i}=\sqrt{-1}\) is the imaginary unit, \(\omega\) is the temporal frequency, and \(k_{x}\) and \(k_{z}\) are the respective dimensionless \(x\) and \(z\) wavenumbers. The resulting equations describing the transformed linearized equations subject to the modeled forcing are

\[{\rm i}\omega\begin{bmatrix}\widehat{v}\\ \widehat{\omega}_{y}\end{bmatrix}=\widehat{\mathcal{A}}\begin{bmatrix}\widehat{v}\\ \widehat{\omega}_{y}\end{bmatrix}+\widehat{\mathcal{B}}\begin{bmatrix}\widehat{f}_{x,\xi}\\ \widehat{f}_{y,\xi}\\ \widehat{f}_{z,\xi}\end{bmatrix},\ \ \begin{bmatrix}\widehat{u}\\ \widehat{v}\\ \widehat{w}\end{bmatrix}=\widehat{\mathcal{C}}\begin{bmatrix}\widehat{v}\\ \widehat{\omega}_{y}\end{bmatrix}. \tag{3a,b}\]

The operators in (3) are defined in the standard way, see e.g. [12],

\[\widehat{\mathcal{A}}:=\widehat{\mathcal{M}}^{-1}\begin{bmatrix}\mathcal{L}_{11}&0\\ -\mathrm{i}k_{z}U^{\prime}&\mathcal{L}_{22}\end{bmatrix},\;\widehat{\mathcal{M}}:=\begin{bmatrix}\widehat{\nabla}^{2}&0\\ 0&\mathcal{I}\end{bmatrix}, \tag{4a}\]
\[\widehat{\mathcal{B}}:=\widehat{\mathcal{M}}^{-1}\begin{bmatrix}-\mathrm{i}k_{x}\partial_{y}&-(k_{x}^{2}+k_{z}^{2})&-\mathrm{i}k_{z}\partial_{y}\\ \mathrm{i}k_{z}&0&-\mathrm{i}k_{x}\end{bmatrix}, \tag{4b}\]
\[\widehat{\mathcal{C}}:=\frac{1}{k_{x}^{2}+k_{z}^{2}}\begin{bmatrix}\mathrm{i}k_{x}\partial_{y}&-\mathrm{i}k_{z}\\ k_{x}^{2}+k_{z}^{2}&0\\ \mathrm{i}k_{z}\partial_{y}&\mathrm{i}k_{x}\end{bmatrix}, \tag{4c}\]

where \(\mathcal{L}_{11}:=-\mathrm{i}k_{x}U\widehat{\nabla}^{2}+\mathrm{i}k_{x}U^{\prime\prime}+\widehat{\nabla}^{4}/Re\) and \(\mathcal{L}_{22}:=-\mathrm{i}k_{x}U+\widehat{\nabla}^{2}/Re\). The associated boundary conditions are \(\widehat{v}(y=\pm 1)=\frac{\partial\widehat{v}}{\partial y}(y=\pm 1)=\widehat{\omega}_{y}(y=\pm 1)=0\). We write the transformed forcing model as

\[\begin{bmatrix}\widehat{f}_{x,\xi}\\ \widehat{f}_{y,\xi}\\ \widehat{f}_{z,\xi}\end{bmatrix}=\boldsymbol{P}\;\widehat{\boldsymbol{u}}_{\Xi,c}\,\mathrm{diag}\left(\widehat{\boldsymbol{\nabla}},\widehat{\boldsymbol{\nabla}},\widehat{\boldsymbol{\nabla}}\right)\begin{bmatrix}\widehat{u}\\ \widehat{v}\\ \widehat{w}\end{bmatrix},\;\;\text{where} \tag{5}\]
\[\boldsymbol{P}:=\mathrm{diag}\left(\mathcal{I}_{1\times 3},\mathcal{I}_{1\times 3},\mathcal{I}_{1\times 3}\right), \tag{6}\]
\[\widehat{\boldsymbol{u}}_{\Xi,c}:=\mathcal{I}_{3\times 3}\otimes\mathrm{diag}\left(-\widehat{u}_{\xi},-\widehat{v}_{\xi},-\widehat{w}_{\xi}\right), \tag{7}\]

and \(\otimes\) denotes the Kronecker product. Here, \(\mathcal{I}_{1\times 3}:=[\mathcal{I},\mathcal{I},\mathcal{I}]\) and \(\mathcal{I}_{3\times 3}:=\mathrm{diag}(\mathcal{I},\mathcal{I},\mathcal{I})\), where \(\mathcal{I}\) is the identity operator and \(\mathrm{diag}(\cdot)\) indicates a block diagonal operation. This decomposition allows us to isolate a block-diagonal operator \(\widehat{\boldsymbol{u}}_{\Xi,c}\), which is the structured uncertainty that we will investigate in this work.
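To make the block-diagonal structure in (6)-(7) concrete after wall-normal discretization, the following minimal numpy sketch assembles the corresponding matrices for a small grid; the random blocks standing in for the discretized \(-\widehat{u}_{\xi}\), \(-\widehat{v}_{\xi}\) and \(-\widehat{w}_{\xi}\) are placeholders rather than quantities produced by the method.

```python
import numpy as np
from scipy.linalg import block_diag

Ny = 4                                   # small wall-normal grid for illustration
rng = np.random.default_rng(0)

# Placeholders for the discretized -u_xi, -v_xi, -w_xi blocks (each Ny x Ny).
u_xi, v_xi, w_xi = (rng.standard_normal((Ny, Ny)) for _ in range(3))

# Equation (7): u_{Xi,c} = I_{3x3} (Kronecker product) diag(-u_xi, -v_xi, -w_xi).
u_Xi_c = np.kron(np.eye(3), block_diag(u_xi, v_xi, w_xi))

# Equation (6): P = diag(I_{1x3}, I_{1x3}, I_{1x3}), with I_{1x3} = [I, I, I].
I_1x3 = np.hstack([np.eye(Ny)] * 3)
P = block_diag(I_1x3, I_1x3, I_1x3)

print(u_Xi_c.shape)   # (9*Ny, 9*Ny)
print(P.shape)        # (3*Ny, 9*Ny)
```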
The spatio-temporal frequency response operator of the system in (3), defined as \[\mathcal{H}(y;k_{x},k_{z},\omega):=\widehat{\mathcal{C}}\left( \mathrm{i}\omega\,\mathcal{I}_{2\times 2}-\widehat{\mathcal{A}}\right)^{-1} \widehat{\mathcal{B}} \tag{8}\] maps the input forcing to the velocity vector at the same spatial-temporal wavenumber-frequency triplet; i.e., \(\widehat{\boldsymbol{u}}(y;k_{x},k_{z},\omega)=\mathcal{H}(y;k_{x},k_{z}, \omega)\widehat{\boldsymbol{f}}_{\xi}(y;k_{x},k_{z},\omega)\). In order to isolate the structured uncertainty \(-\widehat{\boldsymbol{u}}_{\Xi,c}\) that we seek to analyze, we combine the linear gradient operator and matrix \(\boldsymbol{P}\) with the spatio-temporal frequency response \(\mathcal{H}\) to obtain a modified operator \[\widehat{\mathcal{H}}_{\nabla}(y;k_{x},k_{z},\omega)\!:=\!\mathrm{diag}\left( \widehat{\boldsymbol{\nabla}},\widehat{\boldsymbol{\nabla}},\widehat{ \boldsymbol{\nabla}}\right)\!\!\mathcal{H}(y;k_{x},k_{z},\omega)\boldsymbol{P}. \tag{9}\] Fig. 1(a) shows the feedback interconnection between \(\widetilde{\mathcal{H}}_{\nabla}\) in (9) (outlined with a red dashed line) and the structured uncertainty \(\widehat{\boldsymbol{u}}_{\Xi,c}\) in (7). The slight reformulations of this feedback interconnection structure and forcing expression (5) versus those in [31] provide a decomposition of \(\widehat{\boldsymbol{u}}_{\Xi,c}\) into the three components \(\widehat{u}_{\xi}\), \(\widehat{v}_{\xi}\) and \(\widehat{w}_{\xi}\), which enables analysis of characteristics associated with the three velocity components. The operators in equation (4) are discretized using the Chebyshev collocation method with derivatives computed using the Chebyshev differential matrices generated by the MATLAB routines of [49]. The number of wall-normal grid points is denoted as \(N_{y}\). Fig. 1(b) shows the discretized version of the feedback interconnection between the modified spatio-temporal frequency response and the structured uncertainty, where \(\widehat{\boldsymbol{H}}_{\nabla}\) and \(\widehat{\boldsymbol{u}}_{\Xi,c}\) respectively represent the numerical discretizations in the wall-normal direction of \(\widetilde{\mathcal{H}}_{\nabla}\) in (9) and \(\widehat{\boldsymbol{u}}_{\Xi,c}\) in (7). We characterize the perturbations associated with the most amplified flow structures under structured forcing in terms of the structured singular value associated with \(\widetilde{\boldsymbol{H}}_{\nabla}\). This quantity is defined following e.g., [36, definition 3.1]. **Definition 1**.: _Given the wavenumber and frequency triplet \((k_{x},k_{z},\omega)\), the structured singular value is defined as \(\mu_{\widetilde{\boldsymbol{\eta}}_{\Xi,c}}\left[\widetilde{\boldsymbol{H}}_{ \nabla}(k_{x},k_{z},\omega)\right]:=\)_ \[\frac{1}{\mathit{min}\{\widehat{\sigma}[\widetilde{\boldsymbol{u}}_{\Xi,c}] \,:\,\widetilde{\boldsymbol{u}}_{\Xi,c}\in\widetilde{\boldsymbol{U}}_{\Xi,c}, \,\mathit{det}[\boldsymbol{l}-\widetilde{\boldsymbol{H}}_{\nabla}\widetilde{ \boldsymbol{u}}_{\Xi,c}]=0\}}. \tag{10}\] _If no \(\widetilde{\boldsymbol{u}}_{\Xi,c}\in\widetilde{\boldsymbol{U}}_{\Xi,c}\) makes \(\boldsymbol{l}-\widetilde{\boldsymbol{H}}_{\nabla}\widetilde{\boldsymbol{u}}_{ \Xi,c}\) singular, then \(\mu_{\widetilde{\boldsymbol{u}}_{\Xi,c}}[\widetilde{\boldsymbol{H}}_{\nabla}] :=0\). 
Here, \(\widehat{\sigma}[\cdot]\) is the largest singular value, \(\mathit{det}[\cdot]\) is the determinant of the argument, and \(\boldsymbol{l}\) is the identity matrix._

Fig. 1: (a) Block diagram describing the SIOA feedback interconnection structure. Panel (b) redraws panel (a) after discretization.

The model of the nonlinearity prescribes the set that contains the structured uncertainty (i.e., the \(\widehat{\boldsymbol{u}}_{\Xi,c}\in\widetilde{\boldsymbol{U}}_{\Xi,c}\)) as:

\[\widehat{\boldsymbol{U}}_{\Xi,c}:=\Big{\{}\boldsymbol{l}_{3N_{y}}\otimes\mathrm{diag}\big{(}-\widehat{\boldsymbol{u}}_{\xi},-\widehat{\boldsymbol{v}}_{\xi},-\widehat{\boldsymbol{w}}_{\xi}\big{)}\,:\,-\widehat{\boldsymbol{u}}_{\xi},-\widehat{\boldsymbol{v}}_{\xi},-\widehat{\boldsymbol{w}}_{\xi}\in\mathbb{C}^{N_{y}\times N_{y}}\Big{\}}, \tag{11}\]

where \(\mathbf{l}_{3N_{y}}\in\mathbb{C}^{3N_{y}\times 3N_{y}}\) is the identity matrix with the corresponding size. However, this set involves repeated complex blocks, which is a constraint that cannot be enforced in the off-the-shelf mussv command in the Robust Control Toolbox [50] in MATLAB. In order to apply this toolbox, we relax the set containing the structured uncertainty to

\[\widetilde{\mathbf{U}}_{\Xi,c}:=\Big{\{}\operatorname{diag}\!\big{(}-\widehat{\mathbf{u}}_{\xi,1},-\widehat{\mathbf{v}}_{\xi,1},-\widehat{\mathbf{w}}_{\xi,1},-\widehat{\mathbf{u}}_{\xi,2},-\widehat{\mathbf{v}}_{\xi,2},-\widehat{\mathbf{w}}_{\xi,2},-\widehat{\mathbf{u}}_{\xi,3},-\widehat{\mathbf{v}}_{\xi,3},-\widehat{\mathbf{w}}_{\xi,3}\big{)}\,:\,-\widehat{\mathbf{u}}_{\xi,j},-\widehat{\mathbf{v}}_{\xi,j},-\widehat{\mathbf{w}}_{\xi,j}\in\mathbb{C}^{N_{y}\times N_{y}},\ \ (j=1,2,3)\Big{\}}. \tag{12}\]

We then compute

\[\|\widetilde{\mathcal{H}}_{\nabla}\|_{\mu}(k_{x},k_{z}):=\sup_{\omega\in\mathbb{R}}\,\mu_{\widetilde{\mathbf{u}}_{\Xi,c}}\left[\widetilde{\mathbf{H}}_{\nabla}(k_{x},k_{z},\omega)\right] \tag{13}\]

for each wavenumber pair \((k_{x},k_{z})\) using the mussv command. The arguments we employ include the state-space model of \(\widetilde{\mathbf{H}}_{\nabla}\) that samples the frequency domain adaptively. We use the 'Uf' algorithm option.

**Remark 2**.: _This uncertainty set in (12) corresponds to a BlockStructure argument comprising nine (not necessarily repeated) full \(N_{y}\times N_{y}\) complex matrices when using the mussv command. In other words, there is no imposed relationship between \(\widehat{\mathbf{u}}_{\xi,j},\widehat{\mathbf{v}}_{\xi,j},\widehat{\mathbf{w}}_{\xi,j}\) for different values of \(j=1,2,3\), and as discussed above this is the relaxation being made to enable computation using the existing MATLAB toolbox. Previous work suggests that this relaxed problem produces results consistent with nonlinear analysis [31, 39, 38, 40] but there may be further insights gained by extending the numerical techniques to enforce equality of these components. Such extensions based on the original formulation in [31] are described in [51]. Further quantification of the effect of this relaxation is a direction for future work._

## III Structured Response

Fig. 2 shows contour plots of \(\log_{10}[\|\widetilde{\mathcal{H}}_{\nabla}\|_{\mu}(k_{x},k_{z})]\) for plane Couette flow at \(Re=358\) and plane Poiseuille flow at \(Re=690\), in panels (a) and (b) respectively.
The results shown use 48 logarithmically spaced grid points in \(k_{x}\in[10^{-4},10^{0.48}]\), 36 logarithmically spaced grid points in \(k_{z}\in[10^{-2},10^{1.2}]\) and \(N_{y}=30\), which was shown to be adequate in the convergence study performed in [31]. As expected these results mirror those in [31, Figs. 4(a) and 5(a)], demonstrating that the modification to the feedback interconnection structure proposed here does not significantly affect the horizontal length scales associated with the maximal structured response. In particular, panel (a) indicates that the largest structured response corresponds to streamwise varying flow structures consistent with the oblique waves shown to require the least energy to induce transition in DNS studies [15]. The aspect ratios of these structures are consistent with NLOP [17] and experimental observations [16]. Panel (b) similarly identifies oblique flow structures with wavenumber pairs consistent with those identified as most likely to induce transition in DNS [15]. The large amplitudes in the peak region and lower left quadrant of Fig. 2(b) are consistent with the spatially localized structures identified through NLOP analysis [18]. For more detail regarding these figures, see [31, SS3]. ## IV Optimal Perturbation Structure We now analyze the characteristics of optimal perturbations in terms of structured uncertainty \(\widetilde{\mathbf{u}}_{\Xi,c}\). As discussed in the previous section our computation relaxes the problem such that the structured uncertainty comprises nine components \(\widehat{\mathbf{u}}_{\xi,j},\widehat{\mathbf{v}}_{\xi,j},\widehat{\mathbf{w}}_{\xi,j},(j =1,2,3)\). We are interested in the behavior of the three velocity components associated with the peak response in Fig. 2, so it would be preferable to have equal values for each of the three components for all \(j\) allowing us to extract a single value for \(\widehat{\mathbf{u}}_{\xi}\), \(\widehat{\mathbf{v}}_{\xi}\) and \(\widehat{\mathbf{w}}_{\xi}\). Since this is not the case, it is of interest to determine which set of the block diagonal velocity components, i.e., associated with \(j=1\), 2 or 3, is responsible for producing the peak values in Fig. 2. In an effort to estimate the relative contribution of each set \(j\) we alter the \(\mathbf{P}\) matrix to compute the response associated with the three block diagonal triplets \((\widehat{\mathbf{u}}_{\xi,j},\widehat{\mathbf{v}}_{\xi,j},\widehat{\mathbf{w}}_{\xi,j})\) as follows: \[\widetilde{\mathcal{H}}_{\nabla,1}:= \widetilde{\mathcal{H}}_{\nabla}\,\operatorname{diag}(\mathcal{I} _{3\times 3},\mathbf{0}_{3\times 3},\mathbf{0}_{3\times 3}), \tag{14a}\] \[\widetilde{\mathcal{H}}_{\nabla,2}:= \widetilde{\mathcal{H}}_{\nabla}\,\operatorname{diag}(\mathbf{0}_{3 \times 3},\mathcal{I}_{3\times 3},\mathbf{0}_{3\times 3}),\] (14b) \[\widetilde{\mathcal{H}}_{\nabla,3}:= \widetilde{\mathcal{H}}_{\nabla}\,\operatorname{diag}(\mathbf{0}_{3 \times 3},\mathbf{0}_{3\times 3},\mathcal{I}_{3\times 3}). \tag{14c}\] Here, the operator in (14a) corresponds to forcing only in the \(x\) momentum equation, i.e., setting \(\widehat{\mathbf{u}}_{\xi,j}=\mathbf{0},\widehat{\mathbf{v}}_{\xi,j}=\mathbf{0},\widehat{\mathbf{w }}_{\xi,j}=\mathbf{0},\ (j=2,3)\) in (12). 
Similarly, (14b) corresponds to setting \(\widehat{\mathbf{u}}_{\xi,j}=\mathbf{0},\widehat{\mathbf{v}}_{\xi,j}=\mathbf{0},\ \widehat{\mathbf{w}}_{\xi,j}=\mathbf{0},\ (j=1,3)\) and (14c) corresponds to \(\widehat{\mathbf{u}}_{\xi,j}=\mathbf{0},\widehat{\mathbf{v}}_{\xi,j}=\mathbf{0},\widehat{\mathbf{w}}_{\xi,j}=\mathbf{0},\ (j=1,2)\). Fig. 3 shows \(\|\widetilde{\mathcal{H}}_{\nabla,j}\|_{\mu}\) (\(j=1,2,3\)) for plane Couette flow. These results suggest that \(\|\widetilde{\mathcal{H}}_{\nabla,1}\|_{\mu}\) and \(\|\widetilde{\mathcal{H}}_{\nabla,3}\|_{\mu}\) show a similar peak region and overall behavior to \(\|\widetilde{\mathcal{H}}_{\nabla}\|_{\mu}\) in Fig. 2(a). Both \(\|\widetilde{\mathcal{H}}_{\nabla,1}\|_{\mu}\) and \(\|\widetilde{\mathcal{H}}_{\nabla,3}\|_{\mu}\) achieve peak values at the same \(k_{x}=0.22\) and \(k_{z}=0.67\) as \(\|\widetilde{\mathcal{H}}_{\nabla}\|_{\mu}\) in Fig. 2(a). This analysis suggests that the maximum structured response is associated with the sets \(j=1\) and \(j=3\), with the response associated with \(j=1\) having the most similarity to the overall response. A full analysis of how this relates to the input-output pathways in the equations is a topic of ongoing work. Fig. 4 shows \(\|\widetilde{\mathcal{H}}_{\nabla,j}\|_{\mu}\) (\(j=1,2,3\)) for plane Poiseuille flow. Comparing these results to those in Fig. 2(b) indicates similar trends to those seen for plane Couette flow, where the response associated with the first set in Fig. 4(a) most closely resembles that in Fig. 2(b). This response also has a peak value at the same \(k_{x}=0.65\), \(k_{z}=1.56\) as \(\|\widetilde{\mathcal{H}}_{\nabla}\|_{\mu}\) in Fig. 2(b). In this case, while the peak region of \(\|\widetilde{\mathcal{H}}_{\nabla,3}\|_{\mu}\) in Fig. 4(c) also resembles that of \(\|\widetilde{\mathcal{H}}_{\nabla}\|_{\mu}\), it achieves the peak value at \(k_{x}=0.81\) and \(k_{z}=1.56\). This analysis suggests that the maximal structured response is likely associated with the set \(\widehat{\boldsymbol{u}}_{\xi,1},\widehat{\boldsymbol{v}}_{\xi,1},\widehat{\boldsymbol{w}}_{\xi,1}\). We compute the matrix \(\widetilde{\boldsymbol{u}}_{\Xi,c}(y,y^{\prime};k_{x},k_{z},\omega)\in\widetilde{\boldsymbol{U}}_{\Xi,c}\) based on (9) and (12) using mussvextract in the Robust Control Toolbox [50], which outputs this value as \(\mathtt{VDelta}\). Our computations use \(N_{y}=120\) and all values in this section are multiplied by \(10^{3}\) for ease of visualization. The Clenshaw-Curtis quadrature [52, chapter 12] is implemented on the frequency response operator and structured uncertainty to ensure that the resulting \(\widetilde{\boldsymbol{u}}_{\Xi,c}\) is independent of the number of Chebyshev-spaced wall-normal grid points. The nine components of this block diagonal matrix \(\widetilde{\boldsymbol{u}}_{\Xi,c}\) are full blocks corresponding to an input-output mapping at two different wall-normal locations, which we denote as \(y\) and \(y^{\prime}\). These blocks can therefore be interpreted as spatial velocity correlations in the wall-normal direction. Based on the results in Figs. 3 and 4, we focus on the three elements \(\widehat{\boldsymbol{u}}_{\xi,1},\widehat{\boldsymbol{v}}_{\xi,1},\widehat{\boldsymbol{w}}_{\xi,1}\), although all nine components were computed.
In all of the results in this section, we use \(|\cdot|\) to denote the absolute value and employ \(\mathcal{R}e[\cdot]\) and \(\mathcal{I}m[\cdot]\) to indicate the respective real and imaginary parts of the complex entries of \(\widetilde{\boldsymbol{u}}_{\Xi,c}(y,y^{\prime};k_{x},k_{z},\omega)\). Fig. 5 shows the real and imaginary parts of the streamwise velocity correlations \(\widehat{\boldsymbol{u}}_{\xi,1}\) for plane Couette flow at \(Re=358\) for the wavenumber triplet \(k_{x}=0.22\), \(k_{z}=0.67\) and \(\omega=0\), approximately corresponding to the largest \(\|\widetilde{\mathcal{H}}_{\nabla}\|_{\mu}(k_{x},k_{z})\), i.e., the darkest region in Fig. 2(a). Here, we can see that the magnitude of \(\widehat{\boldsymbol{u}}_{\xi,1}\) shows a peak near the channel center \(y\approx 0\). This behavior is consistent with results from secondary instability analysis of streamwise streaks, which show that the least-stable mode is located at the center of the channel [15, Fig. 8]. Moreover, the real part, shown in Fig. 5(a), changes its sign at the channel center, which is also consistent with results from NLOP analysis showing streamwise velocities and streamwise vorticity reversing sign near the channel center [46, Fig. 7]. Fig. 6 shows the auto-correlation (i.e., the correlation at the same wall-normal location \(y^{\prime}=y\)) of \(\widehat{\boldsymbol{u}}_{\xi,1}\), \(\widehat{\boldsymbol{v}}_{\xi,1}\), and \(\widehat{\boldsymbol{w}}_{\xi,1}\). Here, we again find that these components show a peak absolute value near the channel center and the real part reversing sign at the channel center, consistent with observations from NLOP analysis [46]. Similar trends are observed in the autocorrelations of the other components, with the greatest similarities observed in \(\widehat{\boldsymbol{u}}_{\xi,3}\), \(\widehat{\boldsymbol{v}}_{\xi,3}\) and \(\widehat{\boldsymbol{w}}_{\xi,3}\), which is consistent with those being associated with a similar peak response in Fig. 3. Fig. 7 presents \(\widehat{\boldsymbol{u}}_{\xi,1}\) for plane Poiseuille flow at \(Re=690\) corresponding to the wavenumber pair \(k_{x}=0.65\), \(k_{z}=1.56\) showing the largest \(\|\widetilde{\mathcal{H}}_{\nabla}\|_{\mu}(k_{x},k_{z})\) in Fig. 2(b). The results are plotted for \(c=-\omega/k_{x}=0.53\), which is the phase speed leading to the largest \(\mu_{\boldsymbol{U}_{\Xi,c}}\left[\widetilde{\boldsymbol{H}}_{\nabla}(k_{x},k_{z},\omega)\right]\) at the wavenumber pair \(k_{x}=0.65\), \(k_{z}=1.56\). Fig. 7 shows the real and imaginary parts of \(\widehat{\boldsymbol{u}}_{\xi,1}\), which are shown to vanish at the channel center (\(y=0\)). This trend is consistent with observations from NLOP analysis of plane Poiseuille flow [47, Fig. 3] and analysis of optimal secondary energy growth [48]. Fig. 8 shows the corresponding autocorrelation (\(y^{\prime}=y\)), where all of the components illustrate a vanishing velocity at the channel center. In contrast, linear optimal perturbations of plane Poiseuille flow peak near the channel center; see e.g. the comparison in [47]. Similar behavior is also observed in the other six components, which is consistent with our conjecture based on Fig. 4 that the autocorrelations and structures associated with \(\widehat{\boldsymbol{u}}_{\xi,3}\), \(\widehat{\boldsymbol{v}}_{\xi,3}\) and \(\widehat{\boldsymbol{w}}_{\xi,3}\) show the greatest overall similarities.
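For completeness, extracting the \(y^{\prime}=y\) auto-correlation shown in Figs. 6 and 8 from a two-point correlation matrix, and locating the sign reversal of its real part, amounts to a few lines; the sketch below is our own illustration with placeholder inputs (`corr_matrix`, `y_grid`), not the authors' post-processing code.

```python
import numpy as np

def autocorrelation(corr_matrix):
    """Return the y' = y diagonal of a two-point correlation matrix."""
    return np.diag(corr_matrix)

def sign_changes(y_grid, values):
    """Wall-normal locations where the real part reverses sign."""
    s = np.sign(np.real(values))
    idx = np.where(s[:-1] * s[1:] < 0)[0]
    return 0.5 * (y_grid[idx] + y_grid[idx + 1])

# Placeholder example on a Chebyshev grid in y spanning [-1, 1]:
Ny = 120
y_grid = np.cos(np.pi * np.arange(Ny) / (Ny - 1))
corr_matrix = 0.5 * (y_grid[:, None] + y_grid[None, :]) + 0j   # stand-in data
print(sign_changes(y_grid, autocorrelation(corr_matrix)))      # ~ [0.]
```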
This difference between \(\widetilde{\boldsymbol{u}}_{\Xi,c}\) in plane Couette flow and plane Poiseuille flow is likely because the laminar base flow \(U(y)=y\) in plane Couette flow is odd symmetric over the wall-normal locations \(y\in[-1,1]\), while \(U(y)=1-y^{2}\) in plane Poiseuille flow is even symmetric. ## V Conclusions and future work This work builds upon recently introduced structured input-output analysis (SIOA) [31] to further examine transitional wall-bounded shear flows. Here we modify the SIOA feedback interconnection to decompose the structured uncertainty operator into block diagonal elements associated with the individual velocity components. We interpret the resulting full block structures as two-point spatial velocity correlations (in the wall-normal direction) associated with the optimal perturbations, i.e., those most destabilizing to the imposed feedback interconnection structure. In plane Couette flow, the magnitudes of these correlations show maximum values near the channel center, in accordance with results from secondary instability analysis of streamwise streaks [15]. The real parts of these correlations reverse sign near the channel center in a manner similar to NLOP [46]. For plane Poiseuille flow, the optimal perturbations show vanishing values near the channel center, consistent with NLOP [47] and results obtained from optimal secondary energy growth [48]. In contrast, linear optimal perturbations of plane Poiseuille flow peak near the channel center; see e.g. the comparison in [47]. The results provide further evidence that behavior associated with nonlinear effects can be captured through an SIOA framework. Applications of this framework to wider parameter regimes and flow configurations such as compressible flows [53] are directions of ongoing work. Extracting the corresponding optimal forcing and response mode shapes [41] is expected to provide insights for flow control applications. Another important direction of future study is to exploit the recent work of [51] to enforce equality of the velocity correlations. ## Acknowledgement The authors gratefully acknowledge partial support from the US National Science Foundation (CBET 1652244). C.L. would like to acknowledge the travel support from the Berkeley Postdoctoral Association (BPA) Professional Development Award.
2305.14570
Feed Me: Robotic Infiltration of Poison Frog Families
We present the design and operation of tadpole-mimetic robots prepared for a study of the parenting behaviors of poison frogs, which pair bond and raise their offspring. The mission of these robots is to convince poison frog parents that they are tadpoles, which need to be fed. Tadpoles indicate this need, at least in part, by wriggling with a characteristic frequency and amplitude. While the study is in progress, preliminary indications are that the TadBots have passed their test, at least for father frogs. We discuss the design and operational requirements for producing convincing TadBots and provide some details of the study design and plans for future work.
Tony G. Chen, Billie C. Goolsby, Guadalupe Bernal, Lauren A. O'Connell, Mark R. Cutkosky
2023-05-23T23:21:06Z
http://arxiv.org/abs/2305.14570v1
# Feed Me: Robotic Infiltration of Poison Frog Families ###### Abstract We present the design and operation of tadpole-mimetic robots prepared for a study of the parenting behaviors of poison frogs, which pair bond and raise their offspring. The mission of these robots is to convince poison frog parents that they are tadpoles, which need to be fed. Tadpoles indicate this need, at least in part, by wriggling with a characteristic frequency and amplitude. While the study is in progress, preliminary indications are that the TadBots have passed their test, at least for father frogs. We discuss the design and operational requirements for producing convincing TadBots and provide some details of the study design and plans for future work. Keywords: Biomimetic Robotic Animal Studies ## 1 Introduction Complex behavioral interactions govern animal social life, especially in relation to parental care and coordination critical for the survival of a species. The mimetic poison frog, _Ranitomeya imitator_, a monogamous poison frog native to the north-central region of eastern Peru, is biparental, meaning that both mothers and fathers must work together as a team for their offspring to have the greatest chances of survival [1, 2, 3]. _R. imitator_ fathers transport their tadpoles piggy-back style into pools of water situated in bromeliad plant cavities, which they visit and guard at least daily [1, 2, 3]. The father deposits one tadpole per pool because the tadpoles are cannibalistic to their sibling conspecifics as a consequence of their low-resource environments [3]. At any time, _R. imitator_ parents normally care for one to three tadpoles [22]. When the father observes that a tadpole needs to be fed, he calls for his partner to provision one to two unfertilized egg meals [15, 22]. The tadpoles signal that they are in need of nutritional resources by intensely wriggling, which both parents can observe [22]. When a frog makes contact with the pond, the tadpole also vibrates against the frog's abdomen to elicit care. Rather than using kin recognition, poison frogs use spatial memory of the pool sites to determine which tadpoles to provide with care [16, 17, 19]. We exploit this characteristic to add robotic tadpole infiltrators into poison frog families, to study parenting and explore which tadpole signals are relevant to care (Fig. 1). In other work, model frogs, robotic frogs, and even electrodynamic shakers have been used across multiple species to test social decision-making, including treefrogs and tungara frogs [3, 4, 5, 11, 14, 20]. In the present case, we are interested in producing tadpole-mimetic robots that can influence parental decision-making. In this context, the test for a robot is whether it can convince _R. imitator_ parents that it is a tadpole that needs to be guarded and fed. ### Characteristics of begging in _Ranitomeya imitator_ Begging as a form of parent-offspring communication has independently evolved multiple times across the vertebrate lineages and also within the Amphibia class [6, 11, 12, 21, 22]. However, research on what begging actually signals in poison frogs has reached conflicting conclusions. In the strawberry poison frog, _Oophaga pumilio_, which is a female uniparental system, only tadpoles of greater fitness beg, meaning that begging is a signal of quality rather than need [6]. In _R. imitator_, which are biparental and spread the parental burden of care, smaller and more nutritionally needy tadpoles beg more, suggesting that begging in _R. imitator_ is a signal of need [22].
These conflicting signals of quality versus need reflect a theoretical schism concerning the function of begging in parental decision-making. In either case, what has not been demonstrated is whether begging _intensity_ acts as a signal that influences parental care. To investigate whether begging intensity is a signal that influences parental effort, we need a tractable method of modifying single or multiple features of the begging signal. _R. imitator_ poison frogs are an ideal model to investigate what information begging signals contain, as poison frogs do not recognize individual offspring but remember the spatial locations of their nurseries [19]. Figure 1: Infiltration of Poison Frog Families with TadBots. (A) Frogs enter the pools of water with their heads facing away from the tadpole. (B) Young tadpoles, approximately 3/4 the length of a frog, approach the vent of the frog to beg. (C) Parents attempt to coordinate TadBot care. (D) Typical experiment chamber with camera. Therefore, we hypothesized that it would be possible to cross-foster biological offspring with robotic tadpole imposters. In order to maximize our chances with a robotic infiltrator, we first considered the sensory modalities that poison frogs likely employ when interacting with their offspring: olfactory, visual, and tactile. Nursery water for tadpoles is often dirty, as it contains detritus, algae, and dirt - all of which are desirable to poison frogs for hosting offspring in well-resourced environments with stagnant water. Studies suggest that poison frogs likely prioritize olfactory cues in decision-making about which pools of water to populate [18]. Other studies across Anurans have shown that touch, especially vibrational processing, is critical for life-or-death decision-making [6; 9]. In red-eyed treefrogs, vestibular mechanoreception dictates the escape-hatch response of embryos [10]. Mechanoreception presents an evolutionary response to detecting predators like snakes that may eat embryos growing on rainforest canopy leaves [10]. Early experimental studies showed that vibration - not olfaction or vision - was the necessary cue to stimulate egg feeding behavior [11]. In summary, these findings illustrate that vibration and touch are an important language for frogs, with the capacity to have different meanings in different social contexts. Begging in _R. imitator_ is a dazzling visual and tactile display. During six minutes of exposure to mothers, tadpoles can beg for 1-4 minutes, intensely wriggling and vibrating their bodies against a parent entering the nursery [22]. For comparison, studies have shown in other species of poison frog tadpoles that the mean duration of a begging bout is 12-15 seconds [6]. ### Design Requirements As noted above, olfactory, visual, and haptic (vibrational and tactile) cues evidently play a role in _R. imitator_ parenting. To match olfactory signals, the robot should function in tadpole-conditioned water, achieved when a tadpole has lived in the water for at least 24 h, supplemented with detritus such as waste from frogs and dead flies, which are common to tadpole nurseries. For scale, the neutrally buoyant body should be roughly 75%-100% of the length of a parent (adult size: 16.0-17.5 mm [2]) to proportionally mimic a tadpole between the Gosner stages 30-40 [8]. To encapsulate the body we require a soft material to match the feel of a tadpole's skin, which contains its viscera.
It is also desirable to match the tadpole's color, as poison frogs appear to rely on contrast for visual detection. Finally, we want to mimic the stereotypical begging motion in which the tail undulates with respect to the head, which also vibrates side-to-side. We desire to match the frequencies, amplitudes, and durations recorded in previous observations of tadpoles [6] and our own observations [5]. These parameters are summarized in Table 1 and govern the mechanism design in Section 2.1. ## 2 Methods ### Mechanism Design To meet the design requirements, we have designed and built TadBots that mimic the appearance and begging dynamics of _R. imitator_ tadpoles. The body of the TadBot has four major components (Fig. 2). To keep the body small, and isolate any noise and high-frequency motor vibrations from the nursery canister, the TadBot is driven remotely by a motor and crank mechanism that connects to the body using a 30 cm long tendon running through a soft plastic sleeve (Fig. 3C). The tendon acts upon a lever inside the TadBot body that rotates about a dowel pin as a pivot. An elastic band maintains tension and restores the lever position as the tendon relaxes. The TadBot body is suspended inside a water-filled plastic canister, the usual habitat of _R. imitator_ tadpoles in the laboratory. This is achieved by having the soft plastic sleeve glued to the TadBot body at one end and to the canister wall at the other end. The tendon terminates at the motor-crank and tensioner mechanism (Fig. 3). TadBots have a mass of 0.3 g and outer dimensions of 20 mm by 8 mm by 5 mm (LxWxD). For adjustment, the motor-crank assembly is mounted on a platform that slides on a fixed stand, and the relative position of the two can be adjusted \begin{table} \begin{tabular}{||c|c||c|c||} \hline Requirement & Range & Requirement & Range \\ \hline \hline overall length & \(\leq\)20 mm & mass & 0.15-0.3 g \\ \hline head major diameter & \(\approx\)8 mm & minor diameter & \(\approx\)6 mm \\ \hline oscillation freqs. & 5-25 Hz & amplitude & \(\approx\)5mm \\ \hline skin & dark gray/brown & hardness & \(\approx\)Shore A 00-20 \\ \hline \end{tabular} \end{table} Table 1: Required Parameters for TadBot Design Figure 2: Tadbot resides inside a plastic canister (A). The body (B) encloses a tail lever (D) that rotates about a pivot under the action of a motor-driven tendon (C) and restoring elastic band. by turning a tensioning screw to ensure that (i) the motor-crank assembly is providing enough range of motion to the oscillating lever and (ii) the tendon tension does not exceed the buckling strength of the plastic sleeve. ### Body and Skin A soft silicone skin approximates the texture and feel of tadpole skin when it comes into contact with a parent. A black pigment is mixed into Ecoflex 00-20 to match the dark gray skin tone of a tadpole. The skin is 1 mm thick and is made in pieces, shown in Fig. 4B. The top and bottom pieces are identical. The side piece is molded to provide the desired profile and enclose the moving parts. The tail is made from a two-part mold so that it can slip onto the end of the oscillating lever. The silicone pieces are glued together with cyanoacrylate adhesive (it is not necessary to form a watertight seal). Figure 4: TadBot fabrication: (A) Moving components (lever and rubber band) are contained in a SmoothOn Eco-Flex 00-20 skin. (B) The assembly is cast from multiple 3D printed molds and glued using cyanoacrylate adhesive. Figure 3: CAD rendering of TadBot system. 
TadBot is suspended inside a canister, mounted on a platform in the terrarium where the parents live. The actuation assembly is mounted remotely and consists of a motor-crank mechanism and tensioner. ### Actuation Dynamics A frequency and amplitude characterization was conducted to ensure that TadBot's tail achieves the desired wiggling displacement at the desired operating frequencies (Table 1). Two markers were drawn on the head of a TadBot, spaced \(4.2\,\mathrm{mm}\) apart, with a \(15^{\circ}\) offset from the transverse plane (Fig. 5). Then a line is drawn from the bisection point between these two points and the pivot point of the tail to establish the median line along the sagittal plane. An additional marker is placed at the tip of the tail, and the amplitude is measured between this marker and the median line. The results are plotted in Fig. 5. At frequencies below \(8\,\mathrm{Hz}\), consistent with gentle, non-begging swimming, the tail appears relatively free of inertial effects and undergoes a low amplitude oscillation. As the frequency increases above \(10\,\mathrm{Hz}\), there is some additional displacement due to the inertia of the silicone tail. The behavior, however, is not noticeably resonant, and the amplitude plateaus between \(15\) and \(28\,\mathrm{Hz}\), which covers the upper limit of observed begging frequencies in tadpoles (Table 1). Figure 5: Relationship between wiggling frequency and amplitude (A) of the tail displacement, measured with a camera at \(240\,\mathrm{fps}\) tracking markers on the head and tail. ### Experiment Setup and Procedure Multiple aquarium tanks are set up for the pair-bonded frog parents (Fig. 1). Tadpoles inside the habitat are swapped out with TadBots after successful feeding behaviors from the parents are observed. To normalize care efforts across pairs, the number of tadpoles was controlled by limiting the number of nurseries available for parents to deposit. Any extra tadpoles deposited were subsequently removed from the tank. The experiment setup is shown in Fig. 1D. All _R. imitator_ used in the laboratory study were captive-bred in our poison frog colony or purchased from Ruffing's Ranitomeya (Tiffin, Ohio, USA). One adult male and female are housed together in a \(45.72\) x \(30.48\) x \(30.48\) cm terrarium (Exoterra, Rolf C. Hagen USA, Mansfield, MA) containing sphagnum moss substrate, driftwood, live Pothos plants, horizontally mounted film canisters as egg deposition sites, and additional film canisters filled with water, treated with reverse osmosis (R/O Rx, Josh's Frogs, Owosso, MI) for tadpole deposition. Terraria were automatically misted ten times daily for 20 seconds each, and frogs were fed live _Drosophila melanogaster_ flies dusted with vitamin powder, thrice weekly. The tanks were also supplemented with _Folsomia candida_ and _Trichorhina tomentosa_. The observation housing was set on a 12:12 light cycle from 07:00 to 19:00 hrs. The average temperature and humidity were recorded for each day of observation, usually around 25\({}^{\circ}\)C and 95% humidity within the tank. The experiment is approved under Stanford APLAC protocol 34242. Wyze v3 cameras were adhered by velcro onto the side of the Exoterra tanks and suspended above the tadpole canisters, with the face of the camera approximately 17.5 cm above the bottom of the canister. Cameras were given 256 GB SD cards to store a month of recording. The camera observation methods are described in previous work [7].
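The marker-based amplitude measurement behind Fig. 5 reduces to a short geometric computation per video frame; the following sketch is our illustration, assuming pixel coordinates of the two head markers, the tail pivot, and the tail tip have already been extracted from the 240 fps footage (none of the function or variable names come from the paper).

```python
import numpy as np

def tail_amplitude(head_a, head_b, pivot, tail_tip):
    """Perpendicular distance of the tail tip from the median line.

    The median line runs from the midpoint of the two head markers through
    the tail pivot, matching the construction described for Fig. 5. All
    arguments are 2D points in consistent units (e.g. mm after scaling by
    the known 4.2 mm marker spacing). The returned value is signed, with
    the sign indicating the side of the median line.
    """
    head_mid = (np.asarray(head_a) + np.asarray(head_b)) / 2.0
    d = np.asarray(pivot) - head_mid          # direction of the median line
    d = d / np.linalg.norm(d)
    r = np.asarray(tail_tip) - head_mid
    # Signed perpendicular distance via the 2D cross product.
    return float(d[0] * r[1] - d[1] * r[0])

# Example frame: amplitude of one tracked sample (prints 4.8).
print(tail_amplitude((0.0, 2.1), (0.0, -2.1), (10.0, 0.0), (18.0, 4.8)))
```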
A motion detection notification is sent to the experimenters when the Wyze camera detects any frog movements. Then, based on the reaction of the frogs, and at the experimenter's discretion, TadBot is activated using one of two modes through the use of a mobile app. The two modes are intended to make TadBot more analogous to living tadpoles, with different paradigms of movement to reflect affiliative and neutral behaviors. The _swimming_ mode commands TadBot to intermittently wiggle its tail at 8 Hz (15 seconds on, 10 seconds off, repeat 3 times). The _begging_ mode issues a wiggling signal at 16 Hz with the same pattern. Using swimming mode versus begging mode enables the experimenter to test which frequencies a frog uses to make care decisions. The microcontrollers used in this experiment, Particle Argons, are connected to the cloud through local WiFi (Fig. 6). Figure 6: System diagram of the experiment. A camera observes the interaction between the frog parents and the TadBot. Once a motion is detected, a notification is sent to the experimenter, who can control the operation of TadBot through a mobile app. To determine the influence of begging on parental decision-making, poison frog families are placed into randomized trials after a biological tadpole has been deposited into a nursery and is confirmed to be fed at least once. Parents are exposed to, in randomized order: a cross-foster living tadpole (a positive control); a TadBot with no actuation assembly (negative control); and a TadBot with an actuation assembly (experimental group). Parents are provided with two weeks per experimental stimulus, the approximate time necessary to observe repeated bouts of paternal monitoring, and at least one bout of maternal provisioning. ## 3 Conclusions and Future Work We have described how a tadpole-mimetic robot, TadBot, was developed for studying the parenting behaviors of _R. imitator_ poison frogs. TadBots physically resemble _R. imitator_ tadpoles, operate in tadpole nurseries, and mimic the tadpoles' begging behavior, which includes vigorous tail wiggling at characteristic frequencies. To evaluate parenting response to begging intensity, an experiment involving multiple _R. imitator_ parents has been constructed in which TadBots are substituted for live tadpoles and controlled remotely using cameras and a mobile app to produce swimming or begging motions. A preliminary study is in progress with \(n=4\) parenting pairs. In all of these pairs we have observed on multiple occasions that fathers, after observing begging signals from TadBots, have begun to coordinate care, distinguished by calling and soliciting mothers to tadpole nurseries to provision them (Fig. 1C). The calls are consistent with those documented in [15]. Examples of the behaviors can be seen in videos posted at [http://bdl.stanford.edu/TadBot](http://bdl.stanford.edu/TadBot) to accompany this paper. Tadpoles beg to both their mothers and fathers [22]. Mothers decide to provision eggs based on signals that are not entirely clear but may include vibrational signals from the tadpoles (which may include physical contact) and acoustic signals from the fathers. Thus far we have observed the mothers visiting the begging TadBot, but no unfertilized eggs were deposited. More longitudinal work is necessary to determine quantitatively the amount of care that robotic tadpoles receive versus biological offspring. Ongoing work includes refinement of TadBots based on the preliminary study.
In the next generation we will employ a softer and more flexible tubing for the tendon, to allow more movement inside of the canister, in part so that TadBots can more convincingly vibrate their heads against the mothers. Another refinement will be to coat the skin with a hydrogel, as described in [13], to increase tactile realism. #### Acknowledgements. The authors acknowledge that this research was conducted on the ancestral lands of the Muwekma Ohlone people at Stanford. We thank the Laboratory of Organismal Biology for support, assistance, and input. We thank Dave Ramirez and Madison Lacey for their continued care of our poison frog colony. _Funding:_ T.G.C. was supported by an NSF Graduate Research Fellowship. B.C.G. was supported by an HHMI Gilliam Fellowship (GT15685) and an NIH Cellular Molecular Biology Training Grant (T32GM007276). This research was funded with grants from the NIH (DP2HD102042) and the New York Stem Cell Foundation. LAO is a New York Stem Cell Foundation-Robertson Investigator.
2307.05204
Ranging Sensor Fusion in LISA Data Processing: Treatment of Ambiguities, Noise, and On-Board Delays in LISA Ranging Observables
Interspacecraft ranging is crucial for the suppression of laser frequency noise via time-delay interferometry (TDI). So far, the effects of on-board delays and ambiguities on the LISA ranging observables were neglected in LISA modelling and data processing investigations. In reality, on-board delays cause offsets and timestamping delays in the LISA measurements, and pseudo-random noise (PRN) ranging is ambiguous, as it only determines the range up to an integer multiple of the PRN code length. In this article, we identify the four LISA ranging observables: PRN ranging, the sideband beatnotes at the interspacecraft interferometer, TDI ranging, and ground-based observations. We derive their observation equations in the presence of on-board delays, noise, and ambiguities. We then propose a three-stage ranging sensor fusion to combine these observables in order to gain accurate and precise ranging estimates. We propose to calibrate the on-board delays on ground and to compensate the associated offsets and timestamping delays in an initial data treatment (stage 1). We identify the ranging-related routines, which need to run continuously during operation (stage 2), and implement them numerically. Essentially, this involves the reduction of ranging noise, for which we develop a Kalman filter combining the PRN ranging and the sideband beatnotes. We further implement crosschecks for the PRN ranging ambiguities and offsets (stage 3). We show that both ground-based observations and TDI ranging can be used to resolve the PRN ranging ambiguities. Moreover, we apply TDI ranging to estimate the PRN ranging offsets.
Jan Niklas Reinhardt, Martin Staab, Kohei Yamamoto, Jean-Baptiste Bayle, Aurélien Hees, Olaf Hartwig, Karsten Wiesner, Sweta Shah, Gerhard Heinzel
2023-07-11T12:19:44Z
http://arxiv.org/abs/2307.05204v3
Ranging Sensor Fusion in LISA Data Processing: Treatment of Ambiguities, Noise, and On-Board Delays in LISA Ranging Observables ###### Abstract Interspacecraft ranging is crucial for the suppression of laser frequency noise via time-delay interferometry (TDI). So far, the effect of on-board delays and ambiguities in the LISA ranging observables was neglected in LISA modelling and data processing investigations. In reality, on-board delays cause offsets and timestamping delays in the LISA measurements, and PRN ranging is ambiguous, as it only determines the range up to an integer multiple of the pseudo-random noise (PRN) code length. In this article, we identify the four LISA ranging observables: PRN ranging, the sideband beatnotes at the interspacecraft interferometer, TDI ranging, and ground-based observations. We derive their observation equations in the presence of on-board delays, noise, and ambiguities. We then propose a three-stage ranging sensor fusion to combine these observables in order to gain optimal ranging estimates. We propose to calibrate the on-board delays on ground and to compensate the associated offsets and timestamping delays in an initial data treatment (stage 1). We identify the ranging-related routines, which need to run continuously during operation (stage 2), and implement them numerically. Essentially, this involves the reduction of ranging noise, for which we develop a Kalman filter combining the PRN ranging and the sideband beatnotes. We further implement crosschecks for the PRN ranging ambiguities and offsets (stage 3). We show that both ground-based observations and TDI ranging can be used to resolve the PRN ranging ambiguities. Moreover, we apply TDI ranging to estimate the PRN ranging offsets. ## I Introduction The Laser Interferometer Space Antenna (LISA), due for launch in 2034, is an ESA-led mission for space-based gravitational-wave detection in the frequency band between \(0.1\,\mathrm{mHz}\) and \(1\,\mathrm{Hz}\)[1]. LISA consists of three satellites forming an approximate equilateral triangle with an armlength of \(2.5\,\mathrm{Gm}\), in a heliocentric orbit that trails or leads Earth by about 20 degrees. Six infrared laser links with a nominal wavelength of \(1064\,\mathrm{nm}\) connect the three spacecraft (SC), whose relative motion necessitates the usage of heterodyne interferometry. Phasemeters are used to extract the phases of the corresponding beatnotes [2], in which gravitational-waves manifest in form of microcycle deviations equivalent to picometer variations in the interspacecraft ranges. The phasemeter output, however, is obscured by various instrumental noise sources. They must be suppressed to fit in the LISA noise budget of \(10\,\mathrm{pm}\,\mathrm{Hz}^{-0.5}\) (single link) [3], otherwise they would bury the gravitational-wave signals. Dedicated data processing algorithms are being developed for each of these instrumental noise sources, their subsequent execution is referred to as initial noise reduction pipeline (INReP). The dominating noise source in LISA is by far the laser frequency noise due to the armlength differences in the order of \(1\%\) (\(25\,000\,\mathrm{km}\)). It must be reduced by more than 8 orders of magnitude. This is achieved by time-delay interferometry (TDI), which combines the various beatnotes with the correct delays to virtually form equal-optical-path-length interferometers, in which laser frequency noise naturally cancels [4; 5]. 
The exact definition of these delays depends on the location of TDI within the INReP (see fig. 1) [6], but wherever we place it, some kind of information about the absolute interspacecraft ranges is required. Yet, absolute ranges are not a natural signal in a continuous-wave heterodyne laser interferometer such as LISA. Therefore, a ranging scheme based on pseudo-random noise (PRN) codes is implemented [7; 8; 9]. Each SC houses a free-running ultra-stable oscillator (USO) as timing reference. It defines the spacecraft elapsed time (SCET). PRN codes generated according to the respective SCETs are imprinted onto the laser beams by phase-modulating the carrier. The comparison of a PRN code received from a distant SC, hence generated according to the distant SCET, with a local copy enables a measurement of the pseudorange: the pseudorange is commonly defined as the difference between the SCET of the receiving SC at the event of reception and the SCET of the emitting SC at the event of emission [10]. It represents a combination of the true geometrical range (light travel time) with the offset between the two involved SCETs (see eq. A5). In the baseline TDI topology (upper row in fig. 1), TDI is performed after SCET synchronization to the barycen tric coordinate time (TCB), the light travel times are used as delays. The pseudoranges comprise information about both the light travel times and the SCET offsets required for synchronizing the clocks (see appendix A). A Kalman filter can be used to disentangle the pseudoranges in order to retrieve light travel times and SCET offsets [11]. In the alternative TDI topology (lower row in fig. 1), the pseudoranges are directly used as delays. In that topology, TDI is executed on the unsynchronized beatnotes sampled according to the respective SCETs [6]. However, PRN ranging (PRNR) does not directly provide the pseudoranges but requires three treatments. First, due to the finite PRN code length (we assume 400 km), PRNR measures the pseudoranges modulo an ambiguity [7]. Secondly, PRNR is limited by white ranging noise with an RMS amplitude of about 1 m when sampled at 4 Hz [9]. Thirdly, on-board delays due to signal propagation and processing cause offsets and time-stamping delays in the PRNR. There are three additional pseudorange observables to resolve these difficulties: ground-based observations provide inaccurate but unambiguous pseudorange estimates; time-delay interferometric ranging (TDIR) turns TDI upside-down seeking a model for the delays that minimizes the laser frequency noise in the TDI combinations [12]; the sideband beatnotes include information about the time derivatives of the pseudoranges [6]. The combination of these four pseudorange observables in order to form optimal pseudorange estimates is referred to as _ranging sensor fusion_ in the course of this article. It is common to both TDI topologies (see fig. 1) and consequently a crucial stage of the INReP. In section II, we first specify the pseudorange definition. We then derive the observation equations of the four pseudorange observables carefully considering the effects of the on-board delays. In section III, we introduce a three-stage ranging sensor fusion consisting of an initial data treatment, a core ranging processing, and cross-checks. In the initial data treatment, we propose to compensate for the offsets and timestamping delays caused by the on-board delays. 
We identify PRNR unwrapping and noise reduction as the ranging processing steps that need to run continuously during operation. In parallel to this core ranging processing, we propose crosschecks of the PRNR ambiguities and offsets. We implement the core ranging processing and the crosschecks numerically. In section IV we discuss the performance of this implementation, and conclude in section V. ## II Ranging Measurements Each SC houses an ultra-stable oscillator (USO) generating an 80 MHz clock signal, the phasemeter clock (PMC). The PMC can be considered as the timing reference on board the SC (see fig. 3), its associated counter is referred to as spacecraft elapsed time (SCET): \[\text{SCET}(n)=\sum_{1}^{n}\,\frac{1}{80\,\text{MHz}}. \tag{1}\] The SCET, denoted by \(\hat{\tau}_{i}\), differs from the barycentric coordinate time (TCB), denoted by \(t\), due to instrumental clock drifts and jitters, and due to relativistic effects. Following the notation of [13], we use superscripts to indicate a quantity to be expressed as function of a certain time scale, e.g., \(\hat{\tau}_{1}^{t}\) denotes the SCET of SC 1 as function of TCB. Note that \[\hat{\tau}_{i}^{\hat{\tau}_{i}}(\tau)=\tau. \tag{2}\] Each SC contains two movable optical sub-assemblies (MOSAs) connected by an optical fibre (see fig. 2 for Figure 1: In the baseline TDI topology (upper part) we perform TDI after clock synchronization to TCB, the delays are given by the light travel times. In the alternative TDI topology (lower part) we execute TDI without clock synchronization and apply the pseudoranges as delays [6]. Both topologies rely on a ranging sensor fusion. Figure 2: LISA labeling conventions (from [14]). The SC are labeled clockwise. The MOSAs are labeled by 2 indices: the first one indicates the SC they are located at, the second one the SC they are oriented to. The measurements and related quantities (optical links, pseudoranges, etc.) share the indices of the MOSAs they are measured at. Below, we distinguish between left-handed MOSAs (12, 23, 31) and right-handed MOSAs (13, 32, 21). the labeling conventions). Each MOSA has an associated laser and houses a telescope, a free-falling test mass marking the end of the corresponding optical link, and an optical bench with three interferometers: the interspace-craft interferometer (ISI), in which the gravitational-wave signals eventually appear, the reference interferometer (RFI) to compare local and adjacent lasers, and the test-mass interferometer (TMI) to sense the optical bench motion with respect to the free-falling test mass in direction of the optical link. The MHz beatnotes in these interferometers are detected with quadrant-photo-receivers (QPRs). They are digitized in analog-to-digital converters (ADCs) driven by the PMCs. Phasemeters extract the beatnote phases1 using digital phase-locked loops (DPLLs), which are then downsampled to \(4\,\mathrm{Hz}\) in a multi-stage decimation procedure (DEC) and telemetered to Earth. Footnote 1: In the current design, the phasemeters deliver the beatnote frequencies with occasional phase anchor points. ### The pseudorange and on-board delays The pseudorange, denoted by \(R_{ij}^{\tau_{i}}\), is commonly defined as the difference between the SCET of the receiving SC at the event of reception and the SCET of the emitting SC at the event of emission [10]. It represents a combination of the light travel time between the emission at SC \(j\) and the reception at SC \(i\), and the differential SCET offset (see eq. A5). 
However, considering the complexity of the LISA metrology system, this definition appears to be rather vague: to what exactly do we relate the events of emission and reception? Two specifications are required here: we need to locate emission and reception, and we need to define the actual events. It is convenient to con Figure 3: We trace a local laser (red arrows) and a distant laser (yellow arrows) to the ISI BSs on both SC, where they interfere and form beatnotes (orange arrows). Before, the carriers are phase-modulated with the GHz clock and the PRN signals (follow the arrows from the PMC and the PRN to the EOM). We show the USO frequency distribution (follow the blue arrows after the USOs) and illustrate the on board signal processing (follow the arrows after the QPRs). Constituents of the pseudorange are marked purple. These are the light travel time between the PBSs (at the telescopes) and the transformation between the two SCETs (considered at the PBS of the receiving SC). In light blue, we mark the PRN ranging offset from the pseudorange. We identify the common carrier, sideband, and PRN timestamp delays in green, dark yellow, and pink, respectively. sider emission and reception at the respective polarizing beam splitters (PBSs) in front of the telescopes (denoted PBS1 in [15]), and to treat the on-board signal propagation and processing on both SC as on-board delays. Thus, we clearly separate the pseudorange from on-board delays. This definition is not unique, the events of emission and reception could be located elsewhere, assuming that the on-board delays are defined accordingly. The LISA optical links do not involve delta-pulse-like events. In order to define the actual events of emission and reception we, instead, use the instants when the light phase changes at the beginning of the first PRN code chip. At first glance, the PRN code might seem unfavorable for the pseudorange definition, as PRN and carrier phase are oppositely affected by the solar wind: the PRN phase is delayed by the group-delay, while the carrier phase is advanced by the phase delay. However, these effects are at the order of \(10\,\mathrm{p}\mathrm{m}\) (see appendix C), whereas our best pseudorange estimates are at \(0.1\,\mathrm{m}\mathrm{m}\) accuracy. Consequently, the solar wind dispersion can be neglected in the pseudorange definition. When expressing the interferometric measurements according to this specified pseudorange definition, we need to consider the excluded on-board signal propagation and processing. For that purpose, we introduce two kinds of delay operators by their action on a function \(f^{\hat{\tau}_{j}}\). The on-board delay operator describes delays due to on-board signal propagation and processing and is defined on the same SCET as the function it is acting on: \[\mathbf{D}_{x}^{\hat{\tau}_{j}}\,f^{\hat{\tau}_{j}}(\tau)=f^{\hat{\tau}_{j}} \left(\tau-d_{x}^{\hat{\tau}_{j}}(\tau)\right). \tag{3}\] \(x\) is a place holder for any on-board delay, e.g., \(\mathbf{D}_{\mathrm{pbs}\;\leftarrow\;1}\) denotes the optical path length from the laser to the PBS and \(\mathbf{D}_{\mathrm{dec}}\) the decimation filter group delay. The interspacecraft delay operator is defined on a different SCET than the function it is acting on and applies the pseudo-range as delay: \[\mathbf{D}_{ij}^{\hat{\tau}_{i}}\,f^{\hat{\tau}_{j}}(\tau)=f^{\hat{\tau}_{j}} \left(\tau-R_{ij}^{\hat{\tau}_{i}}(\tau)\right). 
\tag{4}\] For on-board delays that differ between carrier, PRN, and sideband signals, we add the superscripts _car_, _prn_, and _sb_, respectively. To trace the full path of a signal from the distant SC, we need to combine the interspacecraft delay operator for the interspacecraft signal propagation and the SCET conversion (considered at the PBS of the receiving SC) with on-board delay operators on both SC. The application of a delay operator to another time-dependent delay operator results in nested delays: \[\mathbf{D}_{x}^{\hat{\tau}_{i}}\,\mathbf{D}_{ij}^{\hat{\tau}_{i}}f^{\hat{ \tau}_{j}}(\tau)=f^{\hat{\tau}_{j}}\left(\tau-d_{x}^{\hat{\tau}_{i}}(\tau)-R_ {ij}^{\hat{\tau}_{i}}\left(\tau-d_{x}^{\hat{\tau}_{i}}(\tau)\right)\right). \tag{5}\] For a constant delay operator \(\mathbf{D}_{x}\) we can define the associated advanced operator \(\mathbf{A}_{x}\) acting as its inverse: \[\mathbf{A}_{x}^{\hat{\tau}_{j}}\,f^{\hat{\tau}_{j}}(\tau) =f^{\hat{\tau}_{j}}\left(\tau+d_{x}^{\hat{\tau}_{j}}\right), \tag{6}\] \[\mathbf{A}_{x}\,\mathbf{D}_{x}\,f^{\hat{\tau}_{j}}(\tau) =f^{\hat{\tau}_{j}}\left(\tau-d_{x}^{\hat{\tau}_{j}}+d_{x}^{\hat{ \tau}_{j}}\right)=f^{\hat{\tau}_{j}}(\tau). \tag{7}\] For advancement operators associated to propagation delays, e.g., the optical path length from the laser to the PBS, we write \[\mathbf{D}_{\mathrm{pbs}\;\leftarrow\;1}^{-1}=\mathbf{A}_{\mathrm{l}\; \leftarrow\;\mathrm{pbs}}, \tag{8}\] the subscript underlines that the advancement operator undoes the signal propagation. Below, we consider on-board delays as constant or slowly time varying so that their associated advancement operators are well-defined. What does the specified pseudorange definition imply for TDI in the context of on-board delays? In [6] the pseudoranges are said to be the delays that are to be applied in TDI in the alternative topology. To find out whether this statement holds, we write down the ISI carrier beatnotes in the presence of on-board delays using the above defined delay operators: \[\mathrm{ISI}_{ij}^{\hat{\tau}_{i}}(\tau) =\mathbf{D}_{\mathrm{dec}\;\leftarrow\;\mathrm{bbs}}^{\mathrm{ car},\;\hat{\tau}_{i}}\Big{(}\mathbf{D}_{\mathrm{bs}\;\leftarrow\;\mathrm{ pbs}}^{\hat{\tau}_{i}}\,\mathbf{D}_{ij}^{\hat{\tau}_{j}}\,\mathbf{D}_{ \mathrm{pbs}\;\leftarrow\;1}^{\hat{\tau}_{j}}\Phi_{ji}^{\hat{\tau}_{j}}(\tau)\] \[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad-\mathbf{D }_{\mathrm{bs}\;\leftarrow\;1}^{\hat{\tau}_{i}}\Phi_{ij}^{\hat{\tau}_{i}}( \tau)\Big{)}. \tag{9}\] \(\mathbf{D}_{\mathrm{pbs}\;\leftarrow\;1}\) denotes the optical path length from the laser to the PBS (before transmission), \(\mathbf{D}_{\mathrm{bs}\;\leftarrow\;\mathrm{pbs}}\) is the optical path length from the PBS to the recombining beam splitter of the interspacecraft interferometer (ISI BS) (after reception), and \(\mathbf{D}_{\mathrm{bs}\;\leftarrow\;1}\) denotes the optical path length from the local laser to the ISI BS. These optical path lengths are in the order of \(10\,\mathrm{cm}\) to \(1\,\mathrm{m}\)[15]. \(\mathbf{D}_{\mathrm{dec}\;\leftarrow\;\mathrm{bs}}^{\mathrm{car},\;\hat{\tau}_{i}}\) denotes the delay from the ISI BS to the decimation filters, it differs for sideband and PRN signals. The dominating part of \(\mathbf{D}_{\mathrm{dec}\;\leftarrow\;\mathrm{bs}}^{\mathrm{car}}\) is the group delay of the decimation filters in the order of \(1\,\mathrm{s}\). To identify the delay we need to apply in TDI, it is convenient to split the delays in eq. 
9 into a common and an uncommon delay by inserting \(\mathbf{D}_{\mathrm{bs}\;\leftarrow\;1}^{\hat{\tau}_{i}}\,\mathbf{A}_{\mathrm{ l}\;\leftarrow\;\mathrm{bs}}^{\hat{\tau}_{i}}=\mathbf{1}\) in front of the bracket: \[\mathrm{ISI}_{ij}^{\hat{\tau}_{i}}(\tau) =\mathbf{C}_{i}^{\mathrm{car},\;\hat{\tau}_{i}}\left(\mathbf{U} _{ij}^{\hat{\tau}_{i}}\,\Phi_{ji}^{\hat{\tau}_{j}}(\tau)-\Phi_{ij}^{\hat{\tau}_ {i}}(\tau)\right), \tag{10}\] \[\mathbf{C}_{i}^{\mathrm{car},\;\hat{\tau}_{i}} =\mathbf{D}_{\mathrm{dec}\;\leftarrow\;\mathrm{bbs}}^{\mathrm{ car},\;\hat{\tau}_{i}}\,\mathbf{D}_{\mathrm{bs}\;\leftarrow\;1}^{\hat{\tau}_{i}},\] (11) \[\mathbf{U}_{ij}^{\hat{\tau}_{i}} =\mathbf{A}_{\mathrm{l}\;\leftarrow\;\mathrm{bs}}^{\hat{\tau}_{i}} \,\mathbf{D}_{\mathrm{bs}\;\leftarrow\;\mathrm{pbs}}^{\hat{\tau}_{i}}\,\mathbf{D}_{ ij}^{\hat{\tau}_{j}}\,\mathbf{D}_{\mathrm{bs}\;\leftarrow\;1}^{\hat{\tau}_{i}}. \tag{12}\] \(\mathbf{C}_{i}^{\mathrm{car}}\) denotes the common delay of the local and the distant carrier phase. \(\mathbf{U}_{ij}\) is the uncommon delay that only applies to the distant carrier phase. We refer to \(\mathbf{C}_{i}^{\mathrm{car}}\) and \(\mathbf{U}_{ij}\) as common and uncommon carrier delay, respectively. To see how these delays affect the carrier beatnotes, we expand eq. 10: \[\mathrm{ISI}_{ij}^{\hat{\tau}_{i}}(\tau) =\Phi_{ji}^{\hat{\tau}_{j}}\left(\tau-c_{i}^{\hat{\tau}_{i}}-u_{ij}^{ \hat{\tau}_{i}}\big{(}\tau-c_{i}^{\hat{\tau}_{i}}\big{)}\right)\] \[-\Phi_{ij}^{\hat{\tau}_{i}}\left(\tau-c_{i}^{\hat{\tau}_{i}}\right). \tag{13}\] The common carrier delay causes a timestamping delay in both the laser phases and the uncommon carrier delay (essentially the pseudorange). It can be compensated by application of its associated advancement operator: \[\left(\mathbf{C}_{i}^{\text{car},\;\hat{\tau}_{i}}\right)^{-1}\text{ ISI}_{ij}^{\hat{\tau}_{i}}(\tau) =\mathbf{U}_{ij}^{\hat{\tau}_{i}}\Phi_{ji}^{\hat{\tau}_{i}}(\tau)- \Phi_{ij}^{\hat{\tau}_{i}}(\tau), \tag{14}\] \[\left(\mathbf{C}_{i}^{\text{car},\;\hat{\tau}_{i}}\right)^{-1} =\left(\mathbf{D}_{\text{bs}\;\leftarrow\;1}^{\text{z}_{i}} \right)^{-1}\left(\mathbf{D}_{\text{dec}\;\leftarrow\;\text{bs}}^{\text{car}, \;\hat{\tau}_{i}}\right)^{-1}\] \[=:\mathbf{A}_{\text{l}\;\leftarrow\;\text{dec}}^{\text{car},\; \hat{\tau}_{i}}. \tag{15}\] TDI is blind to the common carrier delay, as it equally delays the laser phases and the pseudorange. Hence, from the perspective of TDI eq. 9 and eq. 14 are equivalent. Nevertheless, the compensation of the common carrier delay is important for the synchronization of the measurements to TCB. We propose to calibrate \(\mathbf{C}_{i}^{\text{car}}\) on ground, so that during operation it can be compensated in an initial data treatment by application of its associated advancement operator (see eq. 14). After this initial data treatment, the uncommon carrier delay constitutes the delay that is to be applied in TDI in the alternative topology. It is composed of the optical path length delay from the distant laser source to the local ISI BS and the optical path length advancement from the ISI BS to the local laser source. Hence, it can be thought of as the differential optical path length from both lasers to the ISI BS. To construct the uncommon carrier delay, we need to measure the optical path lengths laser to PBS, PBS to ISI BS, and laser to ISI BS on ground, and we need to measure the pseudorange during operation. The sections II.2 to II.5 cover the four pseudorange observables. 
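As a concrete illustration of the compensation in eq. 14, a calibrated, constant delay can be removed from a regularly sampled record by applying the corresponding advancement, i.e., evaluating the series at shifted sample times. The sketch below is a simplified stand-in with placeholder numbers, not LISA ground-segment software.

```python
import numpy as np

def advance(signal, fs, delay_s):
    """Approximate the advancement operator for a known constant delay.

    Evaluates the sampled signal at t + delay_s by linear interpolation,
    i.e. it undoes a calibrated timestamping delay such as the common
    carrier delay (edge samples are only approximate).
    """
    t = np.arange(len(signal)) / fs
    return np.interp(t + delay_s, t, signal)

# Placeholder example: a 4 Hz record affected by ~1 s of decimation-filter
# group delay.
fs = 4.0                    # telemetry sampling rate in Hz
delay_s = 1.0               # calibrated common delay in seconds
t = np.arange(0, 100, 1 / fs)
delayed = np.sin(2 * np.pi * 0.01 * (t - delay_s))   # what is delivered
recovered = advance(delayed, fs, delay_s)            # ~ sin(2*pi*0.01*t)
```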
Before, we close this section with a few comments on the common carrier delay. Parts of the common carrier delay are slowly time varying. To analyze the origin of this time dependence we decompose \(\mathbf{C}_{i}^{\text{car}}\) into \[\mathbf{C}_{i}^{\text{car}} =\mathbf{D}_{\text{dec}}^{\text{car}}\,\mathbf{D}_{\text{dpll}}^{ \text{car}}\,\mathbf{D}_{\text{dpll}\;\leftarrow\;\text{abee}}\,\mathbf{D}_{ \text{abee}}^{\text{car}}\] \[\mathbf{D}_{\text{abee}\;\leftarrow\;\text{qpr}}\,\mathbf{D}_{ \text{qpr}}^{\text{car}}\,\mathbf{D}_{\text{qpr}\;\leftarrow\;\text{bs}}\, \mathbf{D}_{\text{bs}\;\leftarrow\;\text{l}}, \tag{16}\] these constituents are marked green in fig. 3. The dominating contribution is by far the decimation filter group delay \(\mathbf{D}_{\text{dec}}^{\text{car}}\) in the order of 1 s. It is constant and pre-determined by the design of the decimation filters. The group delays of the quadrant-photo-receiver \(\mathbf{D}_{\text{qpr}}^{\text{car}}\) and the analog backend electronics2\(\mathbf{D}_{\text{abee}}^{\text{car}}\) depend amongst others on the beatnote frequency [16]. Hence, they change over time and differ between carrier, sideband, and PRN signals. Together with the cable delays \(\mathbf{D}_{\text{abee}\;\leftarrow\;\text{qr}}\) and \(\mathbf{D}_{\text{dpll}\;\leftarrow\;\text{abee}}\) they can amount to 10 m. The DPLL delay \(\mathbf{D}_{\text{dpll}}^{\text{char}}\) depends on the time-dependent beatnote amplitude. The higher this amplitude the smaller \(\mathbf{D}_{\text{dpll}}^{\text{car}}\)[17; 2], \(\mathbf{D}_{\text{qpr}\;\leftarrow\;\text{bs}}\) and \(\mathbf{D}_{\text{bs}\;\leftarrow\;\text{l}}\), for completeness, denote the optical path lengths from the local laser to the QPR in the order of 10 cm to 1 m [15]. We propose to individually calibrate all constituents of \(\mathbf{C}_{i}^{\text{car}}\) on ground. The time-dependent ones should be calibrated for all combinations of the time-dependent parameters. Hence, during operation they can be constructed with the help of the SC monitors, which provide the corresponding parameter values, e.g., beatnote frequency and amplitude. Footnote 2: The analog backend electronics comprise analog signal amplifiers, analog low-pass filters, and the ADC. ### PRN ranging (PRNR) A set of 6 pseudo-random noise (PRN) sequences has been computed such that the cross-correlations and the auto-correlations for nonzero delays are minimized. These PRN codes are associated to the 6 optical links in the LISA constellation. The PRN codes are generated according to the respective PMCs and imprinted onto the laser beams by phase-modulating the carriers in electro-optical modulators (EOMs). In each phasemeter, DPLLs are applied to extract the beatnote phases. The PRN codes show up in the DPLL error signals since the DPLL bandwidth of 10 kHz to 100 kHz is lower than the PRN chipping rate of about 1 MHz. In a delay-locked loop (DLL), these error signals are correlated with PRN codes generated according to the local SCET. This correlation yields a pseudorange measurement, we refer to it as PRN ranging (PRNR) [7; 8]. We now derive the PRNR observation equation carefully taking into account on-board delays. We model the path of the PRN code from the distant SC to the local DLL by applying delay operators to the distant SCET: \[\mathbf{D}_{\text{dll}\;\leftarrow\;\text{ps}}^{\text{prn},\;\hat{\tau}_{i}} \,\mathbf{D}_{ij}^{\hat{\tau}_{i}}\,\mathbf{D}_{\text{pbs}\;\leftarrow\;\text{ pmc}}^{\text{prn},\;\hat{\tau}_{j}}(\tau). 
\tag{17}\] The two on-board delays can be decomposed into \[\mathbf{D}_{\text{pbs}\;\leftarrow\;\text{pmc}}^{\text{prn}} =\mathbf{D}_{\text{pbs}\;\leftarrow\;\text{com}}\,\mathbf{D}_{ \text{com}\;\leftarrow\;\text{prn}}\] \[\mathbf{D}_{\text{prn}}\,\mathbf{D}_{\text{prn}\;\leftarrow\; \text{pmc}}, \tag{18}\] \[\mathbf{D}_{\text{dll}\;\leftarrow\;\text{pbs}}^{\text{prn}} =\mathbf{D}_{\text{dll}}\,\mathbf{D}_{\text{dpll}}^{\text{prn}} \,\mathbf{D}_{\text{dpll}\;\leftarrow\;\text{abee}}\,\mathbf{D}_{\text{abee}}^{ \text{prn}}\,\mathbf{D}_{\text{abee}\;\leftarrow\;\text{qpr}}\] \[\mathbf{D}_{\text{qpr}}^{\text{prn}}\,\mathbf{D}_{\text{qpr}\; \leftarrow\;\text{bs}}\,\mathbf{D}_{\text{bs}\;\leftarrow\;\text{pbs}}. \tag{19}\] \(\mathbf{D}_{\text{pbs}\;\leftarrow\;\text{pmc}}^{\text{prn}}\) consists of the cable delays from the PMC to the EOM, the processing delay due to the PRN code generation, and the optical path length from the EOM to the PBS. All these delays are constant at the sensitive scale of PRNR, so that we do not have to consider delay nesting in \(\mathbf{D}_{\text{pbs}\;\leftarrow\;\text{pmc}}^{\text{prn}}\). We added the superscript \(\text{prn}\) because this path is different for the sideband signal. \(\mathbf{D}_{\text{dll}\;\leftarrow\;\text{pbs}}^{\text{prn}}\) is explained in the next paragraph as part of the PRN timestamping delay. At the DLL, the received PRN codes are correlated with identical codes generated according to the local SCET. We model this correlation as the difference between the local SCET and the delayed distant SCET (eq. 17), and we apply \(\mathbf{D}_{\text{dec}}^{\text{prn}}\) to model the group delay of the decimation filters applicable to PRN ranging: \[\mathbf{D}_{\text{dec}}^{\text{prn}}\left(\hat{\tau}_{i}^{\hat{\tau}_{i}}(\tau)- \mathbf{D}_{\text{dll}\;\leftarrow\;\text{pbs}}^{\text{prn},\;\hat{\tau}_{i}} \,\mathbf{D}_{ij}^{\hat{\tau}_{i}}\,\mathbf{D}_{\text{pbs}\;\leftarrow\;\text{pmc}}^{ \text{prn},\;\hat{\tau}_{j}}\,\hat{\tau}_{j}^{\hat{\tau}_{j}}(\tau)\right). \tag{20}\] To see how the on-board delays affect the PRNR we expand eq. 20 applying eq. 2: \[\mathbf{D}^{\mathrm{prn}}_{\mathrm{dec}}\Big{(}\hat{\tau}_{i}^{ \hat{\tau}_{i}}(\tau)-\hat{\tau}_{j}^{\hat{\tau}_{j}}\big{(} \tau-d^{\hat{\tau}_{i}}_{\mathrm{dll}\leftarrow\mathrm{pbs}}\] \[-R^{\hat{\tau}_{i}}_{ij}\big{(}\tau-d^{\hat{\tau}_{i}}_{\mathrm{ dll}\leftarrow\mathrm{pbs}}\big{)}\] \[-d^{\hat{\tau}_{j}}_{\mathrm{pbs}\leftarrow\mathrm{pmc}}\big{)} \Big{)}\] \[=\mathbf{D}^{\mathrm{prn},\,\hat{\tau}_{i}}_{\mathrm{dec} \leftarrow\mathrm{pbs}}\,R^{\hat{\tau}_{i}}_{ij}\left(\tau\right)+O^{\mathrm{ prn}}_{ij}. \tag{21}\] The on-board delays cause a timestamping delay \(\mathbf{D}^{\mathrm{prn}}_{\mathrm{dec}\leftarrow\mathrm{pbs}}\), the PRN timestamping delay, and an offset \(O^{\mathrm{prn}}_{ij}\), the PRNR offset: \[\mathbf{D}^{\mathrm{prn}}_{\mathrm{dec}\leftarrow\mathrm{pbs}} =\mathbf{D}^{\mathrm{prn}}_{\mathrm{dec}}\,\mathbf{D}_{\mathrm{ dll}}\,\mathbf{D}^{\mathrm{prn}}_{\mathrm{dll}\leftarrow\mathrm{abee}}\] \[\mathbf{D}^{\mathrm{prn}}_{\mathrm{abee}}\,\mathbf{D}_{\mathrm{ abee}\leftarrow\mathrm{qpr}}\,\mathbf{D}^{\mathrm{prn}}_{\mathrm{qpr}}\] \[\mathbf{D}_{\mathrm{qpr}\leftarrow\mathrm{bbs}}\,\mathbf{D}_{ \mathrm{bs}\leftarrow\mathrm{pbs}}, \tag{22}\] \[O^{\mathrm{prn}}_{\mathrm{dll}}=d^{\hat{\tau}_{i}}_{\mathrm{dll} \leftarrow\mathrm{pbs}}+d^{\hat{\tau}_{i}}_{\mathrm{pbs}\leftarrow\mathrm{pmc}}. 
\tag{23}\] The PRN timestamping delay has similar constituents as the common carrier delay, they are marked pink in fig. 3. However, most of them are frequency or amplitude dependent. Therefore, they differ between carrier and PRN signals. As for the common carrier delay, we propose to individually calibrate all constituents of the PRN timestamping delay on ground before mission start. Hence, during operation \(\mathbf{D}^{\mathrm{prn}}_{\mathrm{dec}\leftarrow\mathrm{pbs}}\) can be compensated in an initial data treatment by application of its associated advancement operator \(\mathbf{A}^{\mathrm{prn}}_{\mathrm{pbs}\leftarrow\mathrm{dec}}\). After that, the PRNR observation equation including ranging noise and PRN ambiguity can be written as: \[\mathbf{A}^{\mathrm{prn},\,\hat{\tau}_{i}}_{\mathrm{pbs} \leftarrow\mathrm{dec}}\,\mathrm{PRNR}^{\hat{\tau}_{i}}_{ij}(\tau) =R^{\hat{\tau}_{i}}_{ij}(\tau)+O^{\mathrm{prn}}_{ij}+N^{\mathrm{ prn}}_{ij}(\tau)\] \[-a^{\mathrm{prn}}_{ij}(\tau)\cdot l. \tag{24}\] \(l\) denotes the finite PRN code length. We use 400 km as a placeholder, the final value has not been decided. The finite PRN code length leads to an ambiguity, \(a^{\mathrm{prn}}_{ij}\) denote the associated ambiguity integers [8]. \(N^{\mathrm{prn}}_{ij}\) is the white ranging noise with an RMS amplitude of about 1m at 4 Hz. This ranging noise is mainly due to shot noise and PRN code interference [9]. The PRNR offset \(O^{\mathrm{prn}}_{ij}\) involves contributions on the emitter and on the receiver side (see eq. 23), they are marked light blue in fig. 3. It can amount to 10 m and more [17; 18]. Similar to the common carrier and the PRN timestamping delay, we propose to calibrate the PRNR offset on ground, so that it can be subtracted in an initial data treatment. ### Sideband ranging (SBR) For the purpose of in-band clock noise reduction in the INReP, a clock noise transfer between the SC is implemented [9]: the 80 MHz PMC signals are up-converted to \(\nu^{\mathrm{m}}_{l}=2.400\) GHz and \(\nu^{\mathrm{m}}_{r}=2.401\) GHz for left and right-handed MOSAs, respectively (see fig. 2 for the definition of left and right-handed MOSAs). The EOMs phase-modulate the carriers with the up-converted PMC signals, thereby creating clock sidebands.3 We show below that the beatnotes between these clock sidebands constitute a pseudorange observable. Footnote 3: We focus on the first order upper clock sidebands, because the lower sidebands contain almost the same information. Considering on-board delays, the difference between carrier and sideband beatnotes can be written as \[\mathrm{ISI}^{\hat{\tau}_{i}}_{ij}(\tau)-\mathrm{IS}^{\hat{\tau} _{i}}_{\mathrm{sb},\,ij}(\tau)=-\,\mathbf{D}^{\mathrm{cb},\,\hat{\tau}_{i}}_{ \mathrm{dec}\leftarrow\mathrm{bs}}\] \[\Big{\{}\mathbf{D}^{\hat{\tau}_{i}}_{\mathrm{bbs}\leftarrow \mathrm{pbs}}\,\mathbf{D}^{\hat{\tau}_{i}}_{ij}\left(\mathbf{D}^{\mathrm{sb}, \,\hat{\tau}_{j}}_{\mathrm{pbs}\leftarrow\mathrm{pmc}}\,\nu^{\mathrm{m}}_{ji} \,\hat{\tau}^{\hat{\tau}_{j}}_{j}(\tau)+\nu^{\mathrm{m}}_{ji}\,M^{\hat{\tau}_{ j}}_{ji}(\tau)\right)\] \[-\Big{(}\mathbf{D}^{\mathrm{sb},\,\hat{\tau}_{i}}_{\mathrm{bbs} \leftarrow\mathrm{pmc}}\,\nu^{\mathrm{m}}_{ij}\,\hat{\tau}^{\hat{\tau}_{i}}_{ i}(\tau)+\nu^{\mathrm{m}}_{ij}\,M^{\hat{\tau}_{i}}_{ij}(\tau)\Big{)}\Big{\}}. 
\tag{25}\] \(\mathbf{D}^{\mathrm{sb}}_{\mathrm{pbs}\leftarrow\mathrm{pmc}}\) and \(\mathbf{D}^{\mathrm{sb}}_{\mathrm{bs}\leftarrow\mathrm{pmc}}\) are the delay operators associated to the paths from the PMC to the PBS and to the ISI BS, respectively. They can be decomposed into \[\mathbf{D}^{\mathrm{sb}}_{\mathrm{(p)bs}\leftarrow\mathrm{pmc}}=\mathbf{D}_{\mathrm{(p)bs}\leftarrow\mathrm{eom}}\,\mathbf{D}_{\mathrm{eom}\leftarrow\mathrm{pmc}}\,\mathbf{D}_{\mathrm{up}}, \tag{26}\] where \(\mathbf{D}_{\mathrm{up}}\) is the up-conversion delay due to phase-locking a 2.40(1) GHz oscillator to the 80 MHz PMC signal, and \(\mathbf{D}_{\mathrm{eom}\leftarrow\mathrm{pmc}}\) is the cable delay from the PMC to the EOM. \(\nu^{\mathrm{m}}_{ij}\) is the up-converted USO frequency associated to MOSA\({}_{ij}\). Since eq. 25 is expressed in the SCET, all clock imperfections are included in \(\hat{\tau}^{\hat{\tau}_{i}}_{i}(\tau)\). The modulation noise \(M^{\hat{\tau}_{i}}_{ij}\) contains any additional jitter collected on the path \(\mathbf{D}^{\mathrm{sb}}_{\mathrm{(p)bs}\leftarrow\mathrm{pmc}}\), e.g., due to the electrical frequency up-converters. The amplitude spectral densities (ASDs) of the modulation noise for left and right-handed MOSAs are specified to be below [19; 6] \[\sqrt{S_{M_{l}}(f)}=2.5\times 10^{-6}\,\mathrm{m\,Hz^{-0.5}}\left(\frac{f}{\mathrm{Hz}}\right)^{-2/3}, \tag{27}\] \[\sqrt{S_{M_{r}}(f)}=2.5\times 10^{-5}\,\mathrm{m\,Hz^{-0.5}}\left(\frac{f}{\mathrm{Hz}}\right)^{-2/3}. \tag{28}\] The modulation noise on left-handed MOSAs is one order of magnitude lower, because the pilot tone used for the ADC jitter correction, hence being the ultimate phase reference, is derived from the 2.400 GHz clock signal. To derive a pseudorange observation equation from the sideband beatnote we expand eq. 25 using eq. 2. We apply the advancement operator \(\mathbf{A}^{\mathrm{sb}}_{\mathrm{pbs}\leftarrow\mathrm{dec}}\) to avoid nested delays in the pseudorange: \[\mathbf{A}^{\mathrm{sb},\,\hat{\tau}_{i}}_{\mathrm{pbs}\leftarrow\mathrm{dec}}\left(\mathrm{ISI}^{\hat{\tau}_{i}}_{ij}(\tau)-\mathrm{ISI}^{\hat{\tau}_{i}}_{\mathrm{sb},\,ij}(\tau)\right)=\nu^{\mathrm{m}}_{ij}\,\mathbf{A}^{\hat{\tau}_{i}}_{\mathrm{pbs}\leftarrow\mathrm{bs}}\left(\mathbf{D}^{\mathrm{sb},\,\hat{\tau}_{i}}_{\mathrm{bs}\leftarrow\mathrm{pmc}}\,\hat{\tau}^{\hat{\tau}_{i}}_{i}(\tau)+M^{\hat{\tau}_{i}}_{ij}(\tau)\right)-\nu^{\mathrm{m}}_{ji}\,\mathbf{D}^{\hat{\tau}_{i}}_{ij}\left(\mathbf{D}^{\mathrm{sb},\,\hat{\tau}_{j}}_{\mathrm{pbs}\leftarrow\mathrm{pmc}}\,\hat{\tau}^{\hat{\tau}_{j}}_{j}(\tau)+M^{\hat{\tau}_{j}}_{ji}(\tau)\right)\] \[=\left(\nu^{\mathrm{m}}_{ij}-\nu^{\mathrm{m}}_{ji}\right)\tau+\nu^{\mathrm{m}}_{ji}\,R^{\hat{\tau}_{i}}_{ij}(\tau)+\nu^{\mathrm{m}}_{ji}\cdot d^{\hat{\tau}_{j}}_{\mathrm{pbs}\leftarrow\mathrm{pmc}}-\nu^{\mathrm{m}}_{ij}\cdot\left(d^{\hat{\tau}_{i}}_{\mathrm{bs}\leftarrow\mathrm{pmc}}-d^{\hat{\tau}_{i}}_{\mathrm{pbs}\leftarrow\mathrm{bs}}\right)+\nu^{\mathrm{m}}_{ij}\,\mathbf{A}^{\hat{\tau}_{i}}_{\mathrm{pbs}\leftarrow\mathrm{bs}}\,M^{\hat{\tau}_{i}}_{ij}(\tau)-\nu^{\mathrm{m}}_{ji}\,\mathbf{D}^{\hat{\tau}_{i}}_{ij}\,M^{\hat{\tau}_{j}}_{ji}(\tau). \tag{29}\] We subtract the \(1\,\mathrm{MHz}\) ramp and then refer to eq. 29 as sideband ranging (SBR).
Taking into account that the SBR phase is defined up to a cycle, the SBR can be written as \[\mathrm{SBR}_{ij}^{\hat{\tau}_{i}}(\tau)=\mathbf{A}_{\mathrm{pbs}\leftarrow\mathrm{dec}}^{\mathrm{sb},\,\hat{\tau}_{i}}\left(\mathrm{ISI}_{ij}^{\hat{\tau}_{i}}(\tau)-\mathrm{ISI}_{\mathrm{sb},\,ij}^{\hat{\tau}_{i}}(\tau)\right)\pm 1\,\mathrm{MHz}\,\tau=\nu_{ji}^{\mathrm{m}}\,R_{ij}^{\hat{\tau}_{i}}(\tau)+O_{ij}^{\mathrm{sb}}+N_{ij}^{\mathrm{sb}}(\tau)-a_{ij}^{\mathrm{sb}}(\tau). \tag{30}\] \(a_{ij}^{\mathrm{sb}}\) denote the SBR ambiguity integers. Expressed as length, the SBR ambiguity is \(12.5\,\mathrm{cm}\), corresponding to the wavelength of the GHz sidebands. The SBR offset \[O_{ij}^{\mathrm{sb}}=\nu_{ji}^{\mathrm{m}}\cdot d_{\mathrm{pbs}\leftarrow\mathrm{pmc}}^{\hat{\tau}_{j}}-\nu_{ij}^{\mathrm{m}}\cdot\left(d_{\mathrm{bs}\leftarrow\mathrm{pmc}}^{\hat{\tau}_{i}}-d_{\mathrm{pbs}\leftarrow\mathrm{bs}}^{\hat{\tau}_{i}}\right) \tag{31}\] can be thought of as the differential phase accumulation of local and distant PMC signals on their paths to the respective PBSs. Similar to the PRNR offset and the various delays, the SBR offset should be measured on ground. \(N_{ij}^{\mathrm{sb}}\) denotes the appearance of the modulation noise in the SBR: \[N_{ij}^{\mathrm{sb}}(\tau)=\nu_{ij}^{\mathrm{m}}\,\mathbf{A}_{\mathrm{pbs}\leftarrow\mathrm{bs}}^{\hat{\tau}_{i}}\,M_{ij}^{\hat{\tau}_{i}}(\tau)-\nu_{ji}^{\mathrm{m}}\,\mathbf{D}_{ij}^{\hat{\tau}_{i}}\,M_{ji}^{\hat{\tau}_{j}}(\tau). \tag{32}\] This is a combination of left and right-handed modulation noise; their RMS amplitudes are \(2.9\times 10^{-5}\,\mathrm{m}\) and \(2.9\times 10^{-4}\,\mathrm{m}\), respectively. As shown in [6], it is possible to combine carrier and sideband beatnotes from the RFI to form measurements of the dominating right-handed modulation noise, which can thus be subtracted from the SBRs (see appendix B). The advancement operator \(\mathbf{A}_{\mathrm{pbs}\leftarrow\mathrm{dec}}^{\mathrm{sb}}\) (see eq. 29) is associated to the delay operator \(\mathbf{D}_{\mathrm{dec}\leftarrow\mathrm{pbs}}^{\mathrm{sb}}\), to which we refer as the sideband timestamping delay. The sideband timestamping delay can be decomposed into \[\mathbf{D}_{\mathrm{dec}\leftarrow\mathrm{pbs}}^{\mathrm{sb}}=\mathbf{D}_{\mathrm{dec}}^{\mathrm{sb}}\,\mathbf{D}_{\mathrm{dpll}\leftarrow\mathrm{abee}}^{\mathrm{sb}}\,\mathbf{D}_{\mathrm{abee}}^{\mathrm{sb}}\,\mathbf{D}_{\mathrm{abee}\leftarrow\mathrm{qpr}}\,\mathbf{D}_{\mathrm{qpr}}^{\mathrm{sb}}\,\mathbf{D}_{\mathrm{qpr}\leftarrow\mathrm{bs}}, \tag{33}\] these constituents are marked dark yellow in fig. 3. As for the common carrier and the PRN timestamping delay, we propose to individually calibrate all its constituents on ground. The sideband timestamping delay can then be compensated in an initial data treatment by application of its associated advancement operator (see eq. 29). In reality, the beatnotes are expected to be delivered not in phase, but in frequency with occasional phase anchor points. Therefore, we consider the derivative of eq. 30, to which we refer as the sideband range rate (\(\dot{\mathrm{SBR}}\)): \[\dot{\mathrm{SBR}}_{ij}^{\hat{\tau}_{i}}(\tau)=\nu_{ji}^{\mathrm{m}}\,\dot{R}_{ij}^{\hat{\tau}_{i}}(\tau)+\dot{N}_{ij}^{\mathrm{sb}}(\tau). \tag{34}\] The sideband range rates are an offset-free and unambiguous measurement of the pseudorange time derivatives. Phase anchor points enable their integration, so that we recover eq. 30.
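As a minimal numerical illustration of this last step (not part of the original processing description), the sideband range rates can be integrated and re-anchored at a phase anchor point as follows; the sampling rate, array names, and anchor value are placeholders:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def integrate_sbr_rate(sbr_rate, fs, anchor_value, anchor_index=0):
    """Integrate a sideband range-rate series (the derivative of eq. 30) and
    fix the integration constant with a phase anchor point, recovering an
    unambiguous SBR time series."""
    sbr = cumulative_trapezoid(sbr_rate, dx=1.0 / fs, initial=0.0)
    return sbr - sbr[anchor_index] + anchor_value

# illustrative usage with synthetic numbers (not mission values)
fs = 4.0                                         # telemetry rate in Hz
rate = np.full(int(600 * fs), 2.401e9 * 1e-7)    # ~ nu_m * Rdot, arbitrary
sbr = integrate_sbr_rate(rate, fs, anchor_value=1.2e9)
```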
### Time-delay interferometric ranging (TDIR) TDI builds combinations of delayed ISI and RFI carrier beatnotes to virtually form equal-arm interferometers, in which laser frequency noise is suppressed. In the alternative TDI topology, the corresponding delays are given by the pseudoranges in combination with the small optical path lengths between laser, PBS, and ISI BS (see the uncommon carrier delay eq. 12). Time delay interferometric ranging (TDIR) turns this approach upside-down: it minimizes the power integral of the laser frequency noise in the TDI combinations by varying the delays that are applied to the beatnotes [12]. When doing this before clock synchronization to TCB, i.e., with the beatnotes sampled according to the respective SCETs, the uncommon delays show up at the very minimum of that integral. Thus, TDIR constitutes a pseudorange observable. Below, we consider TDI in frequency [14]. We introduce the Doppler-delay operator, which can be considered as the time derivative of the interspacecraft delay operator (see eq. 4): \[\dot{\mathbf{D}}_{ij}^{\hat{\tau}_{i}}\,f^{\hat{\tau}_{j}}(\tau)=\left(1-\dot{ R}_{ij}^{\hat{\tau}_{i}}(\tau)\right)\cdot f^{\hat{\tau}_{j}}\left(\tau-R_{ij}^{ \hat{\tau}_{i}}(\tau)\right). \tag{35}\] We use the shorthand notation \[\dot{\mathbf{D}}_{ijk}^{\hat{\tau}_{i}}=\dot{\mathbf{D}}_{ij}^{\hat{\tau}_{i}} \,\dot{\mathbf{D}}_{jk}^{\hat{\tau}_{j}} \tag{36}\] to indicate chained interspacecraft Doppler-delay operators. In this paper we neglect on-board delays in the RFI beatnotes. We start our consideration of TDIR from the intermediary TDI variables \(\eta_{ij}\). These are combinations of the ISI and RFI carrier beatnotes to eliminate the laser frequency noise contributions of right-handed lasers. In terms of the \(\eta_{ij}\) the second-generation TDI Michelson variables can be expressed as [20] \[X_{2}^{\hat{\tau}_{1}}=\left(1-\dot{\mathbf{D}}_{121}^{\hat{\tau }_{1}}-\dot{\mathbf{D}}_{12131}^{\hat{\tau}_{1}}+\dot{\mathbf{D}}_{1312121}^{ \hat{\tau}_{1}}\right)\left(\eta_{13}^{\hat{\tau}_{1}}-\dot{\mathbf{D}}_{13 }^{\hat{\tau}_{1}}\eta_{31}^{\hat{\tau}_{3}}\right)\] \[-\left(1-\dot{\mathbf{D}}_{131}^{\hat{\tau}_{1}}-\dot{\mathbf{D}}_ {13121}^{\hat{\tau}_{1}}+\dot{\mathbf{D}}_{1213131}^{\hat{\tau}_{1}}\right) \left(\eta_{12}^{\hat{\tau}_{1}}-\dot{\mathbf{D}}_{12}^{\hat{\tau}_{1}}\eta_{2 1}^{\hat{\tau}_{2}}\right) \tag{37}\] \(Y_{2}^{\hat{\tau}_{2}}(\tau)\) and \(Z_{2}^{\hat{\tau}_{3}}(\tau)\) are obtained by cyclic permutation of the indices. For later reference, we also state the first generation TDI Michelson variables: \[X_{1}^{\hat{\tau}_{1}} =(1-\dot{\mathbf{D}}_{121}^{\hat{\tau}_{1}})\left(\eta_{13}^{ \hat{\tau}_{1}}-\dot{\mathbf{D}}_{13}^{\hat{\tau}_{1}}\eta_{31}^{\hat{\tau}_{ 3}}\right)\] \[-(1-\dot{\mathbf{D}}_{131}^{\hat{\tau}_{1}})\left(\eta_{12}^{\hat{ \tau}_{1}}-\dot{\mathbf{D}}_{12}^{\hat{\tau}_{1}}\eta_{21}^{\hat{\tau}_{2}} \right). \tag{38}\] In the framework of TDIR, the delays applied in TDI are parameterized by a model, e.g., by a polynomial model. We minimize the power integral of the TDI combinations by varying the model parameters. TDIR attempts to minimize the in-band laser frequency noise residual. Therefore, we apply a band-pass filter to first remove other contributions appearing out-of-band, i.e., slow drifts and contributions above 1Hz that are dominated by aliasing and interpolation errors. 
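Before writing down the TDIR estimator, the Doppler-delay operator of eq. 35 applied to a sampled series can be sketched as below; this is a simplified illustration that assumes a common time grid, uses linear interpolation, and ignores the change of time frame between chained operators (flight-ready pipelines such as PyTDI use Lagrange interpolation instead):

```python
import numpy as np

def doppler_delay(f, t, R, Rdot):
    """Eq. 35: (D f)(t) = (1 - Rdot(t)) * f(t - R(t)).
    f, R, Rdot are arrays sampled on the common grid t; the delayed samples
    are obtained by interpolating f at the shifted times t - R(t)."""
    return (1.0 - Rdot) * np.interp(t - R, t, f)

# chained operators (eq. 36) are nested calls, e.g. D_123 acting on f:
# doppler_delay(doppler_delay(f, t, R_23, Rdot_23), t, R_12, Rdot_12)
```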
The TDIR pseudorange observables for the second generation TDI Michelson variables can then be expressed as \[\mathrm{TDIR}_{ij}^{\tilde{\tau}_{i}}=\min_{\Theta}\frac{1}{T}\int_{ \frac{1}{T}}^{T}\left[\tilde{X}_{2}^{\tilde{\tau}_{1}}\right]^{2}+\left[\tilde{Y }_{2}^{\tilde{\tau}_{2}}\right]^{2}+\left[\tilde{Z}_{2}^{\tilde{\tau}_{3}} \right]^{2}\;\mathrm{d}t, \tag{39}\] \(\Theta\) denotes the parameters of the delay model, the tilde indicates the filtered TDI combinations. The TDIR accuracy, we denote it by \(\sigma^{\mathrm{tdir}}\), increases with the integration time \(T\) (length of telemetry dataset). It is in the order of [12]: \[\sigma^{\mathrm{tdir}}(T)\propto 10\,\mathrm{cm}\,\sqrt{\frac{\mathrm{d}}{T}}, \tag{40}\] where d stands for day. ### Ground-observation based ranging (GOR) The mission operation center (MOC) provides orbit determinations (ODs) via the ESA tracking stations and MOC time correlations (MOC-TCs). When combined properly, these two on-ground measurements form a pseudorange observable referred to as ground-observation based ranging (GOR). It has an uncertainty of about \(50\,\mathrm{km}\) due to uncertainties in both the OD and the MOC-TC. Yet, it yields valuable information. It is unambiguous, hence it allows to resolve the PRNR ambiguities. The OD yields information about the absolute positions and velocities of the three SC. New orbit determinations are published every few days. For the position and velocity measurements in the line of sight, radial (with respect to the sun) and cross-track direction conservative estimations by ESA state the uncertainties as \(2\,\mathrm{km}\) and \(4\,\mathrm{mm}\,\mathrm{s}^{-1}\), \(10\,\mathrm{km}\) and \(4\,\mathrm{mm}\,\mathrm{s}^{-1}\), \(50\,\mathrm{km}\) and \(5\,\mathrm{cm}\,\mathrm{s}^{-1}\), respectively [21]. The MOC-TC is a measurement of the SCET desynchronization from TCB. It is determined during the telemetry contacts via a comparison of the SCET associated to the emission of a telemetry packet and the TCB of its reception on Earth taking into account the down link delay. We expect the accuracy of the MOC-TC to be better than \(0.1\,\mathrm{ms}\) (corresponds to \(30\,\mathrm{km}\)). This uncertainty is due to unexact knowledge of the SC-to-ground-station separation, as well as inaccuracies in the time tagging process on board and on ground. As shown in appendix A, the pseudoranges can be expressed in TCB as functions of the reception time: \[R_{ij}^{t}(t)=(1+\delta\hat{\tau}_{j}^{t}(t))\cdot d_{ij}^{t}(t)+\delta\hat{ \tau}_{ij}^{t}(t). \tag{41}\] \(d_{ij}^{t}\) denotes the light travel time from SC \(j\) to SC \(i\), \(\delta\hat{\tau}_{ij}^{t}\) the offset between the involved SCETs, and \(\delta\hat{\tau}_{j}^{t}\) the SCET drift of the emitting SC with respect to TCB. The light travel times can be expressed in terms of the ODs [22]: \[d_{\mathrm{od},\,ij}^{t}(t) =\frac{1}{c}L_{ij}^{t}(t)+\frac{1}{c^{2}}\;\vec{L}_{ij}^{t}(t) \cdot\vec{v}_{j}^{t}(t)+O(c^{-3}), \tag{42}\] \[\vec{L}_{ij} =\vec{r}_{i}-\vec{r}_{j},\;L_{ij}=|\vec{L}_{ij}|, \tag{43}\] \(\vec{r}_{i}\) denoting the position of the receiving SC, \(\vec{r}_{j}\) and \(\vec{v}_{j}\) the position and the velocity of the emitting one, respectively. The terms of order \(O(c^{-3})\) contribute to the light travel time at the order of \(10\,\mathrm{m}\) and are therefore negligible compared to the large uncertainties of the orbit determination. 
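A short sketch of this light-travel-time computation (positions and velocities would come from the OD products; units are metres and metres per second, and the \(O(c^{-3})\) terms are dropped as stated above):

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def light_travel_time_od(r_i, r_j, v_j):
    """First-order light travel time from SC j to SC i (eqs. 42 and 43),
    computed from orbit-determination positions and the emitter velocity.
    Returns seconds."""
    L_vec = np.asarray(r_i) - np.asarray(r_j)
    return np.linalg.norm(L_vec) / C + np.dot(L_vec, v_j) / C**2
```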
Combining the light travel times obtained this way with the MOC-TC allows to write the GOR as \[\mathrm{GOR}_{ij}^{t}(t)=d_{\mathrm{od},\,ij}^{t}(t)+\delta\hat{ \tau}_{\mathrm{tc},\,i}^{t}(t)-\delta\hat{\tau}_{\mathrm{tc},\,j}^{t}(t)+N_{ ij}^{\mathrm{gor}}(t). \tag{44}\] \(\delta\hat{\tau}_{\mathrm{tc},\,i}^{t}\) denotes the MOC-TC of SC \(i\) and \(N_{\mathrm{gor}}^{t}\sim 50\,\mathrm{km}\) the GOR uncertainty. Note that OD and MOC-TC, and hence also the GOR, are given in TCB, while all other pseudorange observables are sampled in the respective SCETs. This desynchronization is negligible: the desynchronization can amount up to \(10\,\mathrm{s}\) after the ten year mission time, the pseudoranges drift with \(10\) to \(100\,\mathrm{m}\,\mathrm{s}^{-1}\) (see central plot in fig. 5). Hence, neglecting the desynchronization leads to an error in the order of \(100\) to \(1000\,\mathrm{m}\), which is negligible compared to the large GOR uncertainty. ## III Ranging Sensor Fusion To combine the four pseudorange observables, we propose a three-stage ranging sensor fusion consisting of an initial data treatment, a ranging processing, and cross-checks. The ranging processing (central part of fig. 4) refers to the ranging-related routines, which need to run continuously during operation. These are the PRNR unwrapping, and the reduction of ranging and right-handed modulation noise. Simultaneously, the PRNR ambiguities and offsets are steadily crosschecked using TDIR and GOR (lower part of fig. 4). Both ranging processing and crosschecks rely on a preceding initial data treatment (upper part of fig. 4), in which the various delays and offsets are compensated for. Ranging processing and crosschecks can be categorized into four parts demonstrated below: PRNR ambiguity, noise, PRNR offset, and SBR ambiguity. ### PRNR ambiguity As part of the ranging processing, the PRNR needs to be steadily unwrapped: due to the finite PRNR code length, the PRNR jumps back to \(0\,\mathrm{km}\) when crossing \(400\,\mathrm{km}\) and vice versa (see upper plot in fig. 5). These jumps are unphysical but easy to track and to remove. Apart from that, the PRNR ambiguities need to be cross-checked regularly. For that purpose we propose two independent methods below. The combination of PRNR and GOR enables an identification of the PRNR ambiguity integers \(a_{ij}^{\text{prn}}\): \[\text{GOR}_{ij}^{t}(t)-\text{PRNR}_{ij}^{\hat{\tau}_{i}}(\tau)=N_{ ij}^{\text{gor}}+a_{ij}^{\text{prn}}(\tau)\cdot 400\,\text{km}\] \[\qquad\qquad\qquad\qquad+\underbrace{R_{ij}^{t}(t)-R_{ij}^{\hat{ \tau}_{i}}(\tau)-O_{ij}^{\text{prn}}-N_{ij}^{\text{prn}}(\tau)}_{\text{negligible}}, \tag{45}\] \[a_{ij}^{\text{prn}}(\tau)=\text{round}\left[\frac{\text{GOR}_{ij}^{t}(t)-\text {PRNR}_{ij}^{\hat{\tau}_{i}}(\tau)}{400\,\text{km}}\right], \tag{46}\] \(400\,\text{km}\) is the value we assumed for the PRN code length. However, this procedure only succeeds if \(|N_{ij}^{\text{gor}}|\) does not exceed the PRN code's half length, i.e., \(200\,\text{km}\). Otherwise, a wrong value for the associated PRN ambiguity integer is selected resulting in an estimation error of \(400\,\text{km}\) in the corresponding link. Note that \(\text{GOR}_{ij}^{t}(t)\) and \(\text{PRNR}_{ij}^{\hat{\tau}_{i}}(\tau)\) are sampled according to different time frames, but this desynchronization is negligible considering the low accuracy that needs to be reached here (see section II.5). TDIR constitutes an unambiguous pseudorange observable too. 
It can be applied as an independent cross-check of the PRNR ambiguities. We linearly detrend the ISI, RFI, and TMI beatnotes. We then form the first-generation TDI Michelson variables (see eq. 38) assuming constant delays. It is not necessary to apply second-generation TDI, the first-generation already accomplishes the task (see fig. 8). The pseudoranges are actually drifting by \(10\) to \(100\,\text{m}\,\text{s}^{-1}\) mainly due to differential USO frequency offsets (see central plot in fig. 5). Therefore, we choose a short integration time (we use \(150\,\text{s}\)), otherwise the constant delay model is not sufficient. We use the GOR for the initial delay values of the TDIR estimator. The TDIR pseudorange estimates can then be used to crosscheck the PRNR ambiguity integers: \[a_{ij}^{\text{prn}}(\tau)=\text{round}\left[\frac{\text{TDIR}_{ij}^{\hat{\tau} _{i}}(\tau)-\text{PRNR}_{ij}^{\hat{\tau}_{i}}(\tau)}{400\,\text{km}}\right]. \tag{47}\] ### Noise reduction For the ranging noise reduction in the ranging processing, we propose to combine PRNR and sideband range rates in a linear Kalman filter (KF). The conventional KF requires all measurements to be sampled according to one overall time grid. However, in LISA each SC involves Figure 4: We illustrate the three-stage ranging sensor fusion. Processing elements are drawn with a black frame. In the upper part we show the initial data treatment. Products of the on-ground calibration (the various delays and offsets) are drawn green. Raw datasets are drawn yellow, after the initial data treatment we add a green frame. In the central part we show the core ranging processing. Its output, the pseudoranges, are drawn with a blue frame. In the right box we show how the pseudoranges are combined with the small optical path length to form the uncommon delays (the delays for TDI). In the lower part we show simultaneous crosschecks of PRNR ambiguity, PRNR offset, and SBR ambiguity. Products of these crosschecks are drawn with a red frame. We do not consider the on-board delays of the RFI and TMI beatnotes. its own SCET. We circumvent this difficulty by splitting up the system and build one KF per SC. Each KF only processes the measurements taken on its associated SC, so that the individual SCETs serve as time-grids. The state vector of the KF belonging to SC 1 and its associated linear system model can be expressed as \[x^{\hat{\tau}_{1}} =(R_{12}^{\hat{\tau}_{1}},\,R_{13}^{\hat{\tau}_{1}},\,\dot{R}_{12} ^{\hat{\tau}_{1}},\,\dot{R}_{13}^{\hat{\tau}_{1}},\,\ddot{R}_{12}^{\hat{\tau}_ {1}},\,\ddot{R}_{13}^{\hat{\tau}_{1}})^{\intercal}, \tag{48}\] \[x_{k+1}^{\hat{\tau}_{1}} =\begin{pmatrix}1&0&\Delta t&0&\frac{\Delta t^{2}}{2}&0\\ 0&1&0&\Delta t&0&\frac{\Delta t^{2}}{2}\\ 0&0&1&0&\Delta t&0\\ 0&0&0&1&0&\Delta t\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1\end{pmatrix}.x_{k}^{\hat{\tau}_{1}}+w_{k}^{\hat{\tau}_{1}}, \tag{49}\] \(k\) being a discrete time index. Eq. 49 describes the time evolution of the state vector from \(k\) to \(k+1\). \(w_{k}^{\hat{\tau}_{1}}\) denotes the process noise vector, its covariance matrix is given by \[\mathrm{E}\left[w_{k}\cdot w_{l}^{\mathrm{T}}\right] =\delta_{k,\,l}\,W, \tag{50}\] \[W =\mathrm{diag}\Big{(} 0,\,0,\,0,\,0,\] (51) \[10^{-15}\mathrm{s}^{-1},\,10^{-15}\mathrm{s}^{-1}\Big{)}^{2}.\] \(\delta_{k,\,l}\) denotes the Kronecker delta. Hence, eq. 50 indicates that each component of \(w_{k}^{\hat{\tau}_{1}}\) is a white random process. The process noise covariance matrix we used in our implementation is given in eq. 51. 
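For concreteness, the process model just defined can be sketched as follows (state ordering as in eq. 48; the sampling step is a placeholder, and the measurement model given next completes the filter, with a sketch of the update step after it):

```python
import numpy as np

def process_model(dt):
    """State-transition matrix (eq. 49) and process-noise covariance
    (eqs. 50-51) for the per-SC Kalman filter with state
    x = (R12, R13, Rdot12, Rdot13, Rddot12, Rddot13)."""
    F = np.eye(6)
    F[0, 2] = F[1, 3] = dt             # R    <- Rdot  * dt
    F[2, 4] = F[3, 5] = dt             # Rdot <- Rddot * dt
    F[0, 4] = F[1, 5] = 0.5 * dt**2    # R    <- Rddot * dt^2 / 2
    W = np.diag([0.0, 0.0, 0.0, 0.0, 1e-15, 1e-15]) ** 2
    return F, W

F, W = process_model(dt=0.25)          # e.g. 4 Hz telemetry
```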
The measurement vector and the associated observation model are given by \[y^{\hat{\tau}_{1}}=(\mathrm{PRNR}_{12}^{\hat{\tau}_{1}},\,\mathrm{PRNR}_{13}^{\hat{\tau}_{1}},\,\dot{\mathrm{SBR}}_{12}^{\hat{\tau}_{1}},\,\dot{\mathrm{SBR}}_{13}^{\hat{\tau}_{1}})^{\intercal}, \tag{52}\] \[y_{k}^{\hat{\tau}_{1}}=\begin{pmatrix}1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&2.401\,\mathrm{GHz}&0&0&0\\ 0&0&0&2.400\,\mathrm{GHz}&0&0\end{pmatrix}\cdot x_{k}^{\hat{\tau}_{1}}+v_{k}^{\hat{\tau}_{1}}. \tag{53}\] Eq. 53 relates the measurement vector to the state vector. \(v_{k}^{\hat{\tau}_{1}}\) denotes the measurement noise vector, its covariance matrix is given by \[\mathrm{E}\left[v_{k}\cdot v_{l}^{\mathrm{T}}\right]=\delta_{k,\,l}\,V, \tag{54}\] \[V=\mathrm{diag}\Big(3\cdot 10^{-9}\,\mathrm{m\,s^{-1}},\,3\cdot 10^{-9}\,\mathrm{m\,s^{-1}},\,5.2\cdot 10^{-13},\,5.2\cdot 10^{-13}\Big)^{2}. \tag{55}\] The measurement noise covariance matrix we used in our implementation is given in eq. 55. The diagonal entries denote the variances of the respective measurements. We assume the measurements to be uncorrelated, so that the off-diagonal terms are zero. The KFs for SC 2 and SC 3 are defined accordingly. In this manner, we remove the ranging noise and obtain estimates for the six pseudoranges and their time derivatives. These pseudorange estimates are dominated by the right-handed modulation noise, which is one order of magnitude higher than the left-handed one. As pointed out in [6], the right-handed modulation noise can be subtracted (see appendix B): we combine the RFI measurements to form the \(\Delta M_{i}\), which are measurements of the right-handed modulation noise on SC \(i\) (see eq. B4). For right-handed MOSAs, the local right-handed modulation noise enters the sideband range rates, and we just need to subtract the local \(\Delta M_{i}\) (see eq. B5b). For left-handed MOSAs, the Doppler-delayed right-handed modulation noise from the distant SC appears in the sideband range rates. Here we need to apply the Kalman filter estimates for the pseudoranges and their time derivatives to form the Doppler-delayed distant \(\Delta M_{i}\), which then can be subtracted (see eq. B5a). We then process the three KFs again, this time with the corrected sideband range rates. Now they are limited by left-handed modulation noise, so that the respective noise levels are lower. Therefore, we need to adjust the measurement noise covariance matrix for the second run of the KFs: \[V_{\mathrm{cor}}=\mathrm{diag}\Big(3\cdot 10^{-9}\,\mathrm{m\,s^{-1}},\,3\cdot 10^{-9}\,\mathrm{m\,s^{-1}},\,7.4\cdot 10^{-14},\,7.4\cdot 10^{-14}\Big)^{2}. \tag{56}\] In this way we obtain estimates for the pseudoranges and their time derivatives, which are limited by the left-handed modulation noise. ### PRNR offset The PRNR offset is calibrated on ground before mission start. During operation, it is constructed with the help of SC monitors and subtracted in the initial data treatment. TDIR can be used as a crosscheck for residual PRNR offsets, as it is sensitive to offsets in the delays. To obtain optimal performance, we choose the second-generation TDI Michelson variables, which are ultimately limited by secondary noises. In-band clock noise is sufficiently suppressed, since we operate on beatnotes in total frequency and make use of the in-band ranging information provided by the preceding noise reduction step.
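Returning briefly to the noise-reduction step, the observation model of eqs. 52-55 and a single textbook predict-update step of the per-SC Kalman filter can be sketched as below, continuing the process-model sketch above; `V` stands for whichever of eqs. 55 or 56 applies to the current run, and the matrix entries are those quoted in the text:

```python
import numpy as np

NU_M_12, NU_M_13 = 2.401e9, 2.400e9    # up-converted USO frequencies in Hz

# observation matrix of eq. 53: PRNR measures R directly, the sideband
# range rates measure nu_m * Rdot
H = np.array([
    [1, 0, 0,       0,       0, 0],
    [0, 1, 0,       0,       0, 0],
    [0, 0, NU_M_12, 0,       0, 0],
    [0, 0, 0,       NU_M_13, 0, 0],
], dtype=float)

def kf_step(x, P, y, F, W, H, V):
    """One standard linear Kalman-filter predict/update iteration."""
    x_pred = F @ x                        # predict
    P_pred = F @ P @ F.T + W
    S = H @ P_pred @ H.T + V              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

The TDIR crosscheck of the residual PRNR offsets then builds on the noise-reduced pseudorange estimates produced by this filter.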
Accordingly, the offset delay model is parameterized by \[d_{ij}^{\hat{\tau}_{i}}(\tau)=\hat{R}_{ij}^{\hat{\tau}_{i}}(\tau)-O_{ij}, \tag{57}\] \(\hat{R}_{ij}^{\hat{\tau}_{i}}\) denote the pseudorange estimates after noise reduction, \(O_{ij}\) are the 6 offset parameters. As discussed in section II.4, computing TDI in total frequency units generally results in a variable with residual trends. Those trends need to be removed prior to calculation of the TDIR integral to be sensitive to residual laser noise in band. This is achieved by an appropriate band-pass filter with a pass-band from \(0.1\,\mathrm{Hz}\) to \(1\,\mathrm{Hz}\). The TDIR integral then reads \[\hat{O}_{ij}=\operatorname*{arg\,min}_{O_{ij}}\int_{0}^{T}\!\tilde{X}^{2}(t)+ \tilde{Y}^{2}(t)+\tilde{Z}^{2}(t)\,\mathrm{d}t \tag{58}\] where tilde indicates the filtered quantity. ### SBR ambiguity Phase anchor points, together with the pseudorange estimates after noise reduction, enable the resolution of the SBR ambiguity (see eq. 30): \[a_{ij}^{\text{sb}}(\tau)=\text{round}\left[\nu_{ji}^{\text{m}}\,\hat{R}_{ij}^{ \hat{\tau}_{i}}(\tau)-\text{SBR}_{ij}^{\hat{\tau}_{i}}(\tau)\right]. \tag{59}\] \(\text{SBR}_{ij}^{\hat{\tau}_{i}}\) are the phase anchor points, \(\hat{P}_{ij}^{\hat{\tau}_{i}}\) the pseudorange estimates after noise reduction. Thus, we obtain estimates of the SBR ambiguity integers \(a_{ij}^{\text{sb}}\). The resolution is successful if the pseudorange estimates are more accurate than \(6.25\,\text{cm}\) (half the ambiguity). From the perspective of noise reduction, this is feasible (see section IV). Having resolved the SBR ambiguity, the pseudorange estimates associated to the phase anchor points serve as initial values for the integration of the sideband range rates. The resolution of the SBR ambiguity is worthwhile: SBR constitutes a very accurate pseudo-range observable, as both its stability and accuracy are limited by the modulation noise. ## IV Results In this section, we demonstrate the performance of our implementation of the core ranging processing and the crosschecks as proposed in section III (central and lower part of fig. 4). We did not implement the initial data treatment. Instead we assume that the common carrier, PRN, and sideband timestamping delays are compensated beforehand. We further consider offset-free PRNR and apply TDIR as a crosscheck for residual offsets. We use telemetry data simulated by LISA Instrument [23] and LISANode [24] based on orbits provided by ESA [25, 21]. We simulate phase anchor points for the SBR (see eq. 30). The SCET deviations from the respective proper times are modeled by \[\delta\hat{\tau}_{i}(\tau)=\delta\hat{\tau}_{i,\,0}+y_{i}\,\tau+ \frac{\dot{y}_{i}}{2}\,\tau^{2}+\frac{\ddot{y}_{i}}{3}\,\tau^{3}+\int_{\tau_{ 0}}^{\tau}\!\text{d}\tilde{\tau}\,y_{i}^{\epsilon}(\tilde{\tau}), \tag{60}\] the \(\delta\hat{\tau}_{i,\,0}\) denote the initial SCET deviations set to \(1\,\text{s}\), \(-1.2\,\text{s}\), and \(0.6\,\text{s}\) for SC 1, 2, and 3, respectively. The \(y_{i}\) model the PMC frequency offsets corresponding to linear clock drifts. They are set to \(10^{-7}\), \(-2\times 10^{-7}\), and \(0.6\times 10^{-7}\) for SC 1, 2, and 3, respectively. \(\dot{y}_{i}\sim 10^{-14}\,\text{s}^{-1}\) and \(\ddot{y}_{i}\sim 10^{-23}\,\text{s}^{-2}\) are constants modeling the linear and quadratic PMC frequency drifts. 
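For reference, the deterministic part of the SCET model of eq. 60 with the parameter values quoted above can be written down in a few lines (the stochastic clock-noise integral, whose ASD is specified next, is added on top in the simulation):

```python
import numpy as np

def scet_deviation(tau, dtau0, y, ydot=1e-14, yddot=1e-23):
    """Deterministic part of eq. 60 (all quantities in seconds):
    dtau0 + y*tau + ydot/2 * tau**2 + yddot/3 * tau**3."""
    return dtau0 + y * tau + 0.5 * ydot * tau**2 + yddot / 3.0 * tau**3

tau = np.arange(0.0, 86400.0, 0.25)               # one day at 4 Hz
dscet = [scet_deviation(tau,  1.0,  1e-7),        # SC 1
         scet_deviation(tau, -1.2, -2e-7),        # SC 2
         scet_deviation(tau,  0.6,  0.6e-7)]      # SC 3
```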
The \(y_{i}^{\epsilon}\) denote the stochastic clock noise in fractional frequency deviations, the associated ASD is given by \[\sqrt{S_{y^{\epsilon}}(f)}=6.32\times 10^{-14}\text{Hz}^{-0.5} \left(\frac{f}{\text{Hz}}\right)^{-0.5}. \tag{61}\] We simulate laser frequency noise with an ASD of \[\sqrt{S_{N^{p}}(f)}=30\,\text{Hz}\,\text{Hz}^{-0.5}, \tag{62}\] and ranging and modulation noise as specified in the sections II.2 and II.3. Furthermore, we consider test-mass acceleration noise \[\sqrt{S_{N^{s}}(f)}=4.8\times 10^{-15}\text{m}\,\text{s}^{-2}\,\text{Hz}^ {-0.5}\sqrt{1+\left(\frac{0.4\,\text{mHz}}{f}\right)^{2}} \tag{63}\] and readout noise \[\sqrt{S_{N^{ro}}(f)}=A\,\sqrt{1+\left(\frac{2\,\text{mHz}}{f} \right)^{4}}, \tag{64}\] where \(A=6.35\times 10^{-12}\text{m}\,\text{Hz}^{-0.5}\) for the ISI carrier and \(A=1.25\times 10^{-11}\text{m}\,\text{Hz}^{-0.5}\) for the ISI sideband beatnotes. For the readout noise we set a saturation frequency of \(f_{\text{sat}}=0.1\,\text{mHz}\), below which we whiten. The orbit determinations are simulated by LISA Ground Tracking with the noise levels specified in section II. Figure 5: Upper plot: raw PRNR. The ambiguity jumps at \(0\,\text{km}\) and \(400\,\text{km}\) can be seen. Central plot: ambiguous PRNR, the jumps have been removed but the PRNR ambiguities have not been resolved yet. The large slopes are mainly due to USO frequency offsets. Lower plot: unambiguous PRNR. The large differences between the links are caused by differential SCET offsets. ### Ranging processing Here we demonstrate the performance of our implementation of the core ranging processing for one day of telemetry data simulated by LISA Instrument [23]. The first ranging processing step covers the PRNR unwrapping (see fig. 5). The upper plot shows the raw PRNR, which jumps back to \(0\,\mathrm{km}\) when crossing \(400\,\mathrm{km}\) and vice versa. These jumps are easy to track and to remove. In our implementation we remove all PRNR jumps bigger than \(200\,\mathrm{km}\). The central plot shows the unwrapped but yet ambiguous PRNR. Here you can see PRNR drifts of the order of \(10\) to \(100\,\mathrm{m}\,\mathrm{s}^{-1}\), which are mainly due to differential USO frequency offsets. Inserting the PRNR ambiguity integers obtained from GOR and TDIR yields the unambiguous PRNR shown in the lower plot. In the second step, we use the Kalman filter presented in section III to reduce the ranging noise. Subsequently, we subtract the right-handed modulation noise applying the \(\Delta M\) measurements constructed from the RFI beatnotes (see appendix B). After noise reduction, we resolve the SBR ambiguities combining the estimated pseudorange with the simulated SBR phase anchor points (see eq. 59). We then integrate the sideband range rates, to obtain unambiguous SBR. In fig. 6, we plot the ASDs of the residual pseudorange estimates (deviations of the estimates from the true pseudorange values in the simulation) for link 12 (upper plot) and link 21 (lower plot). Blue lines show the ASDs of the residual PRNR, which are essentially the ASDs of the white ranging noises. The residual pseudorange estimates after ranging noise reduction are plotted in orange. They are obtained by combining the PRNR with the sideband range rates. Therefore, they are limited by right-handed modulation noise (dashed black line). In green, we plot the residual pseudorange estimates after subtraction of right-handed modulation noise with the RFI beatnotes. 
Now the estimates are limited by left-handed modulation noise (dash-dotted black line). The residual SBR are drawn red, they are limited by left-handed modulation noise as well, but involve a smaller offset, since the SBR phase anchor points are more accurate than PRNR after ranging noise reduction (see fig. 7). In the case of left-handed MOSAs (see link 12) the RFI beatnotes need to be time shifted to form the delayed \(\Delta M\) measurements. We apply the time shifting method of PyTDI [26], which consists in a Lagrange interpolation (we use order 5). The interpolation introduces noise in the high frequency band (see the bump at \(2\,\mathrm{Hz}\) in the upper plot) but this is out of band. Figure 6: ASDs of the residual pseudorange estimates for link 12 (upper plot) and link 21 (lower plot). In blue, residual PRNR. In orange, residual pseudorange estimates after ranging noise reduction. In green, residual pseudorange estimates after subtraction of right-handed modulation noise. In red, residual SBR. Dashed black lines, right-handed modulation noise model. Dash-dotted black lines, left-handed modulation noise model. Fig. 7 shows the different residual pseudorange estimates as time series. The upper plot shows the 6 residual pseudorange estimates after ranging noise reduction, the second plot after subtraction of right-handed modulation noise. The third plot shows the SBR residuals. The subtraction of right-handed modulation noise reduces the noise floor, but it does not increase the accuracy of the pseudorange estimates. The accuracy can be increased by one order of magnitude through the resolution of the SBR ambiguities. After ambiguity resolution, SBR constitutes pseudorange estimates with sub-mm accuracy. ### Crosschecks Here we demonstrate the performance of our implementation of the crosschecks for PRNR ambiguity and PRNR offset. The PRNR ambiguities can be resolved using either GOR (see eq. 46) or TDIR (see eq. 47). To evaluate the performance of both methods, we simulate 1000 short (150 s) telemetry datasets with LISA Instrument [23], and one set of ODs and MOC-TCs for each of them. We compute the GOR and TDIR pseudorange estimates for each of the 1000 datasets. Fig. 8 shows the GOR residuals (first row) and the TDIR residuals (second row) in km as histogram plots. We see that the GOR accuracy depends on the arm, because we obtain more accurate ODs for arms oriented in line of sight direction than for those oriented cross-track. The PRNR ambiguity resolution via GOR is successful for GOR deviations smaller than 200 km. In the case of the links 23, 31, 13, and 32 all PRNR ambiguity resolutions via GOR are successful. For each of the links 12 and 21, 2 out of the 1000 PRNR ambiguity resolutions fail. The GOR estimates are passed as initial values to TDIR, which then reduces the uncertainty by almost one order of magnitude (lower plot of fig. 8), such that eventually all PRNR ambiguity resolutions are successful. TDIR can also be applied to estimate the PRNR offsets. Hence, it constitutes a cross-check of the on-ground PRNR offset calibration. We simulate one year of telemetry data using LISANode [24]. We set the PRNR offsets to 160.3 m, \(-210.2\) m, 137.3 m, \(-250.3\) m, \(-188.8\) m, and 105.1 m for the links 12, 23, 31, 13, 32, and 21, respectively. We divide the dataset into 1 day chunks (left plots in fig. 9), 2 day chunks (central plots in fig. 9), and 3 day chunks (right plots in fig. 9). 
In each partition we apply the TDIR estimator presented in section III.3 to each chunk in order to estimate the PRNR offsets. This computation was parallelized and executed on the ATLAS cluster at the AEI Hannover. In the upper part of fig. 9 we show the offset estimation residuals for the three chunk sizes. The offset estimation accuracy increases with the chunk size in agreement with the order of magnitude estimate through eq. 40. In the lower part of fig. 9 we plot the residual cumulative averages of the PRNR offset estimates for the different chunk sizes. Here, it can be seen that the TDIR estimator performs similarly for the different chunk sizes. With the 3 day chunk size we can estimate all PRNR offsets with an accuracy of better than 20 cm after 10 days. The dashed-black lines indicate 6.25 cm (half the SBR ambiguity). This is the required PRNR offset estimation accuracy for a successful SBR ambiguity resolution. With the 3 day chunk size all offset estimation residuals are below these 6.25 cm after 179 days. ## V Conclusion The reduction of laser frequency noise in TDI crucially depends on information about the pseudoranges. There are four pseudorange observables each having advantages and disadvantages. In this article, we first derived their observation equations carefully taking into account ambiguities, noise, and on-board delays, which cause offsets and timestamping delays. We then proposed a three-stage ranging sensor fusion (initial data treatment, ranging processing, crosschecks, compare fig. 4) to combine Figure 7: Upper plot: residual pseudorange estimates after ranging noise reduction. Second plot: residual pseudorange estimates after subtraction of right-handed modulation noise. Third plot: residual SBR estimates. the four pseudorange observables, such that we obtain optimal pseudorange estimates. We pointed out that the common carrier, PRN, and sideband timestamp delays (see eqs. 16, 22, and 33), as well as the PRNR and SBR offsets (see eqs. 23 and 26) need to be calibrated on ground, so that they can be compensated in the initial data treatment. We further derived that the small optical path lengths between laser and PBS, PBS and ISI BS, and laser and ISI BS show up in the uncommon delays (see eq. 12), which are to be applied in TDI. We proposed to measure these optical path lengths on ground, so that during operation they can be combined with the pseudorange estimates to form the uncommon delays. We identified the processing steps, which need to be performed continuously during operation. These are the PRNR unwrapping, and the reduction of ranging and right-handed modulation noise, we referred to them as ranging processing. We implemented the ranging processing numerically: we showed that the white ranging noise can be reduced by combining the PRNR with the sideband range rates in a KF. We split up the system and implemented one KF per SC, such that the individual SCETs served as KF time-grids. We further applied the RFI beatnotes to subtract the right-handed modulation noise. The pseudorange estimates we obtained this way were at sub-cm accuracy. We showed that in combination with phase anchor points they allow for the resolution of the SBR ambiguity resulting in pseudorange estimates at sub-mm accuracy. We implemented crosschecks for the PRNR ambiguities and offsets. We showed that both GOR and TDIR allow for the resolution of the PRNR ambiguity. 
We applied TDIR as a crosscheck for the PRNR offset calibration and demonstrated its performance for one year of telemetry data: after about 180 days all PRNR offset estimates reached an accuracy of better than \(6.25\,\mathrm{cm}\) allowing for the resolution of the SBR ambiguity. In reality, the PRNR offsets are slowly time-varying. The investigation of the PRNR offset estimation via TDIR could be extended for linearly time varying PRNR offsets. The delay model for the TDIR estimator would then become (compare eq. 57): \[d_{ij}^{\pi_{i}}(\tau)=\hat{R}_{ij}^{\pi_{i}}(\tau)-(O_{ij}^{0}+O_{ij}^{1}\cdot \tau), \tag{65}\] The TDIR estimator would now have to fit the 12 parameters \(O_{ij}^{0}\) and \(O_{ij}^{1}\). Apart from that, tone-assisted TDIR [27] could be applied for the PRNR offset esti Figure 8: PRNR ambiguity resolution via GOR (upper plots) and TDIR (lower plots). The histogram plots show the residual GOR and TDIR pseudorange estimates for the different links. mation in order to reach faster convergence. As a further follow-up investigation, time-varying on-board delays and the associated SC monitors could be included into the simulation, which would enable an inspection of the feasibility of the initial data treatment as proposed in section III. Furthermore, the ranging sensor fusion could be included into the different INReP topologies. Apart from that, the algorithms could be applied to real data as, e.g., produced by the hexagon experiment [28], [29]. ## Acknowledgements J. N. R. acknowledges the funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122, Project ID 390833453). Furthermore, he acknowledges the support by the IMPRS on Gravitational Wave-Astronomy at the Max Planck Institute for Gravitational Physics in Hannover, Germany. This work is also supported by the Max-Planck-Society within the LEGACY ("Low-Frequency Gravitational-Wave Astronomy in Space") collaboration (M.IF.A.QOP18098). O. H. and A. H. acknowledge support from the Programme National GRAM of CNRS/INSU with INP and IN2P3 co-funded by CNES and from the Centre National d'Etudes Spatiales (CNES). The authors thank Miles Clark, Pascal Grafe, Waldemar Martens, and Peter Wolf for useful discussions. The study on PRNR offset estimation via TDIR was performed on the ATLAS cluster at AEI Hannover. The authors thank Carsten Aulbert and Henning Fehrmann for their support. ## Appendix A Pseudoranges in TCB The pseudorange can be expressed in TCB by writing the SCETs of receiving and emitting SC as functions of TCB evaluated at the events of reception and emission, respectively: \[R^{t}_{ij}(t_{\mathrm{rec}})=\hat{\tau}^{t}_{i}(t_{\mathrm{rec}})-\hat{\tau}^{ t}_{j}(t_{\mathrm{emit}}), \tag{10}\] \(\hat{\tau}^{t}_{i}\) denotes the SCET of SC \(i\) expressed as a function of TCB. The TCB of emission can be expressed as the Figure 9: We simulate one year of telemetry data with PRNR offsets in the order of 100 m. We divide this dataset into 1 day chunks (left plots), 2 day chunks (central plots), and 3 day chunks (right plots). We then we apply TDIR to each of these chunks in order to estimate the PRNR offsets. Upper plots: Residual offset estimates in m for the different chunk sizes. Lower plots: residual offset estimates after cumulative averaging in m for the different chunk sizes. Dashed-black lines: half the SBR ambiguity. 
difference between the TCB of reception and the light travel time from SC \(j\) to SC \(i\), denoted by \(d^{t}_{ij}\): \[R^{t}_{ij}(t_{\text{rec}})=\hat{\tau}^{t}_{i}(t_{\text{rec}})-\hat{\tau}^{t}_{j} \left(t_{\text{rec}}-d^{t}_{ij}(t_{\text{rec}})\right), \tag{10}\] in the following we drop the subscript, hence \(t\) refers to the TCB of reception. The SCET can be expressed in terms of the SCET deviation from TCB \[\hat{\tau}^{t}_{i}(t)=t+\delta\hat{\tau}^{t}_{i}(t), \tag{11}\] which allows us to write eq. 10 as \[R^{t}_{ij}(t)=\delta\hat{\tau}^{t}_{i}(t)+d^{t}_{ij}(t)+\delta\hat{\tau}^{t}_{j }(t-d^{t}_{ij}(t). \tag{12}\] Expanding the emitting SC SCET deviation from TCB around the reception TCB yields: \[R^{t}_{ij}(t) =\delta\hat{\tau}^{t}_{ij}(t)+\left(1+\delta\hat{\tau}^{t}_{j}(t) \right)\cdot d^{t}_{ij}(t), \tag{13}\] \[\delta\hat{\tau}^{t}_{ij}(t) :=\delta\hat{\tau}^{t}_{i}(t)-\delta\hat{\tau}^{t}_{j}(t). \tag{14}\] Hence, in a global time frame like TCB, the pseudorange can be expressed in terms of the light travel time \(d^{t}_{ij}\) and the differential SCET offset \(\delta\hat{\tau}^{t}_{ij}\). ## Appendix B Subtraction of right-handed modulation noise Following the notation in [6], we express the RFI beatnotes in frequency: \[\text{RFI}^{\hat{\tau}_{i}}_{ij}(\tau) =\nu^{\hat{\tau}_{i}}_{ik}(\tau)-\nu^{\hat{\tau}_{i}}_{ij}(\tau), \tag{15}\] \[\text{RFI}^{\hat{\tau}_{i}}_{\text{sb},\,ij}(\tau) =\nu^{\hat{\tau}_{i}}_{\text{sb},\,ik}(\tau)-\nu^{\hat{\tau}_{i} }_{\text{sb},\,ij}(\tau),\] (16) \[\nu^{\hat{\tau}_{i}}_{\text{sb},\,ij}(\tau) =\nu^{\hat{\tau}_{i}}_{ij}(\tau)+\nu^{\hat{\tau}_{i}}_{ij}\cdot(1+ M^{\hat{\tau}_{i}}_{ij}). \tag{17}\] In this article we do not consider on-board delays in the RFI beatnotes. We combine the RFI carrier and sideband beatnotes to form measurements of the right-handed modulation noise: \[\Delta M^{\hat{\tau}_{i}}_{i} :=\frac{\text{RFI}^{\hat{\tau}_{i}}_{ij}-\text{RFI}^{\hat{\tau}_ {i}}_{\text{sb},\,ij}+1\,\text{MHz}}{2}\] \[-\frac{\text{RFI}^{\hat{\tau}_{i}}_{ik}-\text{RFI}^{\hat{\tau}_ {i}}_{\text{sb},\,ik}-1\,\text{MHz}}{2},\] \[=\nu^{\text{m}}_{ij}\cdot M^{\hat{\tau}_{i}}_{ij}-\nu^{\text{m}} _{ik}\cdot M^{\hat{\tau}_{i}}_{ik}, \tag{18}\] \(i\), \(j\), and \(k\) being a cyclic permutation of 1, 2, and 3. We can now subtract the \(\Delta M^{\hat{\tau}_{i}}_{i}\) measurements from the sideband range rates (eq. 34). Thus, we reduce the right-handed modulation noise, so that we are limited by the one order of magnitude lower left-handed modulation noise: \[\text{SBR}^{\hat{\tau}_{i}}_{\text{cor},\,ij}=\text{SBR}^{\hat{ \tau}_{i}}_{ij}-\text{\dot{D}}^{\hat{\tau}_{i}}_{ij}\cdot\Delta M^{\hat{\tau} _{j}}_{ij}(\tau),\] \[= \nu^{\text{m}}_{ji}\cdot\dot{R}^{\hat{\tau}_{i}}_{ij}+\nu^{\text{ m}}_{ij}\left(M^{\hat{\tau}_{i}}_{ij}-\text{\dot{D}}^{\hat{\tau}_{i}}_{ij}\cdot M^{ \hat{\tau}_{j}}_{jk}(\tau)\right), \tag{19a}\] \[\text{SBR}^{\hat{\tau}_{i}}_{\text{cor},\,ik}=\text{SBR}^{\hat{ \tau}_{i}}_{ik}(\tau)+\Delta M^{\hat{\tau}_{i}}_{i}(\tau),\] \[= \nu^{\text{m}}_{ki}\cdot\dot{R}^{\hat{\tau}_{i}}_{ik}+\nu^{\text{ m}}_{ki}\left(M^{\hat{\tau}_{i}}_{ij}+\dot{\text{D}}^{\hat{\tau}_{i}}_{ik}\,M^{ \hat{\tau}_{k}}_{ki}(\tau)\right), \tag{19b}\] \(i\), \(j\), and \(k\) being a cyclic permutation of 1, 2, and 3. ## Appendix C Solar wind dispersion The average solar wind particle density at the LISA orbit is about \(10\,\text{cm}^{-3}\). 
Hence, at the scales of optical wavelengths the solar wind plasma can be treated as a free electron gas with the plasma frequency [30] \[\nu^{2}_{p}=\frac{n_{e}\,e^{2}}{4\pi^{2}\,\epsilon_{0}\,m_{e}}\approx 8\times 10^{ 8}\,\text{s}^{-2}, \tag{20}\] \(n_{e}\) denotes the electron density, \(e\) the elementary charge, \(m_{e}\) the electron mass, and \(\epsilon_{0}\) the the vacuum permittivity. Contributions from protons and ions can be neglected as the plasma frequency is inversely proportional to the mass. We describe the refractive index of the solar wind plasma by the Appleton equation. Neglecting collisions and magnetic fields it denotes \[n(\nu)=\sqrt{1-\left(\frac{\nu_{p}}{\nu}\right)^{2}}. \tag{21}\] In a dispersive medium we need to distinguish between phase and group velocity. The phase velocity is given by \[v_{\text{p}}(\nu)=\frac{c}{n(\nu)}=\frac{c}{\sqrt{1-\left(\frac{\nu_{p}}{\nu} \right)^{2}}}\approx c\cdot\left(1+\frac{1}{2}\frac{\nu^{2}_{p}}{\nu^{2}} \right), \tag{22}\] where we applied the expansion for \(\nu\gg\nu_{p}\), as we consider optical frequencies. The product of group and phase velocity yields \(c^{2}\). Consequently, the group velocity is \[v_{\text{g}}(\nu)=c\cdot n(\nu)=c\cdot\sqrt{1-\left(\frac{\nu_{p}}{\nu}\right)^{ 2}}\approx c\cdot\left(1-\frac{1}{2}\frac{\nu^{2}_{p}}{\nu^{2}}\right). \tag{23}\] Group and phase delay can now be written as \[\Delta\tau_{\text{g}}(\nu) =L\left(\frac{1}{c\cdot\sqrt{1-\left(\frac{\nu_{p}}{\nu}\right)^{2} }}-\frac{1}{c}\right)\approx\frac{L\,\nu^{2}_{p}}{2\,c}\cdot\frac{1}{\nu^{2}}, \tag{24}\] \[\Delta\tau_{\text{p}}(\nu) =L\left(\frac{\sqrt{1-\left(\frac{\nu_{p}}{\nu}\right)^{2}}}{c}- \frac{1}{c}\right)\approx-\frac{L\,\nu^{2}_{p}}{2\,c}\cdot\frac{1}{\nu^{2}}, \tag{25}\] where \(L=2.5\,\text{Gm}\) denotes the LISA armlength. PRN and sideband signals propagate at the group velocity, hence they are delayed by the group delay: \[\Delta\tau_{\text{g}}^{\text{prn}} =\Delta\tau_{\text{g}}(281\,\text{THz}\pm 1\,\text{MHz})\approx 12.7\, \text{pm}, \tag{26}\] \[\Delta\tau_{\text{g}}^{\text{sb}} =\Delta\tau_{\text{g}}(281\,\text{THz}\pm 2.4\,\text{GHz})\approx 12.7\, \text{pm}. \tag{27}\] The phase delay is negative, because the phase velocity is bigger than \(c\). Therefore, the laser phase is advanced with respect to a wave propagating in vacuum. For the LISA carrier this phase advancement corresponds to \[\Delta\tau_{\text{p}}(281\,\text{THz})\approx-12.7\,\text{pm}. \tag{28}\]
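The numbers above can be reproduced in a few lines (a sketch using SciPy's physical constants; the armlength and electron density are the values quoted in this appendix):

```python
import numpy as np
from scipy.constants import e, m_e, epsilon_0

L = 2.5e9       # LISA armlength in m
n_e = 10e6      # 10 cm^-3 expressed in m^-3

# plasma frequency squared, approximately 8e8 s^-2
nu_p_sq = n_e * e**2 / (4 * np.pi**2 * epsilon_0 * m_e)

def group_delay_length(nu):
    """Group delay for carrier frequency nu (Hz), expressed as an equivalent
    path length in metres (delay times c); the phase advancement is the same
    number with the opposite sign."""
    return L * nu_p_sq / (2.0 * nu**2)

print(group_delay_length(281e12))   # ~1.3e-11 m, i.e. about 12.7 pm
```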
2308.15692
Intriguing Properties of Diffusion Models: An Empirical Study of the Natural Attack Capability in Text-to-Image Generative Models
Denoising probabilistic diffusion models have shown breakthrough performance to generate more photo-realistic images or human-level illustrations than the prior models such as GANs. This high image-generation capability has stimulated the creation of many downstream applications in various areas. However, we find that this technology is actually a double-edged sword: We identify a new type of attack, called the Natural Denoising Diffusion (NDD) attack based on the finding that state-of-the-art deep neural network (DNN) models still hold their prediction even if we intentionally remove their robust features, which are essential to the human visual system (HVS), through text prompts. The NDD attack shows a significantly high capability to generate low-cost, model-agnostic, and transferable adversarial attacks by exploiting the natural attack capability in diffusion models. To systematically evaluate the risk of the NDD attack, we perform a large-scale empirical study with our newly created dataset, the Natural Denoising Diffusion Attack (NDDA) dataset. We evaluate the natural attack capability by answering 6 research questions. Through a user study, we find that it can achieve an 88% detection rate while being stealthy to 93% of human subjects; we also find that the non-robust features embedded by diffusion models contribute to the natural attack capability. To confirm the model-agnostic and transferable attack capability, we perform the NDD attack against the Tesla Model 3 and find that 73% of the physically printed attacks can be detected as stop signs. Our hope is that the study and dataset can help our community be aware of the risks in diffusion models and facilitate further research toward robust DNN models.
Takami Sato, Justin Yue, Nanze Chen, Ningfei Wang, Qi Alfred Chen
2023-08-30T01:21:11Z
http://arxiv.org/abs/2308.15692v2
Intriguing Properties of Diffusion Models: A Large-Scale Dataset for Evaluating Natural Attack Capability in Text-to-Image Generative Models ###### Abstract Denoising probabilistic diffusion models have shown breakthrough performance that can generate more photo-realistic images or human-level illustrations than the prior models such as GANs. This high image-generation capability has stimulated the creation of many downstream applications in various areas. However, we find that this technology is indeed a double-edged sword: We identify a new type of attack, called the Natural Denoising Diffusion (NDD) attack based on the finding that state-of-the-art deep neural network (DNN) models still hold their prediction even if we intentionally remove their robust features, which are essential to the human visual system (HVS), by text prompts. The NDD attack can generate low-cost, model-agnostic, and transferable adversarial attacks by exploiting the natural attack capability in diffusion models. Motivated by the finding, we construct a large-scale dataset, Natural Denoising Diffusion Attack (NDDA) dataset, to systematically evaluate the risk of the natural attack capability of diffusion models with state-of-the-art text-to-image diffusion models. We evaluate the natural attack capability by answering 6 research questions. Through a user study to confirm the validity of the NDD attack, we find that the NDD attack can achieve an 88% detection rate while being stealthy to 93% of human subjects. We also find that the non-robust features embedded by diffusion models contribute to the natural attack capability. To confirm the model-agnostic and transferable attack capability, we perform the NDD attack against an AD vehicle and find that 73% of the physically printed attacks can be detected as a stop sign. We hope that our study and dataset can help our community to be aware of the risk of diffusion models and facilitate further research toward robust DNN models. ## Introduction Denoising diffusion probabilistic models (DDPM) [11], or simply diffusion models, have shown breakthrough performance in image generation. Once the DDPM demonstrates the generation capability of photo-realistic images and human-level illustrations, numerous diffusion models, such as DALL-E 2 [20], Stable Diffusion [14], and Firefly [1], are actively released and widely available via APIs or open-source models. This technical breakthrough has facilitated many applications in various fields: arts [13], medical [15], and autonomous driving (AD) [16]. The diffusion model is widely utilized in various downstream tasks, but recent studies have also raised concerns regarding the new security and privacy risks introduced by the diffusion model. Chen et al. [14] shows that the diffusion models can be utilized to generate more transferability and imperceptibility adversarial attacks. Carlini et al. [15] demonstrate that the diffusion models memorize training images and can emit them. These recent studies motivate us to further investigate the security risks of diffusion models. In this work, we identify simple but intriguing properties of the images generated by text-to-image diffusion models, in which text prompts guide the image generation process via a contrastive image-text supervision model, such as OpenAI CLIP [1] guides the image generation process via text prompts. Fig. 1 shows representative examples that motivate this work. 
We generate the stop sign images using state-of-the-art diffusion Figure 1: Examples of the natural attack capability in diffusion models (row). The images are generated with prompts that intentionally remove essential features in the human visual system (HVS) while keeping the object name (stop sign) in the text prompt. Even without these essential features, state-of-the-art object detectors (column) still detect these objects with high scores. models with text prompts that intentionally break the fundamental properties that humans use to identify stop signs, such as red color and octagonal shape, but the text prompt still contains the object name (stop sign). As shown, the diffusion models strictly follow our instructions and generate images that are not generally recognized as stop signs, since legitimate stop signs should not be blue or rectangular. Surprisingly, many popular deep neural network (DNN)-based object detectors still recognize these examples as stop signs with high confidence. We define this new attack as the Natural Denoising Diffusion (NDD) attack. These results suggest that object detectors may use imperceptible features embedded by diffusion models, and lead us to question that: _Do the images generated by the diffusion model have a natural attack capability against the DNN models?_ The rest of this paper is structured to validate the question. We first construct our dataset, named the Natural Diffusion Denoising Attack (NDDA) dataset, to systematically understand the natural attack capability of diffusion models. We use 3 state-of-the-art diffusion models to collect the images with and without robust features that play essential roles in our human visual system (HVS) (Grill-Spector and Malach, 2004; Ge et al., 2022; Ilyas et al., 2019). Following prior works (Grill-Spector and Malach, 2004; Ge et al., 2022), we define 4 robust features: shape, color, text, and pattern. Second, we conduct a natural attack capability analysis of the NDDA dataset with 5 state-of-the-art object detection models. We find that most object detectors still maintain their detection, even though the text prompt guides them to remove robust features entirely or partially. For example, 32% of the generated stop signs are still detected as stop signs even though all robust features are guided to be removed. This result means that text-to-image diffusion models embed intriguing features that are imperceivable to humans but generalizable to DNN models if the subject word (e.g., stop sign) is in the prompt. We confirm that all diffusion models we evaluate have the natural attack capability against the DNN models Third, we conduct an extensive empirical study to further evaluate the validity and quantify the natural attack capability by answering 6 novel research questions (RQs). We conduct a user study to evaluate the stealthiness of the generated images because valid adversarial attacks need to be not only effective against the DNN models but also stealthy against humans. For example, humans will not be fooled if the robust features are not properly removed since it should be just a legitimate stop sign. As a result, we identify the high stealthiness of the NDD attacks: The stop sign images generated by altering their "STOP" text have 88% detection rate against object detectors while 93% of humans do not regard it as a stop sign. 
Furthermore, we measure the impact of the non-robust features that are predictive but incomprehensible to humans on the natural attack capability in diffusion models with an analysis inspired by Ilyas et al. (Ilyas et al., 2019), which proposes a methodology to train "robustified" classifiers against the non-robust features. By comparing the robustified and normal classifiers, we illustrate that the non-robust features play a meaningful role in the natural attack capability in diffusion models. Finally, we discuss our findings and the limitations of this work. In summary, our contributions are as follows: * We identify a new security threat of the NDD attack that exploits the natural attack capability in diffusion models to generate model-agnostic and transferable adversarial attacks via simple text prompts that remove robust features. * We construct a new large-scale dataset, named the NDDA dataset, to systematically evaluate the natural attack capability in diffusion models. We cover all patterns of removing the 4 robust features: shape, color, text, and pattern. * We systematically evaluate the natural attack capability by answering 6 research questions. The NDD attack can achieve an attack success rate of 88%, while being stealthy for 93% of human subjects in the stop sign case. We also find that the sensitivity to the non-robust features have a correlation with the natural attack capability. * We confirm the model-agnostic and transferable attack capability of the NDD attack on an commodity AD vehicle. It identifies 8 out of 11 (73%) printed attacks as stop signs. **Dataset release.** Our dataset will be available after this paper is published. ## Related Work **Denoising diffusion model.** DDPM (Ho, Jain, and Abbeel, 2020) is a generative model that exploits the intuition behind nonequilibrium thermodynamics. Training a diffusion model involves both forward and reverse diffusion processes. In the forward process, the model perturbs a clean image with Gaussian noise. In the reverse process, the diffusion model learns to remove this noise for the same number of time steps. In short, diffusion models are image denoisers that learn to reconstruct the original images from randomly noisy images. This procedure is simple but shows remarkable performance in producing high-quality images. Dhariwal et al. (Dhaiwal and Nichol, 2021) shows that diffusion models achieve much higher quality metrics, such as FID, inception score, and precision, than GANs (Brock, Donahue, and Simonyan, 2019). **Text-to-image diffusion models.** Text-to-image diffusion models is (Ramesh et al., 2022; Saharia et al., 2022; Rombach et al., 2022) one of the most common approaches to flexibly control the image generation process. To inform the text information in the generation process, contrastive image-text supervision models such as OpenAI CLIP (Radford et al., 2021) and OpenCLIP (Ilharco et al., 2021) are integrated into their training process. **Adversarial Attacks.** DNN models are known to be generally vulnerable to adversarial attacks (Szegedy et al., 2014; Goodfellow, Shlens, and Szegedy, 2015; Zhang et al., 2020), which can significantly confuse the predictions of DNN models by adding subtle perturbations to the input images. Adversarial attacks are also known to be realizable in the real world (Shen et al., 2022; Cao et al., 2021, 2020; Wang et al., 2023) by putting patches (yeh Chiang et al., 2020; Wang et al., 2022; Sato et al., 2021, 2021) or placing stickers (Eykholt et al., 2018). 
Despite the great efforts of recent lines of research, none have been able to fully address this vulnerability. Prior works [11, 12, 13] report that adversarial attacks are not due to bugs in DNNs, but due to non-robust features that are predictive but incomprehensible to humans. We identify that the non-robust features also play a large role in the natural attack capability in diffusion models. ## Dataset Design To systematically evaluate the natural attack capability of the diffusion models, we construct a new large-scale dataset, called the Natural Denoising Diffusion Attack (NDDA) dataset. The design goal for the NDDA dataset is to systematically collect images with and without the robust features upon which human perception relies. To this end, the major challenge is to identify what kinds of robust features are important in our human visual system (HVS) for object detection. Although the complete mechanism of the HVS is not fully understood, prior works [13, 12] recognize that shape, texture, and color are the most important features for the HVS to identify objects. Therefore, in this study, we define them as robust features for object recognition. To further explore the motivating examples in Fig. 1, we decompose texture into text and pattern because text has a special meaning for human perception. For example, people may not consider a sign to be a stop sign if it does not have the exact text "STOP" on it. Table 1 lists the templates of text prompts and examples for the "stop sign" label to remove or alter the 4 robust features. In this case, we consider 16 different combinations with and without robust features and generate 50 images for each combination. The 3 object classes are selected from the classes of the COCO dataset [10]: a stop sign for an artificial sign, a fire hydrant for an artificial object, and a horse for a natural object. We select these 3 classes because of their relatively higher detection rates than others in our preliminary experiments on generated images. We adopt the COCO classes to make the experiments easier, as we can utilize many existing models pretrained on the COCO dataset. Fig. 2 shows an overview of our dataset. ### Natural Attack Capability Analysis With the NDDA dataset, we evaluate the natural attack capability in diffusion models to confirm its effectiveness against state-of-the-art object detectors. _Experimental setup._ We obtain the inference results of all images in the NDDA dataset with 5 popular object detectors: YOLOv3 [12], YOLOv5 [13], DETR [14], Faster R-CNN [15], and RTMDet [12]. For YOLOv5, we use their official pretrained model; for the others, we use the pretrained models in MMDetection [15]. All models are trained on the COCO dataset [10]. We use 0.5 as the confidence threshold for all models. _Results._ Table 2 shows the detection results of the stop sign images in the NDDA dataset generated by 3 diffusion models. The detection rate is the fraction of images in which one or more stop signs are detected. As shown, the majority of the images are still detected as stop signs even though we remove a robust feature. While YOLOv5 shows slightly higher robustness, all object detectors still detect stop signs in the majority of images: on average, the detection rate for all object detectors is \(\geq\)37%, meaning that at least 37% of the images generated by the diffusion models have the potential to be used as adversarial attacks. We also observed similar results on the other labels.
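To make the detection-rate metric concrete, the sketch below shows how such a tally can be computed once per-image detections are available from any of the detectors listed above. The detection records and numbers here are hypothetical placeholders; in practice the (label, score) pairs would come from the YOLO, DETR, Faster R-CNN, or RTMDet inference outputs on the generated NDDA images.

```python
# Minimal sketch of the detection-rate computation used in the analysis above.
def detection_rate(per_image_detections, target_label="stop sign", threshold=0.5):
    """Fraction of images in which at least one target object is detected above threshold."""
    hits = sum(
        any(label == target_label and score >= threshold for label, score in detections)
        for detections in per_image_detections
    )
    return hits / len(per_image_detections)

# Example with made-up detections for three generated images.
example = [
    [("stop sign", 0.91), ("car", 0.40)],  # counted: stop sign above threshold
    [("kite", 0.55)],                      # not counted: no stop sign
    [("stop sign", 0.32)],                 # not counted: below the 0.5 threshold
]
print(detection_rate(example))  # -> 0.333...
```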
If these images are valid adversarial attacks, the attacker can easily generate model-agnostic and highly transferable adversarial attacks with significantly less effort than the previous works [16, 12, 13] that need iterative attack optimization processes. Meanwhile, these results are still not enough to fully validate the vulnerability of DNN models against the images generated by diffusion models because adversarial attacks must satisfy 2 requirements: effectiveness against models and stealthiness to humans. For example, there is a possibility that diffusion models may ignore the text prompts and just generate legitimate stop signs. In the next section, we systematically evaluate the impact of the vulnerability. ## Experimental Analysis In this section, we conduct an extensive empirical study to further evaluate the validity and quantify the natural attack capability by answering 6 research questions (RQs). ### RQ1: Does the natural attack capability exist in the previous image generation models? We first investigate whether the natural attack capability exists in the previous image generation models. _Experimental setup._ As a state-of-the-art image generation method before diffusion models, we evaluate BigGAN [12]. To generate images guided by text prompts, we use BigSleep [13], which guides the image generation process of BigGAN with OpenAI CLIP [1]. We evaluate the 5 object detectors in Table 2. We use a detection threshold of 0.5. For each model, we evaluate 6 combinations of partial or complete removal of the robust features. _Results._ Fig. 3 shows the average detection rate of stop sign images generated by each image generation model over the 5 object detectors. As shown, all diffusion models have significantly higher detection rates than the BigSleep GAN model. In particular, Stable Diffusion 2 and DeepFloyd IF have detection rates of \(\geq\)28% even if all robust features are removed or altered. Meanwhile, BigSleep has much lower detection rates (7.5%). We thus consider that _the natural attack capability existed slightly in prior image generation models, but it becomes significantly more severe in diffusion models_. Diffusion models have much higher image generation capabilities than GANs, which may also enhance their ability to generate more advanced non-robust features. ### RQ2: How stealthy is the NDD attack against humans? To be valid adversarial attacks, the NDD attacks should satisfy two requirements: effectiveness against DNN models and stealthiness against humans. We have so far confirmed the effectiveness against DNN models as in Table 2, but this does not mean that the attack is stealthy. For example, the diffusion models may ignore the text prompts and just generate legitimate objects. To answer this question, we perform a user study to investigate the stealthiness against humans and the validity of generated images. _Experimental setup._ We recruited 82 human subjects on Prolific [3], a crowdsourcing platform specialized for research purposes. Human subjects are asked to answer yes or no to whether the object of interest (e.g., stop sign and fire hydrant) is in the presented image or not. Considering the reasonable experimental time for human subjects to maintain their concentration, image generation models were limited to the following: DeepFloyd IF, the diffusion model with the highest detection rates in Table 2, and BigSleep, a state-of-the-art GAN-based model.
We generate 3 images for each text prompt and each image generation model and also show 3 real images as the baseline. For the DeepFloyd IF images, we chose images that can fool at least one object detector in Table 2, i.e., all the images are valid attacks against an object detector. More details are in our questionnaire form [1]. _Results._ Table 3 lists the results of our user study. The detection rate is the proportion of users who answer "yes", i.e., they identify the target object in the presented image. As shown, DeepFloyd IF's images on the benign prompts have similarly high detection rates as the real images. On the other hand, the BigSleep images are not detected as the target objects, as the detection rates are \(\leq\)4%. This indicates that the images generated by the state-of-the-art GAN-based model are neither recognized as the target objects by humans nor effective as attacks against object detectors, i.e., the generated images are far from realistic. For the images without robust features, their detection rates are much lower than for the real images and the images of benign prompts. We observe that different objects have different sensitivities to each robust feature. For example, the text is very important to the stop sign, as the human detection rate is 7% when the text on it is altered. DeepFloyd IF thus can easily generate effective adversarial attacks to be detected as a stop sign, since the attacks are detected as stop signs by object detectors in 88% of cases (Table 2) while fooling 93% of humans. These results indicate that _the natural attack capability in diffusion models can indeed generate valid adversarial attacks that are stealthy to humans._ ### RQ3: Does the incapability of text generation correlate with the natural attack capability? We observe that the NDD attack has high stealthiness against humans through our user study and find that different objects have different sensitivities to different robust features. In particular, the text on the stop sign is important for humans to identify a stop sign. Meanwhile, the object detectors are influenced by the shape or pattern rather than the text, as in Table 2. Motivated by this, we evaluate the text generation capability to measure the natural attack capability.

\begin{table} \begin{tabular}{l l l} \hline \hline & \multicolumn{2}{c}{Text prompt to remove/alter robust features} \\ \cline{2-3} Removed Robust Features & Format & Example: Stop sign \\ \hline Benign prompt & [Subject] & Stop sign \\ Shape & [Shape] [Subject] & Square stop sign \\ Color & [Color] [Subject] & Blue stop sign \\ Text & [Subject] with "[Text]" on it & Stop sign with "Hello" on it \\ Pattern & [Subject] with a [Pattern] paint on it & \\ \hline \hline \end{tabular} \end{table} Table 1: Templates of text prompts and examples for the “stop sign” object to remove the 4 robust features.

Figure 2: Overview of the Natural Denoising Diffusion Attack (NDDA) dataset. We alter or remove the 4 types of robust features partially or entirely. For the stop sign, we alter the text on it considering its importance to be recognized as a stop sign. For each set of robust features, we generate images with 3 diffusion models for 3 object classes.

_Experimental setup._ We generate the images with the 3 diffusion models (DALL-E 2, Stable Diffusion 2, and DeepFloyd IF) and the GAN-based model (BigSleep). We use the text prompt format: "text of [word]".
For the [word], we evaluate 5 popular words: hello, welcome, goodbye, script, and stop, and 20 images per word are generated with different seeds. To evaluate the text generation capability, we measure the normalized Levenshtein (edit) distance, which is 0 for identical pairs of text and 1 for completely different pairs of text. Given a [word], we apply an optical character recognition (OCR) method to the generated images and calculate the edit distance between [word] and the recognized sentences. When multiple sentences are recognized, they are concatenated into one sentence with a space. We use Keras-OCR [14] for the OCR. _Results._ Fig. 4 shows the averaged normalized Levenshtein (edit) distances over all 5 words and for "stop" only. As shown, DeepFloyd IF can generate the most accurate text; DALL-E 2 and Stable Diffusion 2 are the second and third, while BigSleep is the worst. This order is the same as the order of average detection rates of the object detectors in Table 2. In particular, DeepFloyd IF shows a high capability to generate "stop" text. This result is consistent with the observation in RQ2 that the stealthiness against humans is significantly improved on DeepFloyd IF when the robust feature of the text is removed. In summary, the experimental results show that the text generation capability of image generation models has a certain similarity to their natural attack capability. The capability to generate a complex pattern such as the alphabet may correlate with the capability to generate non-robust features that are too subtle for the HVS. We may use this characteristic to design a simple defense against the NDD attack, i.e., we can differentiate the NDD attack by checking whether the exact text "STOP" appears on it in the stop sign case. Although these empirical results are still not enough to fully support the edit distance-based method as a metric to measure the natural attack capability, _the metric can be used as a simple sanity check for the image generation models and as a simple defense against NDD attacks on stop signs and other objects with text_. ### RQ4: Do the non-robust features contribute to the natural attack capability? _Experimental setup._ Following the methodology of Ilyas et al. (Ilyas et al., 2019), we "robustify" the images with their method. We train not only the robustified classifier with the robustified dataset but also a normal classifier with the original dataset, again with 500 random images per class. ResNet50 (He et al., 2016) is the model architecture for both. _Results._ Table 4 shows the accuracy of the robust and normal classifiers on the stop sign images in the NDDA dataset. As shown, both classifiers have higher accuracy when the robust features exist, but the robust classifier's accuracy decreases more than the normal classifier's when all the robust features are removed. This means that there is a clear relationship between the sensitivity to the non-robust features and the natural attack capability. We thus consider that _the non-robust features play an important role in enabling the natural attack capability in the diffusion models._ In the physical-world evaluation against a commodity AD vehicle (Fig. 5), printed NDD attacks can successfully fool the commercial traffic sign system in 8 out of 11 cases, as the detected stop signs appear on the driver's monitor. This means that the _NDD attack has an attack success rate of 73%, which is a surprisingly high success rate considering that we do not perform any optimization processes to attack the AD vehicle_. All attacks are simply generated by diffusion models with simple text prompts. Furthermore, we did not have any special considerations when printing the attacks.
We just use a commodity printer, roughly stick the papers together with transparent tape, and use normal printing paper that is thin and translucent. ## Discussion and Limitation **Safety implications.** We demonstrate the attack effectiveness of the NDD attack against the commodity AD vehicle but also find that these attacks were not as robust at different distances. Thus, this vulnerability may not pose an immediate threat to fast-driving AD vehicles. However, the possibility of affecting a driving vehicle cannot be completely ruled out. We note that the effect of the attack is unlikely to be a coincidence, as these attacks are detected as stop signs even though they do not resemble legitimate stop signs at all. If the attack were reddish and octagonal, it could be a coincidence. However, it cannot be considered a coincidence that such an attack with blue, purple, green, or non-octagonal shapes is detected as a stop sign at such a high rate. Furthermore, the current NDD attack does not have any special optimization processes to attack the commodity AD vehicle. Prior works have already shown that we can integrate image generation models into attack optimization processes [1, 1]. We hope that our study can facilitate further research to assess the security threat of diffusion models. If this paper is accepted, we plan to conduct a responsible vulnerability disclosure to the vendor of the commodity AD vehicle. **Possible defenses.** As a possible defense for the stop sign attack, we can use OCR to detect the attack by recognizing whether the "STOP" text is on it or not. As discussed in RQ3, the generated images are likely to fail to generate the exact "STOP" text. However, such a defense is not trivial for classes other than stop signs, as no generic defense against adversarial attacks has been reported so far [1, 2, 10]. For physical-world attacks, PatchCleanser [11] shows high certifiable robustness against patch attacks, but it is not applicable to the NDD attack since the attack vector is not a small patch. As in RQ4, the "robustified" training [12] can improve the robustness, but it remains a mitigation measure, and thus further research efforts are needed in this area. **Evaluation of further diffusion models.** In this work, we focus on 3 popular diffusion models and 3 object classes with relatively high detection rates to perform deeper analysis on each RQ. Meanwhile, we keep updating the dataset with more diffusion models and object classes for the sake of the dataset's comprehensiveness. The latest version of the NDDA dataset includes 6 diffusion models with 15 object classes to benefit future studies. Current candidates for robust features are chosen to ensure removal (e.g., blue for the stop sign); future updates will include more variations for the sake of dataset comprehensiveness. In total, the NDDA dataset has 40,870 images. Each class has \(\geq\)50 images for the models with API access and \(\geq\)20 images for other models. We will release the entire dataset when this paper is published. **Ethical considerations.** We paid attention to potential ethical issues in our experiments. We have gone through the IRB process at our institution and manually ensured that the images in our user survey were not offensive. In the experiment on the commodity AD vehicle, we presented the attacks only against the parked AD vehicle and made sure that the attacks were not visible from public roads where other AD vehicles might be driving.
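As an illustration of the OCR-based check discussed under "Possible defenses" above, the sketch below flags a detected stop sign as suspicious when no readable text close to "stop" is found on it. It assumes the Keras-OCR pipeline used for the RQ3 analysis, and the normalized-distance threshold is a hypothetical choice rather than a value from this paper.

```python
import keras_ocr  # the OCR toolkit used for the RQ3 edit-distance analysis

# The pipeline bundles a pretrained text detector and recognizer.
pipeline = keras_ocr.pipeline.Pipeline()

def levenshtein(a: str, b: str) -> int:
    """Plain dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def stop_sign_text_check(image_path: str, max_norm_dist: float = 0.3) -> bool:
    """Return True if some recognized word on the sign is close enough to 'stop'."""
    image = keras_ocr.tools.read(image_path)
    words = [word.lower() for word, _box in pipeline.recognize([image])[0]]
    if not words:
        return False  # no readable text at all: treat as suspicious
    best = min(levenshtein(w, "stop") / max(len(w), len("stop")) for w in words)
    return best <= max_norm_dist

# A stop sign detection whose crop fails this check could be rejected or flagged for review.
```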
Figure 5: Successful NDD Attacks against a commodity AD vehicle. The caption of each image shows the used diffusion model and the text prompt.

## Conclusion We have identified a new security threat of the NDD attack that leverages the natural attack capability in the diffusion models. To systematically evaluate the impact, we construct a large-scale dataset, called the NDDA dataset, which contains images generated by state-of-the-art diffusion models that intentionally remove the robust features essential for the HVS. We demonstrate that the images without robust features, which should be essential for the object, are still detected as the original object and are stealthy to humans. For example, the stop signs with altered text are still detected as stop signs in 88% of cases but are stealthy to 93% of humans. We find that the non-robust features contribute to the natural attack capability. To evaluate the realizability of the NDD attack, we demonstrate the attack against a commodity AD vehicle and confirm that 73% of the NDD attacks are detected as stop signs. Finally, we discuss the implications and limitations of our research. We hope that our study and dataset can help our community to be aware of the risk of the natural attack capability of diffusion models and facilitate further research to develop robust DNN models. ## Acknowledgements This research was supported in part by the NSF CNS-1929771, CNS-2145493, and CNS-1932464.
2304.13136
Generating Molecular Fragmentation Graphs with Autoregressive Neural Networks
The accurate prediction of tandem mass spectra from molecular structures has the potential to unlock new metabolomic discoveries by augmenting the community's libraries of experimental reference standards. Cheminformatic spectrum prediction strategies use a "bond-breaking" framework to iteratively simulate mass spectrum fragmentations, but these methods are (a) slow, due to the need to exhaustively and combinatorially break molecules and (b) inaccurate, as they often rely upon heuristics to predict the intensity of each resulting fragment; neural network alternatives mitigate computational cost but are black-box and not inherently more accurate. We introduce a physically-grounded neural approach that learns to predict each breakage event and score the most relevant subset of molecular fragments quickly and accurately. We evaluate our model by predicting spectra from both public and private standard libraries, demonstrating that our hybrid approach offers state of the art prediction accuracy, improved metabolite identification from a database of candidates, and higher interpretability when compared to previous breakage methods and black box neural networks. The grounding of our approach in physical fragmentation events shows especially high promise for elucidating natural product molecules with more complex scaffolds.
Samuel Goldman, Janet Li, Connor W. Coley
2023-04-25T20:26:47Z
http://arxiv.org/abs/2304.13136v2
# Generating Molecular Fragmentation Graphs with Autoregressive Neural Networks ###### Abstract The accurate prediction of tandem mass spectra from molecular structures has the potential to unlock new metabolomic discoveries by augmenting the community's libraries of experimental reference standards. Cheminformatic spectrum prediction strategies use a "bond-breaking" framework to iteratively simulate mass spectrum fragmentations, but these methods are (a) slow, due to the need to exhaustively and combinatorially break molecules and (b) inaccurate, as they often rely upon heuristics to predict the intensity of each resulting fragment; neural network alternatives mitigate computational cost but are black-box and not inherently more accurate. We introduce a physically-grounded neural approach that learns to predict each breakage event and score the most relevant subset of molecular fragments quickly and accurately. We evaluate our model by predicting spectra from both public and private standard libraries, demonstrating that our hybrid approach offers state of the art prediction accuracy, improved metabolite identification from a database of candidates, and higher interpretability when compared to previous breakage methods and black box neural networks. The grounding of our approach in physical fragmentation events shows especially high promise for elucidating natural product molecules with more complex scaffolds. ## 1 Introduction Identifying unknown molecules in complex metabolomic or environmental samples is of critical importance to biologists [42], forensic scientists [34], and ecologists alike [5]. Tandem mass spectrometry, MS/MS, is the standard analytical chemistry method for analyzing such samples, favored for its speed and sensitivity [27]. In brief, MS/MS metabolomics experiments isolate, ionize, and fragment small molecules, resulting in a characteristic spectrum for each where peaks correspond to molecular sub-fragments (Fig. 1A). Importantly, these experiments are high throughput, leading to thousands of detected spectra per single experiment for complex samples such as human serum. The most straightforward way to identify an unknown molecule from its fragmentation spectrum is to compare the spectrum to a library of known standards [3]. However, spectral libraries only contain on the order of \(10^{4}\) compounds--a drop in the bucket compared to the vast size of biologically-relevant chemical space, oft cited as large as \(10^{60}\)[21]. Of the many tandem spectra deposited into a large community library, 87% still cannot be annotated [3]. The accurate prediction of mass spectra from molecular structures would enable these libraries to be augmented with hypothetical compounds and significantly advance the utility of mass spectrometry for structural elucidation. This paradigm of comparing unknown spectra to putative spectra is well established in the adjacent field of proteomics due to the ease of predicting protein fragmentations [13]. Because tandem mass spectrometry experiments physically break covalent bonds in a process known as "collision-induced-dissociation" (CID) to create fragments, simulating such fragmentation events computationally is a natural strategy for prediction. 
Tools from the last decade including MetFrag [43], MAGMa [33], and CFM-ID [2, 37] use fragmentation rules (based on removing atoms or bonds) and local scoring methods to (a) enumerate molecular fragmentation trees and (b) estimate the intensity at each node in the tree with a mix of heuristic rules and statistical learning (Fig. 1B). However, these combinatorial methods are computationally demanding and often make inaccurate predictions by _overestimating_ the possible fragments (Fig. 1B, bottom). We recently found CFM-ID to be far less accurate than black-box neural networks [16], an observation separately confirmed by Murphy et al. [26]. Further, current learned fragmentation models are not easily adapted or scaled to new datasets; Murphy et al. estimate it would take the leading fragmentation approach, CFM-ID [2], approximately three months on a 64-core machine to train on a ~300,000 spectrum dataset. Alternative strategies that utilize black box neural networks to predict MS/MS spectra have been attempted. They encode an input molecule (i.e., as a fingerprint, graph, or 3D structure) and predict either a 1D binned representation of the spectrum [40, 45, 17, 46], or a set of output formulae corresponding to peaks in the spectrum [16, 26, 47]. While we have demonstrated that predicting chemical formulae provides a fast, accurate, and interpretable alternative to binned representation approaches [16], the improved accuracy surprisingly did not directly translate to better database retrieval for complex natural product molecules contained within the Global Natural Products Social (GNPS) database [38]. We hypothesized that combining the flexibility of neural networks to learn from experimental MS/MS data in reference libraries with the structural bias of combinatorial fragmentation approaches could lead to increased prediction performance on complex natural product molecules. Herein, we introduce a hybrid strategy for simulating molecular fragmentation graphs using neural networks, _Inferring Collision-induced-dissociation by Estimating Breakage Events and Reconstructing their Graphs_ (ICEBERG). ICEBERG is a two-part model that simulates probable breakage events (Generate) and scores the resulting fragments using a Transformer architecture (Score) (Fig. 1C; details in Fig. 2). Our core computational contribution is to leverage previous exhaustive cheminformatics methods for the same task, specifically MAGMa [33], in order to build a training dataset, from which our model learns to make fast estimates prioritizing only likely bond breakages. In doing so, we lift MAGMa and previous bond-breaking approaches into a neural network space with demonstrable benefits in performance.

Figure 1: ICEBERG enables the prediction of tandem mass spectra by efficiently navigating the space of possible fragmentation events. **A.** Example experimental mass spectrum. An input molecule, benzocaine, is depicted entering a mass spectrometer collision cell and fragmenting. The observation of the resulting charged fragments results in a characteristic spectrum. **B.** A combinatorial mass spectrum simulation. The root molecule, benzocaine, is iteratively fragmented by removing atoms or breaking bonds, resulting in a large fragmentation tree. Heuristic rules score nodes in the tree to predict intensities. **C.** ICEBERG spectrum simulation. ICEBERG learns to generate only the most relevant substructures. After generating fragments, a neural network module scores the resulting fragments to predict intensities.
We evaluate ICEBERG on two datasets: NPLIB1 (GNPS data [38] as used to train the CANOPUS model [9]) and NIST20 [28], which test the model's ability to predict both complex natural products and small organic standard molecules, respectively. We find that ICEBERG increases the cosine similarity of predicted spectra by over 0.09, a 17% improvement over a recent state of the art method, on NPLIB1 data. When used to identify molecules in retrospective retrieval studies, ICEBERG leads to 47% and 10% improvements in top 1 retrieval accuracy on the two datasets compared to the next best model tested. ICEBERG is fully open-sourced with pretrained weights alongside other existing prediction baseline methods available on GitHub at [https://github.com/samgoldman97/ms-pred](https://github.com/samgoldman97/ms-pred). ## 2 Results ### ICEBERG is trained as a two-stage generative and scoring model Learning to generate likely substructures. ICEBERG simulates a mass spectrum by generating the substructure fragments from an initial molecule that are most likely to be generated by collision induced dissociation and subsequently measured in the mass spectrometer. We define an input molecule \(\mathcal{M}\) (benzocaine example shown in Fig. 2A) and its observed spectrum \(\mathcal{Y}\), which is a set of intensities at various mass-to-charge values (m/z), termed peaks. Each peak represents one or more observed molecular fragments. A core question is then _how to generate the set of potential fragments_. These fragments can be sampled from the many possible substructure options, \(\mathcal{S}^{(i)}\in(N^{(i)},E^{(i)})\subseteq\mathcal{M}\), where the set of nodes and edges in substructures are subsets of the atoms and bonds in the original molecule, \(\mathcal{M}\in(N,E)\).

Figure 2: Overview of ICEBERG. **A.** The target fragmentation directed acyclic graph (DAG) for an example molecule \(\mathcal{M}\), benzocaine. Fragments are colored in black with missing substructures in gray. **B.** Example illustration for the generative process at a single step in the DAG generation predicting subfragments of \(\mathcal{S}^{(2)}\). The root molecule \(\mathcal{M}\), fragment of interest \(\mathcal{S}^{(2)}\), and context vector \(C\) are encoded and used to predict fragment probabilities at each atom of the fragment of interest. A sample disconnection is shown at atom \(a_{2}\), resulting in fragment \(\mathcal{S}^{(7)}\). **C.** ICEBERG Score module. Fragments generated from **A** are encoded alongside the root molecule. A Set Transformer module predicts intensities for each fragment, allowing mass changes corresponding to the loss or gain of hydrogen atoms, resulting in the final predicted mass spectrum.

Most often, this sampling is accomplished by iteratively and exhaustively removing edges or atoms from the molecular graph, creating a fragmentation graph \(\mathcal{T}\in(\mathcal{S},\mathcal{E})\), where all the nodes in this graph are themselves substructures of the original molecule \(\mathcal{S}=\{\mathcal{S}^{(0)},\mathcal{S}^{(1)},\ldots,\mathcal{S}^{(|\mathcal{T}|)}\}\) ([2, 33, 43]) (Fig. 1B). However, such a combinatorial approach leads to _thousands_ of molecular fragments, making this procedure slow and complicating the second step of estimating intensity values for all enumerated fragments. We eschew combinatorial generation and instead leverage a graph neural network to parameterize breakage events of the molecule, defining the Generate module of ICEBERG (Fig. 2A,B).
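For intuition about the scale of that combinatorial space, the sketch below enumerates one round of MAGMa-style atom removals with RDKit. It is a rough illustration under simplifying assumptions (kekulized input, no charge or hydrogen bookkeeping), not the exhaustive multi-level enumeration performed by the cited tools or by the training-DAG construction described later.

```python
from rdkit import Chem

def atom_removal_children(mol, min_heavy_atoms=2):
    """One level of MAGMa-style fragmentation: remove each atom in turn and
    collect the resulting connected substructures with > min_heavy_atoms heavy atoms."""
    children = set()
    for atom in mol.GetAtoms():
        editable = Chem.RWMol(mol)
        editable.RemoveAtom(atom.GetIdx())
        for piece in Chem.GetMolFrags(editable.GetMol(), asMols=True, sanitizeFrags=True):
            if piece.GetNumHeavyAtoms() > min_heavy_atoms:
                children.add(Chem.MolToSmiles(piece))
    return children

# Benzocaine, the example molecule in Figs. 1 and 2.
root = Chem.MolFromSmiles("CCOC(=O)c1ccc(N)cc1")
Chem.Kekulize(root, clearAromaticFlags=True)  # avoid aromatic-flag issues once rings are broken
level1 = atom_removal_children(root)
print(len(level1))  # already a sizable set of fragments after a single round of removals
```

Repeating such removals recursively for every child fragment is what produces the thousands of candidates referred to above, which motivates learning to propose only the likely breakages instead.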
Generate predicts the fragmentation graph iteratively, beginning with just the root of the graph \(\mathcal{S}^{(0)}=\mathcal{M}\), borrowing ideas from autoregressive tree generation [4, 14]. At each step in iterative expansion, the model \(\beta_{\theta}^{\texttt{generate}}\) assigns a probability of fragmentation to each atom \(j\) in the current substructure fragment \(\mathcal{S}^{(i)}\), \(p(F[\mathcal{S}^{(i)}_{j}])\). Learned atom embeddings are concatenated alongside embeddings of the root molecule and a context vector \(C\) containing metadata such as the ionization adduct type in order to make this prediction. An illustrative example can be seen for fragment \(\mathcal{S}^{(2)}\) in Figure 2B. Atom \(a_{2}\) has the highest predicted probability, so this atom is then removed from the graph, leading to the subsequent child node \(\mathcal{S}^{(7)}\) (Fig. 2B). Importantly, the number of child fragments is determined by how many disjoint molecular graphs form upon removal of the \(j^{th}\) node from the molecular graph; in this example, fragments \(\mathcal{S}^{(1)}\) and \(\mathcal{S}^{(4)}\) originate from the same fragmentation event of \(\mathcal{S}^{(0)}\) (Fig. 2A). In this way, ICEBERG predicts breakages at the level of each _atom_, following the convention of MAGMa [33] rather than each _bond_ as is the convention with CFM-ID [2]. We strategically use this abstraction to ensure that all fragmentation events lead to changes in heavy-atom composition. We refer the reader to Methods §4.3 for a full description of the model \(\beta_{\theta}^{\texttt{generate}}(\mathcal{M},\mathcal{S}^{(i)},C)_{j}\), graph neural network architectures, and context vector inputs. While this defines a neural network for generation, we must also specify an algorithm for how to _train_ this network. Spectral library datasets contain only molecule and spectrum pairs, but not the directed acyclic graph (DAG) \(\mathcal{T}\) of the molecule's substructures that generated the spectrum. We infer an explanatory substructure identity of each peak for model training by leveraging previous combinatorial enumeration methods, specifically MAGMa [33]. For each training molecule and spectrum pair, \((\mathcal{M},\mathcal{Y})\), we modify MAGMa to enumerate all substructures of \(\mathcal{M}\) up to a depth of \(3\) sequential fragmentation events. We filter enumerated structures to include only those with m/z values appearing in the final spectrum, thereby defining a dataset suitable for training ICEBERG Generate (§4.2). As a result, each paired example \((\mathcal{M},\mathcal{Y})\) in the training dataset is labeled with an estimated fragmentation DAG. Generate learns from these DAGs to generate only the most relevant and probable substructures for a molecule of interest (§4.3). Predicting substructure intensities. After generating a set of potential substructure fragments, we employ a second module, ICEBERG Score, to predict their intensities (Fig. 2C). Importantly, this design decision enables our models to consider two important physical phenomena: (i) neutral losses and (ii) mass shifts due to hydrogen rearrangements and isotope effects. Because we elect to fragment molecules at the level of atoms (§4.3), multiple substructures can result from a single fragmentation event.
In physical experiments, not all of these substructure fragments will be observed; when fragmentation events occur in the collision cell, one fragment often retains the charge of the parent while the other is uncharged and therefore undetected, termed a "neutral loss". By deferring prediction of intensities to a second module, Generate need not predict or track whether structures are ionized, greatly reducing the complexity of the fragmentation DAG. In addition to the occurrence of neutral losses, molecules often undergo complex rearrangements in the collision cell, leading to bond order promotions or reductions (e.g., spurious formation of double bonds when a single bond breaks to maintain valence), the most classic of which is the McLafferty rearrangement [6, 25]. While other approaches attempt to model and estimate where these rearrangements occur using hand-crafted rules [2], we instead adopt the framework of Ridder et al. [33] to consider hydrogen tolerances. That is, for each generated molecular substructure \(\mathcal{S}^{(i)}\) we consider the possibility that this fragment is observed not only at its mass, but also at masses shifted by discrete hydrogen masses, \(\pm\delta\)H. This design choice also simplifies Generate by deferring specification of hydrogen counts to the second model. In addition to accounting for a mass shift of 1 hydrogen, such flexibility also allows the model to predict the common M+1 isotopes for carbon- and nitrogen-containing compounds. Mathematically, we define a neural network, \(g_{\theta}^{\texttt{Score}}\), that predicts multiple intensities for each fragment \(\hat{y}_{\delta}^{(i)}\) corresponding to different hydrogen shifts, \(\delta\): \[\hat{y}_{\delta}^{(i)}=g_{\theta}^{\texttt{Score}}(\mathcal{M},\mathcal{S}^{(i)},\mathcal{T},C)_{\delta} \tag{1}\] In practice, we predict up to \(13\) intensities at each fragment (i.e., \(\{+0\text{H},\pm 1\text{H},\ldots,\pm 6\text{H}\}\)). For each individual subfragment, the tolerance is further restricted to the number of bonds broken, most often less than \(6\). We then take the masses of all fragments, perturb them by the corresponding hydrogen or isotope shifts, and aggregate them into a set of unique m/z peaks by summing the intensities of perturbed fragments with the same m/z value. To consider all fragments simultaneously in a permutation-invariant manner, \(g_{\theta}^{\texttt{Score}}\) is parameterized as a Set Transformer network [22, 36]. We train this second module to maximize the cosine similarity between the ground truth spectrum and the predicted spectrum after converting the set of substructures and intensities to m/z peaks. At test time, we generate the top 100 most likely fragments from ICEBERG Generate and predict intensities for these fragments and their possible hydrogen shifts using ICEBERG Score. We find this tree size allows our model to consider sufficiently many potential fragments while maintaining a speed advantage over previous fragmentation approaches. ### ICEBERG enables highly accurate spectrum prediction We evaluate ICEBERG on its ability to accurately simulate positive ion mode mass spectra for both natural product like molecules and smaller organic molecules under 1,500 Da. Using the data cleaning pipeline from [16], we compile a public natural products dataset NPLIB1 with 10,709 spectra (8,533 unique structures) [9, 15, 38] as well as a gold standard chemical library NIST20 with 35,129 spectra (24,403 unique structures) [28].
We note that NPLIB1 was previously named 'CANOPUS', renamed here to disambiguate the data from the tool CANOPUS [9]. Both datasets are split into structurally disjoint 90%/10% train-test splits, with 10% of the training data reserved for model validation (§4.1). To measure performance, we calculate the average cosine similarity between each predicted spectrum and the true spectrum, as cosine similarity is widely used to cluster mass spectra in molecular networking [29]. We find that ICEBERG outperforms the next best state of the art, SCARF, on the natural product focused dataset (Fig. 3A; Table 2). ICEBERG achieves an average cosine similarity of 0.628, compared to SCARF with a cosine similarity of 0.534, an especially large margin of improvement. Surprisingly, however, this boost in performance does not extend to the gold standard dataset, NIST20. ICEBERG, while still outperforming binned spectrum prediction approaches (i.e., NEIMS [40]) on this dataset, is on par with SCARF (0.707 v. 0.713) [16].

Figure 3: ICEBERG predictions are highly accurate. **A.** Cosine similarities to true spectra on NPLIB1 (left) and NIST20 (right), respectively, for CFM-ID [2], NEIMS (FFN) [40], NEIMS (GNN) [16, 40], SCARF [16], and ICEBERG. **B.** Time required to predict spectra for 100 molecules randomly sampled from NIST20 on a single CPU, including the time to load models into memory. **C,D.** Comparison of NPLIB1 and NIST20 molecules in terms of synthetic accessibility (SA) score [10] and molecular weight (Mol. weight).

Still, our model performs substantially better than CFM-ID and uses only a fraction of the computational resources (Fig. 3B). Unlike previous physically inspired models, because ICEBERG only samples the most relevant fragments from chemical space, it requires just over 1 CPU second per spectrum. We hypothesize that the discrepancy in performance improvement between NPLIB1 and NIST20 may be partially explained by differences in the chemical spaces they cover. Many molecules within NPLIB1 are natural products with more complicated chemical scaffolds. To characterize this, we analyzed the distributions for both the synthetic accessibility (SA) score [10; 18] (Fig. 3C) and molecular weight (Fig. 3D), both proxies for molecular complexity. In concordance with our hypothesis, we find that SA scores and molecular weight are substantially higher on NPLIB1 than NIST20: NPLIB1 has an average SA score of 3.75, compared to 3.01 for NIST20; the datasets have average molecular weights of 413 Da and 317 Da, respectively.

Figure 4: Examples of predicted spectra from ICEBERG. Predictions are shown as generated by ICEBERG trained on NPLIB1 for select test set examples GNPS:CCMSLIB00003137969 (**A**), GNPS:CCMSLIB000000853015 (**B**), and GNPS:CCMSLIB00000080524 (**C**). The input molecular structures are shown (left); fragmentation spectra are plotted (right) with predictions (top, blue) and ground truth spectra (bottom, black). Molecular fragments are shown inset. Spectra are plotted with m/z shifted by the mass of the precursor adduct. All examples shown were not included in the model training set.

### Model explanations of observed peaks are consistent with chemistry intuition In addition to accurate predictions, a key benefit of simulating fragmentation events is that predictions are interpretable, even for highly complex molecules. Each predicted peak from ICEBERG is directly attributed to a fragment of the predicted molecule. By inspecting certain patterns and examples, we find expected broken bonds.
Weaker bonds such as carbon-oxygen and carbon-nitrogen bonds tend to break more reliably than carbon-carbon bonds and more complex ring breakages (Fig. 4A). Similar patterns can be seen in more complex example molecules, in which ICEBERG predicts the loss of an acetoxy group in order to explain the highest intensity peak in Fig. 4B and various fragmentations around the central ether or iminol (in equilibrium with its amide form) to explain the many high intensity peaks in Fig. 4C. Further alignment can also be seen within the intensity prediction module. Because ICEBERG predicts multiple intensities for each substructure corresponding to hydrogen shifts, up to 3 peaks can be present when a single bond breaks. In the fragmentation example of Figure 4A, the most intense peak is estimated at a mass shift of \(-1\)H from the original fragment, indicating that ICEBERG correctly recognizes that the hydroxyl group will likely leave as neutral H\({}_{2}\)O and result in a hydrogen rearrangement. ### Fragmentation simulations lead to improved structural elucidation In addition to improved accuracy on predicting spectra, we next demonstrate that ICEBERG improves the structural elucidation of unknown molecules using reference libraries of model-predicted spectra. We design a retrospective evaluation using our labeled data to resemble the prospective task of spectrum lookup within libraries. For each test spectrum, we extract up to 49 "decoy" isomers from PubChem [19] with the highest Tanimoto similarity to the true molecular structure. The consideration of up to 50 isomers mimics the realistic elucidation setting, as an unknown spectrum can yield clues regarding certain properties of its source molecule (e.g., computed using NIST [15], CSI:FingerID [8], or molecular networking [29]), which narrows the chemical space of possible molecules to a smaller, more relevant set. We predict the fragmentation spectrum for each isomer and, for each model, we rank these possible matches by their spectral similarity to the spectrum of interest and compute how often the true molecule is found within the _top k_ ranked isomers for different values of \(k\). We find that ICEBERG improves upon the next best model by a margin of 10% accuracy (a nearly 50% relative improvement) in _top 1_ retrieval accuracy for the NPLIB1 dataset (Fig. 5A; Table 4). Previous models with high spectrum prediction accuracies have struggled on this task due to their poor ability to differentiate structurally similar isomers [16]. Our structure-based model appears to excel in retrieval and may have out-of-domain robustness beneficial to this task. We observe a similar effect in top 1 retrieval accuracy on the NIST20 dataset, in which ICEBERG outperforms SCARF by an absolute margin of over 2%, a 10% relative improvement, with an even larger absolute improvement at top 10 accuracy (76.5% vs. 70.3%) (Fig. 5B, Table 3). These results underscore the real world utility of ICEBERG to identify unknown molecules of interest. ### Challenging, non-random data splits better explain retrieval performance The strong performance on the retrieval task suggests that ICEBERG is able to generalize well to decoys not appearing in the training set and to account for how structural changes should affect fragmentation patterns. While encouraging, we observed no increase in cosine similarity accuracy when predicting spectra using NIST20 (Fig. 3, Table 2).
Figure 5: ICEBERG enables improved spectrum retrieval on both NPLIB1 (**A**) and NIST20 (**B**) compared to other spectrum prediction models.

To try to explain this apparent discrepancy, we reevaluate prediction accuracy on a more challenging dataset split. We retrain all models on NIST20 utilizing a Murcko scaffold split of the data [44] with smaller scaffold clusters (i.e., more unique compounds) placed in the test set. This split enforces that molecules in the test set will be more distant and less similar to the training set, probing the ability of each model to generalize in a more stringent setting than our previous random split. In the stricter scaffold split evaluation, the improved accuracy of ICEBERG over existing models is striking (Table 1). While the relative ordering still remains between NEIMS [40] and SCARF [16], we find that ICEBERG outperforms SCARF by 0.03, equivalent to the difference between SCARF and NEIMS (GNN). These results suggest that, particularly for standard libraries with more homogeneous molecules, more challenging scaffold split evaluations may yield performance metrics that better correlate with performance on the structural elucidation problem (retrieval).

\begin{table} \begin{tabular}{l l l} \hline \hline NIST20 & \multicolumn{2}{c}{Cosine sim.} \\ \cline{2-3} & Random split & Scaffold split \\ \hline CFM-ID & 0.371 & 0.401 \\ NEIMS (FFN) & 0.614 & 0.548 \\ NEIMS (GNN) & 0.689 & 0.639 \\ SCARF & **0.713** & 0.665 \\ \hline \hline ICEBERG & 0.707 & **0.691** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparing the accuracy of spectrum prediction on NIST20 using random (easier) or scaffold (harder) splits.

## 3 Discussion We have proposed a physically-grounded mass spectrum prediction strategy we term ICEBERG. From a computational perspective, this integration of neural networks into fragmentation prediction is enabled by (a) bootstrapping MAGMa to construct fragmentation trees on which our model is trained, (b) posing the tree generation step as a sequential prediction over atoms, and (c) predicting multiple intensities at each generated fragment with a second module in order to account for hydrogen rearrangements and isotopic peaks. By learning to generate fragmentation events, ICEBERG is able to accurately predict mass spectra, yielding especially strong improvements for natural product molecules under evaluation settings of both spectrum prediction and retrieval. ICEBERG establishes new state of the art performance for these tasks, yet there are some caveats we wish to highlight. First, while we learn to generate molecular substructures to explain each peak, there are no guarantees that they are the correct physical explanations given the number of potential equivalent-mass atom and bond rearrangements that could occur. Second, while we achieve increased accuracy, this comes at a higher computational cost of roughly 1 CPU second per molecule, nearly an order of magnitude more than other neural approaches like SCARF [16]. Future work will consider more explicitly how to synergize fragment- and formula-prediction approaches to achieve higher accuracy and speed. In addition to model architecture modifications, we anticipate model accuracy improvements from modeling other covariates such as collision energy, instrument type, and even jointly modeling MS/MS with other analytical chemistry measurements such as FTIR [12]. The discovery of unknown metabolites and molecules is rapidly expanding our knowledge of potential medical targets [31], the effects of environmental toxins [35], and the diversity of biosynthetically accessible chemical space [7]. We envision exciting possibilities to apply our new model to expand the discovery of novel chemical matter from complex mixtures.
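For reference, the cosine similarities reported above are computed on binned spectra (0.1 m/z bins over 0 to 1,500 Da, as noted in the Baselines subsection below). The sketch that follows shows one way to perform such a comparison; the peak-list inputs are hypothetical placeholders rather than model outputs.

```python
import numpy as np

def binned_cosine_similarity(pred_peaks, true_peaks, bin_size=0.1, max_mz=1500.0):
    """Cosine similarity between two peak lists of (m/z, intensity) pairs after binning."""
    n_bins = int(max_mz / bin_size) + 1

    def to_vector(peaks):
        vec = np.zeros(n_bins)
        for mz, intensity in peaks:
            if 0.0 <= mz <= max_mz:
                vec[int(mz / bin_size)] += intensity
        return vec

    a, b = to_vector(pred_peaks), to_vector(true_peaks)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

# Hypothetical predicted and ground-truth peak lists for a single spectrum.
predicted = [(121.06, 0.8), (93.07, 0.3), (65.04, 0.1)]
observed = [(121.07, 1.0), (93.07, 0.25)]
print(round(binned_cosine_similarity(predicted, observed), 3))
```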
## 4 Methods ### Datasets We train our models on two datasets: NIST20 [28], generated by the National Institute of Standards and Technology, and NPLIB1, extracted from the GNPS database [38] and prepared previously by Duhrkop et al. and Goldman et al. For each spectrum in the dataset, we first merge all scans at various collision energies, combine peaks that are within \(10^{-4}\) m/z tolerance from each other, renormalize the resulting spectrum by dividing by the maximum observed intensity, and take the square root of each intensity. We subset the resulting spectrum to keep the top 50 peaks with intensity above \(0.003\). This normalization process is identical to our previous work [16] and emphasizes (a) removing peaks that are likely noise and (b) combining various collision energies. We refer the reader to [16] for exact details on dataset extraction. To further normalize the dataset, for each spectrum, we subtract the mass of the adduct ion from each resulting MS2 peak. Concretely, the precursor molecule is ionized with an adduct ion, for instance, "H+". In this case, the mass of each peak in the spectrum is shifted by the mass of "H+" before proceeding further. In doing so, we normalize against different ionizations. While adduct switching is possible, we note that this is a rare phenomenon and can be easily interchanged at the data preprocessing step. We make the simplifying assumption that all peaks are singly charged and use mass and m/z interchangeably. Ultimately, each spectrum \(\mathcal{Y}\) can be considered a set of mass, intensity tuples, \(\mathcal{Y}=\{(m_{0},y_{0}),(m_{1},y_{1}),\ldots,(m_{|\mathcal{Y}|},y_{|\mathcal{Y}|})\}\). ### Canonical DAG construction We build a custom re-implementation of the MAGMa algorithm [33] to help create explanatory DAGs for each normalized and adduct-shifted spectrum. Given an input molecule \(\mathcal{M}\), MAGMa iteratively breaks each molecule by removing atoms. Each time an atom is removed, multiple fragments may form, from which we keep all fragments of \(>2\) heavy (non-hydrogen) atoms. To prevent combinatorial explosion of DAG nodes, we use a Weisfeiler-Lehman isomorphism test [41] to generate a unique hash ID of each generated fragment and reject new fragments with hash IDs already observed. When conducting this test, to remain insensitive to _how_ this fragment originated, we hash only the atom identities and bonds in the fragment graph, _not the number of hydrogen atoms_. For instance, consider an ethane fragment in which the terminal carbon was originally double-bonded to a single neighboring atom in the precursor molecule compared to an ethane fragment in which the terminal carbon was single-bonded to two adjacent atoms in the original precursor: our approach applies the same hash ID to both fragments. The chemical formula and hydrogen status for the fragment are randomly selected from the fragments that required the _minimal_ number of atom removals.
Each fragment corresponds to multiple potential m/z observations due to the allowance for hydrogen shifts equal to the number of broken bonds. After creating the fragmentation graph for \(\mathcal{M}\), a subset of the fragments are selected to explain each peak in \(\mathcal{Y}\), using the minimum mass differences of under 10 parts-per-million as the primary filter and the minimal MAGMa heuristic score as a secondary filter. We include nodes along all paths back to the root molecule for each selected fragment. To prune the DAG to select only the most likely paths to each fragment, we design a greedy heuristic. Starting from the lowest level of the DAG, we iteratively select the parent nodes for inclusion into the final DAG that "cover" the highest number of peak-explaining nodes. Finally, the "neutral loss" fragments are added into the DAG, as they provide useful training signals for ICEBERG Generate to learn when to stop fragmenting each molecule. ### Model details DAG generation prediction. Using the ground truth DAG as described above, we train a neural network, ICEBERG Generate, to reconstruct the DAG from an input molecule and adduct type. Concretely, our model learns to predict for each fragment, \(\mathcal{S}^{(i)}\), the probability that it will fragment at the \(j^{th}\) atom: \[p\left(\mathcal{F}[\mathcal{S}^{(i)}_{j}]|\mathcal{S}^{(i)},\mathcal{M},C\right)=g^{\texttt{Generate}}_{\theta}(\mathcal{M},\mathcal{S}^{(i)},C)_{j} \tag{2}\] To make this atom-wise prediction, we encode information about the root molecule, fragment molecule, their difference, their respective chemical formulae, the adduct, and the number of bonds that were broken between the root molecule and fragment. To embed the root molecule, we utilize a gated graph neural network [23], \(\mathsf{GNN}(\mathcal{M})\), where either average or weighted summations are used to pool embeddings across atoms (specified by a hyperparameter). We utilize the same network to learn representations of the fragment, \(\mathsf{GNN}(\mathcal{S}^{(i)})\), and define \(\mathsf{GNN}(\mathcal{S}^{(i)})_{j}\) as the graph neural network-derived embedding of fragment \(i\) at the \(j^{th}\) atom prior to the pooling operation. For all graph neural networks, a one-hot encoding of the adduct type is also added as atom-wise features alongside the bond types and atom types. We define the chemical formula \(f\) for each DAG fragment and specify an encoding, \(\mathsf{Enc}\), using the Fourier feature scheme defined in [16]. We encode the root and \(i^{th}\) node of the fragmentation DAG as \(\mathsf{Enc}(f_{0})\) and \(\mathsf{Enc}(f_{i})\), respectively. Lastly, we define a one-hot vector for the number of bonds broken, \(b\). All the encodings described above are concatenated together, and a shallow multilayer perceptron (MLP) ending with a sigmoid function is utilized to predict binary probabilities of fragmentation at each atom. \[\begin{split} p\left(\mathcal{F}[\mathcal{S}_{j}^{(i)}]|\mathcal{S}^{(i)},\mathcal{M},C\right)=\mathsf{MLP}\Big{(}&[\mathsf{GNN}(\mathcal{M}),\mathsf{GNN}(\mathcal{M})-\mathsf{GNN}(\mathcal{S}^{(i)}),\\ &\mathsf{GNN}(\mathcal{S}^{(i)})_{j},\mathsf{Onehot}(b),\mathsf{Enc}(f_{i}),\mathsf{Enc}(f_{0}-f_{i})]\Big{)}\end{split} \tag{3}\] The model is trained to maximize the probability of generating the DAG by minimizing the binary cross entropy loss over each atom for every fragment in an observed spectrum.
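To make Eq. 3 concrete, the sketch below mocks up the atom-wise fragmentation head as a small PyTorch module. The embedding dimensions, hidden size, and the random stand-in inputs are hypothetical, and the gated GNN and Fourier formula encoders that would produce the real inputs are assumed to exist elsewhere; this is an illustrative sketch rather than the released implementation.

```python
import torch
import torch.nn as nn

class AtomFragmentationHead(nn.Module):
    """Rough sketch of Eq. 3: per-atom fragmentation probabilities from concatenated features."""

    def __init__(self, d_mol=64, d_atom=64, d_formula=32, max_broken=6, hidden=256):
        super().__init__()
        in_dim = 2 * d_mol + d_atom + (max_broken + 1) + 2 * d_formula
        self.max_broken = max_broken
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, root_emb, frag_atom_embs, frag_emb, n_broken,
                frag_formula_enc, loss_formula_enc):
        # root_emb, frag_emb: pooled GNN embeddings of the root molecule and fragment
        # frag_atom_embs: (n_atoms, d_atom) pre-pooling atom embeddings of the fragment
        bonds_onehot = torch.zeros(self.max_broken + 1)
        bonds_onehot[min(n_broken, self.max_broken)] = 1.0
        context = torch.cat([root_emb, root_emb - frag_emb, bonds_onehot,
                             frag_formula_enc, loss_formula_enc])
        context = context.unsqueeze(0).expand(frag_atom_embs.shape[0], -1)
        logits = self.mlp(torch.cat([context, frag_atom_embs], dim=-1)).squeeze(-1)
        return torch.sigmoid(logits)  # probability of fragmenting at each atom

# Toy usage with random stand-in embeddings for a 12-atom fragment.
head = AtomFragmentationHead()
probs = head(torch.randn(64), torch.randn(12, 64), torch.randn(64), 1,
             torch.randn(32), torch.randn(32))
targets = torch.zeros(12)  # 1.0 at atoms whose removal appears in the reference DAG
loss = nn.functional.binary_cross_entropy(probs, targets)
```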
DAG intensity prediction. The trained Generate module is used to generate DAGs for each input molecule in the training set. In this generation step, molecules are iteratively fragmented beginning with the root \(\mathcal{M}\), and the probability of each fragment is computed autoregressively. We define the node indices for an ordering from each fragment \(\mathcal{S}^{(i)}\) back to the root node through its highest likelihood path \(\pi[i]\), where \(\pi[i,j]\) defines the \(j^{th}\) node on this factorization path. \[p(\mathcal{S}^{(i)}|\mathcal{M},C)=p(\mathcal{S}^{(i)}|\mathcal{S}^{(\pi[i,1])},\mathcal{M},C)\prod_{j=1}^{|\pi[i]|}p(\mathcal{S}^{(\pi[i,j])}|\mathcal{S}^{(\pi[i,j+1])},\mathcal{M},C) \tag{4}\] At each step, we maintain only the top \(100\) most likely fragments in the DAG as a practical consideration until reaching the maximum possible fragmentation depth. To further reduce complexity in the inference step, we maintain the highest scoring isomer from the DAG. This resulting set of fragments is featurized and passed to a Set Transformer module to generate output values at each fragment. Following the notation from the generative model, we featurize each individual fragment with a shallow MLP to generate hidden representations, \(h_{i}\): \[\begin{split} h_{i}=\mathsf{MLP}\Big{(}&[\mathsf{GNN}(\mathcal{M}),\mathsf{GNN}(\mathcal{M})-\mathsf{GNN}(\mathcal{S}^{(i)}),\mathsf{GNN}(\mathcal{S}^{(i)}),\\ &\mathsf{Onehot}(b),\mathsf{Enc}(f_{i}),\mathsf{Enc}(f_{0}-f_{i})]\Big{)}\end{split} \tag{5}\] These are subsequently jointly embedded with a Transformer module and used to predict unnormalized intensity weights at each possible hydrogen shift \(\delta\) alongside an attention weight \(\alpha\) to determine how heavily to weight each prediction for its specified hydrogen shift. To compute the attention weight, we take a softmax over all prediction indices that fall into the same mass bin (0.1 resolution), \(M(i,\delta)\): \[\hat{y}_{\delta}^{(i)}=\mathsf{MLP}_{inten}\Big{(}\mathsf{Transformer}(h_{0},h_{1},h_{2},\ldots,h_{|\mathcal{T}|})_{i}\Big{)}_{\delta}, \tag{6}\] \[\alpha_{\delta}^{(i)}=\mathsf{Softmax}_{k\in M(i,\delta)}\left(\mathsf{MLP}_{attn}\left(\mathsf{Transformer}(h_{0},h_{1},h_{2},\ldots,h_{|\mathcal{T}|})_{k}\right)\right)_{i,\delta} \tag{7}\] The final intensity prediction for the bin at mass \(m\) is then a weighted sum over all predictions that fall within this mass bin, followed by a sigmoid activation function: \[\hat{y}_{m}=\sigma\Big{(}\sum_{i}\sum_{\delta}\alpha_{\delta}^{(i)}\hat{y}_{\delta}^{(i)}\mathcal{I}[M(i,\delta)=m]\Big{)} \tag{8}\] The model is trained to maximize the cosine similarity between the predicted spectrum and ground truth spectrum. Model training. All models are implemented and trained using PyTorch Lightning [11], the Adam optimizer [20], and the DGL library [39]. Ray [24] is used to complete hyperparameter optimizations over all models and baselines. Models are trained on a single RTX A5000 NVIDIA GPU (CUDA Version 11.6) in under \(3\) hours for each module. A complete list of hyperparameters and their definitions can be found in Appendix §A.3. ### Baselines For model baselines, we utilize splits, hyperparameter training, and numbers as generated by [16]. We include baselines for binned prediction models from Wei et al.
that directly predict binned spectra from either molecular fingerprints or graphs, our previous formula prediction model SCARF [16], and a previous fragmentation model CFM-ID [2] with the same procedure as [16]. All model predictions are transformed into binned representations for fair evaluation at a bin resolution of \(0.1\) from mass \(0\) to \(1,500\) Da. ### Code and data availability All code is made available at [https://github.com/samgoldman97/ms-pred](https://github.com/samgoldman97/ms-pred), alongside pre-trained models on publicly accessible data. ## Acknowledgements We thank John Bradshaw, Priyanka Raghavan, David Graff, Fanwang Meng, other members of the Coley Research Group, and Michael Murphy for helpful discussions, as well as Lucas Janson for feedback on earlier iterations of this idea. We thank Mingxun Wang for feedback and helpful suggestions regarding both the method and manuscript. S.G. thanks the MIT-Takeda Program for financial support. S.G. and C.W.C. thank the Machine Learning for Pharmaceutical Discovery and Synthesis consortium for additional support.
2302.10969
Measuring city-scale green infrastructure drawdown dynamics using internet-connected sensors in Detroit
The impact of green infrastructure (GI) on the urban drainage landscape remains largely unmeasured at high temporal and spatial scales. To that end, a data toolchain is introduced, underpinned by a novel wireless sensor network for continuously measuring real-time water levels in GI. The internet-connected sensors enable the collection of high-resolution data across large regions. A case study in Detroit (MI, US) is presented, where the water levels of 14 GI sites were measured in-situ from June to September 2021. The large dataset is analyzed using an automated storm segmentation methodology, which automatically extracts and analyzes individual storms from measurement time series. Storms are used to parameterize a dynamical system model of GI drawdown dynamics. The model is completely described by the decay constant α, which is directly proportional to the drawdown rate. The parameter is analyzed across storms to compare GI dynamics between sites and to determine the major design and physiographic features that drive drawdown dynamics. A correlation analysis using Spearman's rank correlation coefficient reveals that depth to groundwater, imperviousness, longitude, and drainage area to surface area ratio are the most important features explaining GI drawdown dynamics in Detroit. A discussion is provided to contextualize these findings and explore the implications of data-driven strategies for GI design and placement.
Brooke E. Mason, Jacquelyn Schmidt, Branko Kerkez
2023-02-21T19:56:12Z
http://arxiv.org/abs/2302.10969v1
Measuring city-scale green infrastructure drawdown dynamics using internet-connected sensors in Detroit ###### Abstract The impact of green infrastructure (GI) on the urban drainage landscape remains largely unmeasured at high temporal and spatial scales. To that end, a data toolchain is introduced, underpinned by a novel wireless sensor network for continuously measuring real-time water levels in GI. The internet-connected sensors enable the collection of high-resolution data across large regions. A case study in Detroit (MI, US) is presented, where the water levels of 14 GI sites were measured in-situ from June to September 2021. The large dataset is analyzed using an automated storm segmentation methodology, which automatically extracts and analyzes individual storms from measurement time series. Storms are used to parameterize a dynamical system model of GI drawdown dynamics. The model is completely described by the decay constant \(\alpha\), which is directly proportional to the drawdown rate. The parameter is analyzed across storms to compare GI dynamics between sites and to determine the major design and physiographic features that drive drawdown dynamics. A correlation analysis using Spearman's rank correlation coefficient reveals that depth to groundwater, imperviousness, longitude, and drainage area to surface area ratio are the most important features explaining GI drawdown dynamics in Detroit. A discussion is provided to contextualize these findings and explore the implications of data-driven strategies for GI design and placement. ## 1 Water Impact Statement Globally, green infrastructure (GI) has become a popular stormwater management solution, but its impact on the larger urban drainage landscape remains unverified. A low-cost, low-maintenance sensor is introduced for real-time, high-resolution GI monitoring. When coupled with an automated data toolchain, we show how investments in monitoring networks support a more targeted and data-driven approach to GI placement, planning, and maintenance. ## 2 Introduction Urban areas around the world are struggling to manage stormwater runoff and flooding, a challenge compounded by rapid urbanization and climate change.[1, 2] Gray infrastructure, which consists of gutters, drains, and pipes, is the traditional method for collecting and conveying stormwater away from urban areas. Recently, green infrastructure (GI) has become a popular alternative, used either as a standalone stormwater management practice or in concert with traditional gray infrastructure.[3, 4] GI attempts to mimic the natural water cycle by using plants, soil, and landscape design to capture and filter local runoff.[3, 5] One of the most common GI practices is bioretention cells, or rain gardens, which are depressed vegetated areas that capture and reduce runoff by allowing it to evapotranspire or exfiltrate into the surrounding soil.[6] Communities worldwide are investing in GI for managing stormwater at increasing scales. 
For example, China plans to spend over US$ 1.5 trillion on GI in 657 cities by 2030.[7] In the midwestern US, the city of Detroit, Michigan invested US$ 15 million in GI between 2013-2017 and will invest US$ 50 million by 2029.[8] These investments assume adding more GI assets will positively impact stormwater outcomes; however, sufficient data to support this claim have yet to be produced.[3, 5, 9, 10] Real-time monitoring of stormwater infrastructure at high temporal and spatial resolutions is now possible with Internet of Things (IoT) technologies.[11, 12] Real-time sensing has been successfully deployed to monitor depths and flows in stormwater[13] and sewer networks.[14, 15] Recently, some studies have used sensors, such as pressure transducers connected to data loggers, to monitor GI.[16, 17, 18, 19] While these studies provided high resolution measurements, they required frequent field maintenance (e.g., downloading the data onsite, replacing batteries), making this approach impractical for obtaining large-scale and/or long-term data. Therefore, there is still a need for GI IoT solutions. To that end, we introduce an end-to-end data toolchain based on new wireless sensors for estimating real-time drawdown in GI, the speed at which stormwater is evapotranspired and exfiltrated into the native soil.[5, 18] These wireless sensors are low-cost, easy to install, and can be deployed at scale to create large, long-term, high-resolution datasets of urban drainage conditions. When combined with an analytics toolchain, our approach can be used to automatically learn GI dynamics from data on a storm-by-storm basis. To study the value of a city-wide dataset, we present a case study of these GI sensors deployed in Detroit. This novel dataset is used to characterize the drawdown dynamics of GI over multiple storms. The core contribution of this paper is a new sensor and data analysis methodology, along with experimental results that show which factors are the strongest predictors of drawdown dynamics for the studied GI network. ## 3 Background ### GI design standards Many communities rely on established stormwater management manuals, which detail how to select, design, construct, and maintain stormwater infrastructure, including GI. A manual's goal is to set forth best management practices which will elicit a certain level of performance, such as mitigating peak flow or infiltrating a certain fraction of runoff.[20] Regional and local manuals set design requirements (e.g., site selection, GI selection/sizing, soil media composition, underdrain sizing, plant selection) as well as performance metrics.[6] These design requirements and performance metrics exist for a variety of reasons, for example to ensure public safety and limit liability by eliminating trip hazards, adding barriers around water features, and reducing standing water to control mosquitos, but most fundamentally, to ensure that stormwater is being managed consistently across various sites. 
As an example, in the US, two common metrics for rain gardens and bioretention cells include the maximum allowable ponding time, generally 12-48 hours,[21, 22, 23] and infiltration rate, typically 2.5-5 cm/hr.[6, 21, 23] While infiltration rates can vary substantially even within the same GI, drawdown rates are representative of the entire system.[24, 25] The drawdown rate of GI is a function of the design features, building and maintenance practices, and the surrounding and underlying physiographic features.[3, 26] Design features include size, soil type, and vegetation. During site construction, how the sites are excavated and graded can cause significant soil compaction which ultimately impacts GI drawdown rates.[27] Physiographic features include the native soils, topography, land use type, depth to groundwater, and sunlight.[3, 28] While GI design can be optimized, in most cases the surrounding physiographic features cannot be changed. These features may have a strong effect on GI drawdown. For example, a shallow groundwater table (\(<\) 2-3 m) may result in more saturated media, which forms a smaller hydraulic gradient, impeding infiltration into the GI and exfiltration out of the GI into surrounding native soil.[29, 30] This suggests that the drawdown rate of GI is governed by a complex interaction between its design features and surrounding physiographic features. Few large-scale data sets exist to verify these patterns at scale, however. ### GI measurements Monitoring is needed to confirm whether a GI is meeting desired management goals. Additionally, monitoring can be used to determine whether local stormwater manuals are setting appropriate design standards and performance metrics. Due to the sheer number of sites and the cost of measuring quantitative metrics such as drawdown rate, cities often rely on visual inspection or modeling to assess performance.[5] If GI monitoring is carried out, it is generally limited to certain time periods and conditions.[3, 5, 31] Drawdown rate has been traditionally measured via drawdown testing. A GI is filled with water (either synthetically or via rainfall) until ponding occurs, then the drain depth and time are recorded to calculate the drawdown rate.[19, 26] These measurements are typically conducted manually with the help of a watch and gauge plate. Drawdown testing is generally only done pre- and post-installation,[18] but occasionally assets are tested as they age to track how they change over time.[17, 32] Unfortunately, the laboriousness of drawdown testing results in most communities having sparse datasets of in-situ GI drawdown. Furthermore, drawdown is inherently non-linear[18], meaning that drawdown rate may change over the course of a storm and in response to ambient conditions. To gain a complete picture of GI behavior, more data are needed than what can be obtained from a single drawdown test taken during a single storm event. Recent technological advances have opened up new possibilities for low-cost, high resolution stormwater sensing.[11, 12] Despite their availability, the uptake of these technologies for GI management has been limited. According to a national survey of officials in water utilities and agencies, however, assumed high construction and maintenance costs associated with smart GI are the two main barriers to adoption.[33] As such, the concept has yet to be vetted at scale. 
## 4 Materials and methods ### Green infrastructure wireless sensors A wireless sensor was designed to continuously measure drawdown in GI (Fig. 1). Specifically, the device measures water level fluctuations in real-time. At the time of writing, the sensor costs approximately USS 1,000 to build and USS 25 annually for telecommunication and data storage services. The form factor of the sensor is similar to a water well, consisting of a 1.5 m long, slotted PVC pipe with one end holding the sensor and the other holding the remaining hardware components. The sensor uses the vetted Open Storm hardware and cloud services stack detailed in Bartos et al. (2018).[13] The hardware layer relies on an ultra-low power ARM Cortex-M3 microcontroller (Cypress PSoC). The microcontroller manages the sensing and data transmission logic of the embedded system. The sensor measures water levels to a reported accuracy of \(\pm\)0.762 cm using a pressure transducer (Stevens SDX 93720-110), which converts a barometric reading to a 4-20 milliampere (mA) output. The sensor is equalized for atmospheric pressure changes and was calibrated in the laboratory using a standard water column. The device is connected to the internet with a 4G LTE CAT-4 cellular modem (Nimbelink NL-SW-LTE). The cellular modem enables bi-directional communication between the sensor and a remote cloud-hosted web server. The device is powered using a 3.7 V lithium-ion battery (Tenergy) that is recharged by a solar panel (Adafruit 500). Power consumption measurements were used to confirm that when the device is on, power consumption is in the milli-amperage range and when the device is in sleep mode, it is in the micro-amperage range. With these power consumption numbers the sensor can stay in the field for up to 10 years without needing a battery replacement. The main reason for field maintenance occurs if a sensor malfunctions. The first type of sensor malfunction is sensor drift, which is defined as a small temporal variation in the sensor output under unchanging conditions. Sensor drift can be detected in this case when the sensor's "zero" reading changes over time. The other type of sensor malfunction occurs if a sensor provides a zero reading during periods of rainfall. There are several possible explanations for this malfunction. First, since the sensor operates by converting current to depth, there could be an issue with the analog circuitry resulting in inaccurate current measure ments. Second, the sensor could be physically damaged during node assembly or deployment. Third, the sensor provides a venting tube for equalizing atmospheric pressure changes. Although a cap is added to the tube to keep moisture out, if the cap is faulty, condensation can enter the tube and cause inaccurate readings. Finally, the PVC well may clog with sediment. To rectify any of the above sensor malfunctions, the sensor is swapped for a new one, which only takes a few minutes of field work. The sensor measurements were validated in the field using a gauge plate and digital, time-lapse photography by an outside consultant.[34] During rain events, photos were taken of the ponded water and gauge plate measurement every ten minutes (ESI Fig. A1). There was an average alignment of 11 mm between the camera-recorded and sensor-recorded depth measurements (ESI Fig. A2). Installation of the sensor takes less than 30 minutes by one person and requires digging a 1 m deep hole using a simple, off-the-shelf, handheld post hole digger. 
The sensor is placed in the hole and backfilled with soil. Real-time data begins streaming to a web dashboard as soon as the unit is deployed. The sensor is deployed such that a water level of 0 m indicates dry conditions, while a measurement above 1 m indicates water is ponding on the surface. The sensor takes measurements every ten minutes and reports data to the server once every hour. Measurements are transmitted over the cellular network via a secure connection to a cloud-hosted server. Data and metadata are stored in an InfluxDB database.[35] Measurements are then made available for visualization and sharing with partners through Grafana,[36] a dashboarding software used to plot measured water level over time. Both InfluxDB and Grafana instances are hosted on an Amazon Web Services (AWS) Elastic Compute Cloud (EC2) instance.[37] The system is entirely open source and the complete codebase, hardware schematics, and how-to guides have been made available as part of this paper on github.com/kLabDM/GI_Sensor_Node. ### Automatically learning GI dynamics from data To enable comparisons between sites without losing temporal information due to averaging, we synthesize and parameterize a drawdown model automatically from data. We assume that water levels inside GI can be approximated as a first-order linear dynamical system, which evolves according to the differential equation: \[\frac{dh}{dt}=\alpha h\,;\,\alpha<0 \tag{1}\] where \(h\) represents the water level in GI and \(\alpha\) is the decay constant, a measure of how fast the water level inside a GI recedes following a storm. In this formulation, this decay constant is directly proportional to drawdown rate and provides a single parameter that can be compared between sites. A relatively larger magnitude \(\alpha\) corresponds to a faster rate of drawdown, while a smaller magnitude \(\alpha\) corresponds to more slowly changing water levels. More relevant to cross comparisons between sites, however, is that \(\alpha\) embeds both temporal and magnitude information in one parameter. In other words, two sites could have similar bulk performance metrics, such as average volume capture over 24 hours, but exhibit vastly different drawdown curves. As such, studying the decay constant \(\alpha\) allows us to compare sites while taking advantage of the temporal granularity of our sensor data.

Fig. 1: A GI sensor installed in a rain garden (top right). The sensor's hardware layer (center) includes the PVC well, microcontroller, cellular modem, and pressure transducer. The cloud services layer (left) includes the database backend, along with applications for controlling sensor behavior and visualizing data (bottom right).

Linear regression is used to fit the drawdown model to the water level sensor data of each storm. To fit the data to Eqn. 1, we find the fit that best captures the relationship between the water level and its first derivative \([h(t),\frac{dh}{dt}]\) (Fig. 2, left col.). The slope of this line is the decay constant, \(\alpha\). This method selects the most dominant rate of decay in the data. The fit of the model is evaluated using two metrics: the coefficient of determination (R\({}^{2}\)) and root mean squared error (RMSE). To illustrate the methodology, the fit of the drawdown model to the sensor data for three distinct storms is shown in Fig. 2. 
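As a concrete illustration of this fitting procedure, the sketch below estimates \(\alpha\) from a single storm's water-level recession by regressing \(dh/dt\) on \(h\), and then uses the fitted constant to estimate how long a GI would take to drain from a given starting level. The sampling interval, the "empty" threshold, and the example value of \(\alpha\) are illustrative assumptions, not the toolchain's actual code.

```python
import numpy as np

def fit_decay_constant(levels, dt_hours=1/6):
    """Fit dh/dt = alpha * h to one storm's recession limb.

    levels: water levels (m) sampled every dt_hours (10-minute data -> 1/6 h).
    Returns the decay constant alpha (1/hour), i.e. the slope of dh/dt vs h.
    """
    h = np.asarray(levels, dtype=float)
    dh_dt = np.gradient(h, dt_hours)
    alpha, _intercept = np.polyfit(h, dh_dt, 1)
    return alpha

def hours_to_drain(h0, alpha, h_empty=0.01):
    """Time for h(t) = h0 * exp(alpha * t) to fall below h_empty (alpha < 0)."""
    return np.log(h_empty / h0) / alpha

# With an illustrative alpha of -0.2 per hour, one metre of ponded water
# falls below 1 cm in roughly 23 hours, consistent with a 24-hour target.
print(round(hours_to_drain(1.0, -0.2), 1))
```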
Since we calculate \(\alpha\) for every storm, drawdown dynamics of each site can be compared on a storm-by-storm basis, or the set of \(\alpha\)'s can be combined into a single value for a given site. A single value of \(\alpha\) can be thought of as a regression in \([h(t),\frac{dh}{dt}]\) feature space across all storms. This allows us to model the expected water level drawdown curve for a future storm. The resulting model could be used to inform estimates on how long a GI would take to drain given an initial water level of \(h(0)\) m, for example. A parameterized decay model can also be used to simulate the GI's behavior as part of a broader hydrologic simulator (e.g., US EPA SWMM [38]). #### 4.2.1 Implementation An automated process is developed to identify individual storms in the sensor data. This methodology requires water level time data, in this case provided by our sensors. Storm events are automatically identified by marking local minima and maxima using the find_peaks() function of Python's SciPy signal library [39]. To find the maxima we pass the water level time series to the function, which returns a list of indices corresponding to peaks (local maxima). To find the minima, we pass the negative of the water level time series, which then returns a list of indices for local minima. We use two of the function's optional parameters to refine which points qualify as "peaks": prominence (\(p\)) and distance. Prominence is a measure of how high a local maximum stands out in comparison to its neighboring local minima. The prominence parameter was adjusted for each site such that the selected peaks corresponded reasonably well to local rainfall measurements and captured a meaningful segment of water level drawdown for each storm. We set the distance parameter to 3 hours, meaning adjacent local minima/maxima must be at least 3 hours apart to be selected. An example of the resultant automated storm segmentation is provided in Fig. 2, top row. While rainfall data are not required for the method, they can nonetheless be used as a secondary check, by visually lining up storms detected in the water levels with those measured by nearby rain gauges. Once the storms were isolated, the drawdown model is fit to the data using the polyfit() function of Python's NumPy library [40]. The function uses least squares to fit a polynomial to the provided data. We pass \([h(t),\frac{dh}{dt}]\) to the function with the degree set to one. The function returns the \(\alpha\) that minimizes the squared error. Fig. 2 (rows 2-4) shows these fits along with the resultant drawdown model for three storms measured at the same site. Taken out of the differential form, the drawdown model follows \(x=Ce^{\alpha t}+b\), where \(C\) and \(b\) are scaling and offset parameters that are adjusted to fit the magnitude of the storm. The coefficient of determination (R\({}^{2}\)) and root mean squared error (RMSE) are calculated for each fit using Python's Scikit-learn library [41]. ### Case study We selected Detroit, Michigan, US for the GI monitoring network (latitude 42°19'53", longitude -83°24'4"). Detroit has a unique opportunity for extensive GI installations because approximately 103 km\({}^{2}\) (28%) of the city is classified as vacant land [42]. The city is located at the outlet of three major watersheds (i.e., Rouge River, Clinton River, Lake St. Clair) where flows eventually discharge into either Lake St. Clair or the Detroit River. 
Due to Detroit's location in the floodplain, most of its soil is poorly drained clay and silt [43]. Detroit also has a shallow groundwater table. Teimoori et al. (2021) found that the modeled depth to groundwater in Detroit ranged from approximately 1-3 meters below the ground surface [44]. Detroit's climate follows a four-season pattern, with average temperatures ranging from \(-\)7.11\({}^{\circ}\)C to 28.7\({}^{\circ}\)C. Detroit averages 87 cm and 137 days of precipitation per year [45]. Precipitation is dispersed relatively evenly throughout the year as rain and snow, but heavier amounts occur in spring and winter [43]. Detroit has a combined sewer system for managing stormwater and wastewater which flows into the second largest wastewater plant in the world [43]. During extreme rainfall events in 2021, the sewer conveyance and wastewater plant's treatment capacity was exceeded on multiple occasions, resulting in billions of gallons of raw sewage being directly discharged into Detroit waterways [46]. In addition, residential basements were flooded with sewage-laden runoff [46]. The need to mitigate flooding and sewer overflows has driven the City of Detroit and organizations like the Detroit Sierra Club to prioritize GI installations [8]. In partnership with the Detroit Sierra Club, a non-profit organization, 14 GI sites were selected for deployment in summer 2021 across 155 km\({}^{2}\) of Detroit to monitor GI performance (Fig. 3). Since 2015, the Detroit Sierra Club has been working with community partners and Detroit residents to build GI, primarily small residential rain gardens. GI were selected that varied in terms of age, size, and surrounding land use type. Twelve sites were rain gardens designed and built by Detroit Sierra Club and their partners, and two were engineered and commercially built bioretention cells. The design and site data for the GI were provided by Detroit Sierra Club (ESI Table A1). Moving forward, each site is identified by an alpha numeric code (e.g., S1 for site 1). ### Correlation analysis Once the decay constants were extracted from the Detroit sensor network, a correlation analysis was conducted to determine which design and physiographic features explain GI drawdown, as quantified by the decay constant \(\alpha\). Design features included the GI's location, surface area, drainage area, storage volume, soil media depth, age, and drainage area to surface area ratio (DA/SA ratio). The DA/SA ratio was calculated by dividing the drainage area by the surface area. The physiographic features for each GI were extracted from public GIS datasets of percent imperviousness, land use type, elevation, slope, native soil type (i.e., hydro logic soil group), and depth to groundwater. ESI Section B provides detailed steps on how the GIS datasets were downloaded, processed, and the features were extracted for each GI. The datasets investigated included both non-normal continuous (e.g., surface area, elevation) and ordinal (e.g., land use type, hydrologic soil group) variables. To handle both types of variables, Spearman's rank correlation coefficient was selected for the correlation analysis [47]. Spearman's rank correlation coefficient is a nonparametric measure of the strength and direction of the monotonic relationship between two ranked variables [48]. Spearman's rank correlation coefficients were computed using the corr() function of Python's Pandas library [49]. 
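A minimal sketch of this correlation step is shown below. The column names and all values are illustrative placeholders; the following paragraph describes the actual dataframe passed to the function.

```python
import pandas as pd

# Illustrative frame: one row per GI site, with the mean decay constant
# alongside assumed design and physiographic feature columns.
sites = pd.DataFrame({
    "alpha_mean": [-0.04, -0.01, -0.30, -0.20],
    "da_sa_ratio": [3.1, 9.4, 1.8, 2.2],
    "groundwater_depth_m": [9.5, 4.0, 11.2, 10.1],
    "imperviousness_pct": [53, 92, 38, 45],
})

# Spearman handles the mix of non-normal continuous and ordinal features.
rho = sites.corr(method="spearman")
print(rho["alpha_mean"])
```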
A dataframe of the mean decay constants, physiographic features, and design features for the GI monitoring network was passed to the function. The function requires a correlation method, which was set to 'spearman'. Readers are directed to a Zenodo web portal to freely obtain the data and code referenced in this paper [50]. ## 5 Results ### Sensor network performance Deployment of the GI monitoring network began mid-June 2021 and 14 operational sensors were deployed by early July 2021 (installation dates provided in ESI Table A2).

Fig. 2: (Top row) Time series water level measurement from a GI overlaid with nearby publicly-available precipitation data. The orange boxes indicate distinct storm events automatically detected by a peak finding algorithm. The decay constant \(\alpha\) is fit for three distinct storms in the same GI. (Rows 2–4, left) To find \(\alpha\), we fit a line for the relationship between water level (x-axis) and the change in water level (y-axis). (Rows 2–4, right) The found \(\alpha\)'s are then plotted against the actual water levels experienced from the three distinct storms. The \(R^{2}\) value for each fit is also provided.

The measurement period consists of data collected between June 15, 2021, and September 1, 2021. During the measurement period, there were only two instances of prolonged data loss: S8 and S12 had a two-hour and 24-hour data gap, respectively. These losses did not impact the measurement of storm response at either site. Sensor drift was not an issue, with an average drift of < 2.5 cm. There was one maintenance trip on August 11th to swap S12's sensor because it indicated the GI was empty during periods of rain (ESI Table A2). ### GI drawdown analysis The measurement period coincided with Detroit's 7th wettest summer on record, which included several historic rain events: 15.2 cm of rain on June 25th, 5.6 cm on July 16th, and 6.9 cm on August 12th.[51] During the measurement period, a total of 122 storms were identified across the network (orange boxes in Fig. 4 (left)). Of the 122 storms, 15 storms were excluded as outliers from the analysis due to poor fit of the drawdown model (negative R\({}^{2}\)). A mean of 7.4 storm events were analyzed for each site with the number of distinct storm events varying widely per site: 21 for S11 versus 1 for S8. The variation in the number of storms captured by site is due to both the installation date (see ESI Table A2) and the spatial variation in rainfall.[52] The mean fit of the drawdown model to the sensor data was R\({}^{2}\) = 0.746 \(\pm\) 0.111 and RMSE = 8.579 \(\pm\) 4.168. The fitted decay constant \(\alpha\) varied by storm and by GI (Fig. 4 (right)). Across all storms and sites, the mean decay constant \(\alpha\) and standard deviation was \(-\)0.119 \(\pm\) 0.124 hr\({}^{-1}\). The average decay constant per site varied by two orders of magnitude, from \(-\)0.011 hr\({}^{-1}\) (S2) to \(-\)0.397 hr\({}^{-1}\) (S8). The number of storms identified versus analyzed, as well as the mean decay constant \(\alpha\), RMSE, and R\({}^{2}\) for each GI, are provided in Table 1. The decay constant \(\alpha\) corresponds with the GI's drainage dynamics. During the measurement period, most GI completely drained between storm events (S4, S8-S11), providing full storage for the next storm event (Fig. 4 (left)). S2, S6, and S12 always had some water present in their soil media, limiting the amount of storage for each subsequent storm. 
During the measurement period, most sites experienced ponding (water level > 1 m). However, ponding did not exceed 12 hours for most sites (11 of 14 sites). S6, S11, and S9 experienced extended periods of ponding during the June 25th storm for 22, 29, and 21 hours, respectively. Sites S6 and S11 also experienced extended ponding for approximately 24 hours during the July 16th storm, and S11 ponded for about 16 hours during the August 12th storm.

| Site | No. Storms Analyzed | \(\alpha\) (mean) | RMSE (mean) | \(R^{2}\) (mean) |
|------|--------------------:|---------:|------------:|----------:|
| S1 | 11/11 | -0.040 | 5.159 | 0.834 |
| S2 | 3/3 | -0.011 | 6.306 | 0.875 |
| S3 | 4/4 | -0.044 | 4.776 | 0.885 |
| S4 | 9/12 | -0.305 | 9.109 | 0.524 |
| S5 | 9/9 | -0.146 | 4.611 | 0.727 |
| S6 | 5/6 | -0.024 | 3.420 | 0.916 |
| S7 | 9/9 | -0.069 | 6.088 | 0.802 |
| S8 | 1/3 | -0.397 | 2.998 | 0.922 |
| S9 | 9/12 | -0.102 | 15.964 | 0.606 |
| S10 | 11/12 | -0.119 | 13.744 | 0.697 |
| S11 | 21/24 | -0.200 | 12.209 | 0.738 |
| S12 | 3/3 | -0.047 | 4.531 | 0.806 |
| S13 | 6/6 | -0.072 | 3.777 | 0.921 |
| S14 | 7/8 | -0.021 | 6.630 | 0.637 |

Table 1: The results from fitting the decay model for the storms captured by the GI monitoring network. We report the mean decay constant \(\alpha\) for each GI and how well the decay constant \(\alpha\) fit the sensor data as measured by RMSE and \(R^{2}\).

Figure 3: Map of the 14 GI sites selected for sensors in Detroit.

### Correlation analysis

Spearman's rank correlation coefficients between the GI design features and the decay constants ranged from 0.01 (site age) to 0.34 (DA/SA ratio) (Fig. 5). The decay constants were most correlated with the DA/SA ratio (0.34) and drainage area (0.23). Drainage area and DA/SA ratio were highly correlated with each other (0.92); therefore, we focus analysis on the DA/SA ratio. The sites with the largest DA/SA ratios had the smallest magnitude decay constants (i.e., drained the slowest). Soil media depth, storage volume, surface area, and age had limited impact on the decay constants (0.16, \(-\)0.09, 0.06, and 0.01, respectively).

Fig. 4: (left) Water level (m) measured across all sites on the left y-axis with rainfall (cm) on the right y-axis. Storm events are highlighted by the orange boxes. Prominence (p), the minimum increase in water level needed for a storm event to be considered distinct, is labeled for each site. (right) A boxplot showing the variance in each GI's decay constants measured for all highlighted storms.

The correlation coefficients between the physiographic features and the decay constants ranged from \(-0.02\) (slope) to \(-0.64\) (groundwater depth) (Fig. 5). The decay constants were most correlated with groundwater depth (\(-0.64\)), latitude (\(-0.56\)), imperviousness (\(0.43\)), and longitude (\(0.37\)). The closer groundwater was to the surface, the slower the site drained (i.e., the smaller the decay constant's magnitude). Groundwater is also highly correlated with latitude (\(0.98\)), which explains the correlation between latitude and the decay constants. Longitude, however, is not correlated with groundwater but still has a positive correlation with the decay constants. The decay constants' magnitude decreases for sites further away from the western border towards central Detroit, where the smallest magnitude decay constants are, increasing again towards the eastern border. 
In terms of imperviousness, the greater the imperviousness, the smaller the decay constant's magnitude. This was not always the case, however. For example, S1 and S12 are \(53\) and \(52\)% impervious and their mean \(\alpha\)'s are \(-0.040\) and \(-0.047\) hr\({}^{-1}\), respectively, while S9 is \(92\)% impervious with a mean \(\alpha\) of \(-0.102\) hr\({}^{-1}\). The remaining physiographic features are either highly correlated with the explanatory variables discussed above (elevation and longitude: \(-0.73\); land use type and imperviousness: \(0.80\)) or are minimally correlated with the decay constants (hydrologic soil group: \(0.10\); slope: \(-0.02\)). The relationship between the decay constant and its most correlated design feature, DA/SA ratio, and physiographic feature, groundwater depth, was explored further. We show groundwater depth versus DA/SA ratio for estimated decay constants in Fig. 6a. Given that decay constants were retrieved for individual sites and individual storms, the figure reflects an averaged surface fit across all the observations. The shape of Fig. 6a is bounded by the observations made by the sensor network and was not extrapolated beyond those bounds. The colored contours indicate the expected decay constant based on the combination of groundwater depth and DA/SA ratio. The red contours indicate slower drawdown while the blue/grey contours indicate faster drawdown. To frame the interpretation of the figure, the corresponding drawdown rates are also color coded in Fig. 6b. In our study, decay constants with magnitudes \(\geq 0.20\) hr\({}^{-1}\) result in the drainage of one meter of water in under \(24\) hours (Fig. 6b). Fig. 6a shows there are various combinations of groundwater depth and DA/SA ratio that achieve this performance metric. On one end of the spectrum, groundwater can be as shallow as \(7.5\) m if it has a small DA/SA ratio of \(1-2\). On the other end of the spectrum, groundwater must be at least \(10\) m deep with a DA/SA ratio no larger than 8. Furthermore, if the groundwater table is \(<7.5\) m, a slower drawdown rate is observed regardless of the DA/SA ratio (bottom edge of Fig. 6a). Similarly, when the DA/SA ratio is \(>\)8, the drawdown rate is slow regardless of the groundwater depth (right edge of Fig. 6a).

Fig. 5: Spearman's rank order correlation coefficients for the decay constants, design features, and physiographic features.
The GI met and exceeded the requirement specified by Detroit's GI design manual that ponding time should not exceed 24 hours.[23] Below the ground surface, the performance varied by site and storm. To completely drain 1 m of water in 24 hours a GI must have a decay constant \(\geq-0.2\) hr\({}^{-1}\) (Fig. 6b). Only 2 of the 14 gardens had an average decay constant above this threshold. Therefore, most sites have restricted storage capacity when they experience consecutive storms. Fitting a drawdown model for each storm and each site resulted in variability across decay constant estimates. Statistical uncertainty is inherent in a study of this scale, and may manifest across measurements, deployment consistency, and model assumptions. Some variability in the decay constants was likely due in part to the spatial and temporal variation in rainfall.[52] The decay constants may also have been impacted by changes in GI conditions such as the swelling and shrinking of the soil media following wet and dry periods, and the creation of preferential flow paths after extended dry periods.[53] Naturally, a highly granular and continuous sensor dataset can be expected to reveal dynamics and nonlinearities that are not apparent in single measurements or short-term experimental campaigns. We contend that the use of the decay constant poses a first step in the analysis of this large dataset and provides an initial balance by enabling a metric for cross-site comparisons without compressing large amounts of sensor data into an over simplistic summary that ignores dynamics entirely. Future studies could explore the nuanced variabilities dynamics more explicitly. Cross-site comparisons of water level dynamics revealed patterns driven by site design and physiographic features. It is difficult to directly attribute the variation seen between sites to the variations in these features due to the complexity of the physical processes that govern GI drainage dynamics. The correlation analysis found broadly, however, that GI with DA/SA ratios smaller than 8 have faster drawdown rates. Therefore, when designing GI, the size of the garden in relation to the size of the drainage area is critically important. These results align with Davis (2007),[54] which found that a large cell media volume to drainage area ratio and drainage configurations were the two most dominant factors that improved GI performance. Across the broader landscape, GI drawdown dynamics were highly correlated with two physiographic features: groundwater depth and longitude. Faster drawdown rates were correlated with a deeper groundwater table and locations on the outskirts of Detroit. This illustrates the importance of evaluating groundwater levels when planning urban GI installations, especially since many urban areas have shallow groundwater tables,[55] including Detroit.[44] The correlation with longitude may be explained by prolonged soil compaction from development in central Detroit.[56] Some physiographic features had low correlation with the decay constants. Detroit is relatively flat, which may explain the low correlation with elevation and slope. The low correlation between the decay constants and the hydrologic soil group of the surrounding soil is more difficult to posit. Our physiographic input data were limited to public datasets, whose accuracy is driven by factors outside of the control of this study. The low spatial resolution of publicly available raster datasets may oversimplify the physiographic features at a GI site. 
In the future, site surveys may provide better data for analyzing these physiographic features interaction with the decay constants. Our results have several implications for the future of stormwater management. Considering the broader urban drainage landscape and the potential impact of physiographic features on GI drawdown rates, measurements should become a core component of how managers choose to invest in GI. For example, measuring the drawdown rate, groundwater depth, and/or soil compaction at a site before installation could reduce the risk of installing GI in locations that will have impeded drainage regardless of how well they are engineered. Beyond single sites, an investment into an entire measurement network may help support a more targeted and data-driven approach to GI placement, planning, and maintenance. The application of this methodology could result in empirical design guidance, such as an empirical "heatmap", as shown in Fig. 6a. Such illustrations could serve as a field-validated guide for managers who want to push the performance of their infrastructure without focusing all of their limited resources into one particular design or locale. Naturally, this would require the collection and analysis of more data, but the increasing reliability of technology and automation afforded by some of the tools in this paper may reduce the barrier to adoption. One potential limitation of this work is the duration of our study period. Over longer periods of time we would expect to see fluctuations in the decay constants due to seasonal conditions (e.g., the rate of evapotranspiration falling during colder months[57]) and due to longer-term trends (e.g., deterioration of the GI's drainage capacity due to clogging[58]). In future work, how the decay constants vary over time should be investigated to determine these seasonal and long-term changes. The reliability of the sensors should enable long-term data collection with reduced measurement overhead. ### Beyond site-level drawdown dynamics This study used the high temporal and spatial resolution dataset produced by a sensor network to provide a first order analysis of the variability in GI drawdown dynamics, but the sensor network could also be used for a variety of other purposes. Large GI sensor networks have potential for use in long-term GI monitoring. These data can used to develop a deeper understanding of how GI installations fit into the larger urban drainage network, but this may also require the application of expanded tools for data analysis. Given the accessibility to and availability of modern Machine Learning libraries, the data collected by these networks could be used to inform predictive tools and interactive design guides. The sensor data can also be used to iterate on site design or inform maintenance schedules. Measurements showing when drainage slows over time could indicate that the GI soil media is clogged and should be replaced. A science-based method to validate such scenarios should be investigated. These data may also be used for community education and engagement by communicating to residents and community groups how and where GI may be expected to work well. ## 7 Conclusion This study introduces a wireless, real-time sensor for measuring GI drawdown. Networked together across Detroit, these sensors provide high temporal and spatial resolution data for analyzing city-scale urban drainage conditions. 
To isolate individual storms in this large dataset, we designed an automated storm segmentation methodology based on peak finding. To our knowledge, this study is the first to monitor GI at this scale and combine it with a data-driven workflow to reveal explanatory features of drawdown dynamics. In Detroit, the groundwater table, imperviousness, longitude, and DA/SA ratio are the most important features impacting drawdown rates. To confirm this finding for other regions, high resolution and long-term GI monitoring is necessary. ## Author Contributions **Brooke E. Mason:** Conceptualization; Methodology, Software, Validation, Data Curation, Formal Analysis, Investigation, Writing - Original Draft, Writing - Reviewing and Editing, Visualization, Supervision **Jacquelyn Schmidt:** Conceptualization, Methodology, Software, Data Curation, Writing - Original Draft, Writing - Reviewing and Editing, Visualization **Branko Kerkez:** Conceptualization, Methodology, Resources, Writing - Reviewing and Editing ## Conflicts of interest There are no conflicts to declare. ## Acknowledgements We would like to thank our collaborators Erma Leaphart, Elayne Elliott, and Cyndi Ross with the Detroit Sierra Club. We would like to thank all the site owners for allowing us to install sensors at their homes, churches, and schools. We would like to thank Angela Hojnacki, Ian Thompson, and Kevin Kaya for installing the sensor network. We would like to thank Lance Kruse for his expertise in statistics. Finally, we would like to thank our Project Manager, Kate Kusiak Galvin. This work was funded by the U.S. National Science Foundation (Award Numbers: 1737432 and 1750744).
2303.01595
Developing a Compiler for EROP -- A Language for the Specification of Smart Contracts, An Experience Report
A smart contract is a translation of a standard paper-based contract that can be enforced and executed by a contract management system. At a high level of abstraction, a contract is only a document that describes how the signing parties are to behave in different scenarios; nevertheless, the translation of a typical paper-based contract to its electronic counterpart has proved to be both time-consuming and difficult. The requirement for a language capable of capturing the core of a contract in simple phrases and definitions has been a focus of study for many years. EROP (Events, Rights, Obligations, Prohibitions) is a contract specification language that breaks a contract down into sets of events, rights, obligations, and prohibitions.
Adrian Delchev, Ioannis Sfyrakis, Ellis Solaiman
2023-03-02T21:35:25Z
http://arxiv.org/abs/2303.01595v1
Developing a Compiler for EROP - A Language for the Specification of Smart Contracts, An Experience Report ###### Abstract A smart contract is a translation of a standard paper-based contract that can be enforced and executed by a contract management system. At a high level of abstraction, a contract is only a document that describes how the signing parties are to behave in different scenarios; nevertheless, the translation of a typical paper-based contract to its electronic counterpart has proved to be both time-consuming and difficult. The requirement for a language capable of capturing the core of a contract in simple phrases and definitions has been a focus of study for many years. EROP (Events, Rights, Obligations, Prohibitions) is a contract specification language that breaks a contract down into sets of events, rights, obligations, and prohibitions. Keywords: Smart Contracts, Blockchain, Monitoring, Enforcement, On chain, Off chain, IoT, Privacy, Trust. ## 1 Introduction Both businesses and academia require novel methods of automating the process of smart contract compliance monitoring [1][2]. A well-designed contract management system may significantly minimise the human component of contract compliance monitoring and verify that the business operations of partners adhere to the terms of the contract being enforced. A management system is required to monitor the interactions between parties to a contract to ensure that their company operations and activities are carried out appropriately and in accordance with the contract's terms. The CCC (Contract Compliance Checker),an impartial, third-party monitoring service developed at Newcastle University [3][4] is an example of such a service. The system itself was designed with a conceptual language for writing smart contracts called EROP. EROP is a contract definition language that uses JBoss Rules, sometimes known as Drools, for rule administration. EROP stands for events, rights, obligations, and prohibitions [5]. The implementation of the CCC relies on the EROP ontology - a set of concepts and relationships used for modeling the execution of business operations between partners, and for reasoning about the compliance of their actions. The EROP ontology is implemented in JAVA as an extension to the Drools engine, which allows for a better, more direct mapping of the EROP specification language to the concrete implementation. Having the option to express a contract in the EROP language allows for a broader user base since only limited technical knowledge will be required to convey a contract in EROP as opposed to writing it in the extended Drools directly for monitoring. As it currently is, a translation from EROP to the extended version of Drools (also known as Augmented Drools or AD) is a manual process, which requires the contract specification in EROP to be mapped to its AD equivalent. The main contribution of this paper is to design and implement a translation engine that would automate the process of EROP to AD translation and provides a valid and correct output for an input that complies with the specifications of the EROP language. We measure the effectiveness of the solution by comparing previously confirmed and verified manual translations from EROP to AD. Our solution has been designed with a modular structure encapsulating required functionality and allowing for easier maintenance and enhancement in the future if needed. 
A translation engine for EROP to AD mapping eliminates the need for manual translation, potentially reducing translation and mapping related errors significantly and allowing better utilization of the time spent expressing a paper contract as its electronic smart contract equivalent. ## 2 Smart contracts: background ### Contract specification languages The presence of contracts in our society is ubiquitous, from simple everyday oral agreements that we take for granted to formally specified and notarized documents that have strong and profound effects on our lives. Their wide use today is undeniable and their roots can be traced back to ancient societies [6]. The advancement of electronic commerce has increased the sheer number of contracts an organization can take part in, resulting in difficulties in keeping up with the requirements of an electronic market. Creating a contract is a task that requires significant resources and effort, from hiring adept personnel to formally specify and verify the contract parameters, to negotiation between parties and mediation with the business. Electronic smart contracting aims to automate the process of contract establishment and execution while reducing development costs. To this end, a contract has to be captured in a language that has the expressive power to specify all the contract content while eliminating any ambiguities [7][8][9][4][10]. In order for a contract to be eligible to be monitored and enforced by a contract compliance system of any kind it has to be formally specified in a language that has the ability to capture the requirements of a contract including legal requirements, clauses, and internal policies, as well as the acting parties and their actions. Given that the desired outcome of a contract monitoring and compliance service is automation and execution of contracts with minimal human interference, the language that captures the contract has to be precise and free of ambiguities so that the need for manual conflict resolution does not arise. Smart contract research has gained significant momentum since the rise of Blockchain technology with the development of Bitcoin, Ethereum, Hyperledger, and other blockchain technologies. Indeed, smart contracts have been found useful in a range of application areas including education [11][12][13][14], Internet of Things [15][16], and Service Level Agreements (SLAs) [17][18][19][20][21]. The idea of smart contracts, however, has been around much earlier than the development of blockchain, and can be implemented on a host of centralised, distributed, and hybrid architectures [22][23][24]. Research into smart contracts and blockchain covers a range of areas including performance [25] and simulation [26][27]. The proposed contract representation language called DocLog [28] introduces the concept of legal advice systems capable of providing, while not as sophisticated as an actual experienced law professional, legal advice about exchanged messages. DocLog uses a tri-layer structure that combines data, text and semantic oriented approaches for exchanging contract terms. The data layer aims to provide the contract data in such a way that it can easily and efficiently be processed by a transaction processing system. The text approach is represented by a natural language layer that strives to present the contract terms in a way that is easily comprehensible by human users. 
It uses XML to structure the content of a contract which allows the support for individual clauses, sub clauses, sections, sub sections and allows the use of version and approval management systems. While DocLog does provide a relatively human readable way of capturing contract specifications, it does not provide any means of monitoring the captured the contract or allow any manipulations on the captured properties of the contract. Furthermore several important aspects of contract specifications cannot be captured using the DocLog language. Temporal aspects such as Buyer has to submit payment no later than 5 days after making an order as well as exceptional circumstances within contracts that specify what is to be done when an aberration from the norm of contract occurs are not presented in the original version of DocLog. Nevertheless the underlying architecture of the language provides insights on communication between different layers of the language and mapping between layers using the EDI Translator [29]. The presented administrative architecture [30] supports the management of e-contracts and defines the ontology used in a business partnership for the underpinned contract. This is covered in four steps: 1) Consistency based off-line verification and achievability of contract aims given the possible reachable system states. 2) Definition and compilation of the application specific processes that facilitate the execution of the contracts such as enactment, monitoring, updating, termination, renewal etc. 3) Definition of the roles of the interaction agents. 4) Identifying different components and services used by acting agents necessary for them to play their respective roles in the partnership. The main focus of the framework as discussed in [31] is centered around corrective monitoring, where violation of contractual norms are detected and then tried to be fixed using corrective measures as opposed to predictive monitoring where such violations are predicted by using the agents behavior and actions are taken in order to completely avoid undesired behavior. The actual contracts are captured in an XML based language with a multi-layered architecture. The language is capable of expressing different contract structures such as clauses, parties, groups and actions using deontic notions such as obligations, permissions and prohibitions and even though it uses a relatively high level declarative style of writing its implementability is unclear. It is possible to represent concepts, intuitively acceptable to humans but unfortunately it is not possible to unambiguously translate them in a from that would allow it to be processed by machines. Unlike DocLog, the language offers a degree of exception handling; however it appears to be limited and is not the main focus of the project and neither is usability. There is a myriad of other options when it comes to contract specification but unfortunately most of the available languages either do not focus on usability and practicality or dont have a contract monitoring system in which they can be integrated with ease. BPEL [32] uses event calculus to represent actions and their corresponding effects and is not targeted towards business contracts but for specifying web service interactions. It offers a less declarative approach than the other presented languages. Exception handing is present but recovery from exceptional cases is limited the biggest issue seems to be usability. 
The abstract notation may be too daunting for non-technical personnel to use for commercial purposes. Heimdahl [33] is a platform for monitoring obligation policies and it uses xSPL [34] for the specification of those policies. xSPL syntax is declarative but some of the language constructs are not intuitive and might be difficult to understand for people without a technical background. Exception handling also does not appear to be possible directly. ### EROP, Augmented Drools, and the CCC As introduced in [5], EROP is a contract specification language that focuses on execution and business resolution of a business partnership. The language relies heavily on events, rights, obligations, and prohibitions, and it captures the information of a conventional paper-based contract into sets of the aforementioned constructs. One of the most significant additions that distinguishes EROP from other contract specification languages is its extended capability to reason about and provide resolution for unforeseen circumstances that arise from business and technical failures. One of the strengths of the language stems from the fact that at its level of operation the low level details are abstracted away, allowing contract writers to concentrate on expressing the business operations of a contract. Another selling point of the language is its ease of use even for non-technical personnel and the already existing implementation of its ontology that allows for contractual compliance monitoring. A contract written in EROP consists of two sections: a declaration section, where all the acting role players, business operations, and composite obligations used in the rules are defined, and a rule section, which captures operations and manipulations of the entities specified in the contract as well as any actions and exceptional behavior. The formal grammar of the language as well as more details on the EROP syntax will be provided in a later section. The EROP ontology as specified in [5] is a set of concepts and their relationships within the domain of B2B interaction that we employ to model the evolution of interactions between business partners, for the purpose of reasoning about the compliance of their actions with their stated objectives in their agreements. The ideas of the EROP ontology have been implemented as a set of Java classes that capture the properties and available operations on those properties; the implementation extends the rule language offered by the Drools rule engine, adding various constructs to reason about and manipulate the operations of business partners within a contract. The implementation of the ontology, also known as Augmented Drools, is less abstract and readable than EROP and closer to Java in style. It also needs additional code for convenience and housekeeping purposes that is required for the implementation of the ontology to work but is not necessary for a human reader initializing a contract in EROP. The EROP language maps completely to Augmented Drools, which makes it possible for direct language to language mapping. In this sense, the problem of creating a precise and formal grammar of the EROP language becomes one of translating EROP to Augmented Drools, which is the main topic of this work and will be explored in detail in the upcoming sections. The Contract Compliance Checker, or CCC for short, is the contract compliance monitoring service introduced in [3][4]. 
It is a neutral entity conceptually standing between the interacting parties, and its purpose is to monitor the exchange of events between participating entities and infer whether or not the business operations these events relate to are compliant or non-compliant with a specified contract. The architecture of the CCC is illustrated in Fig. 1, but any further discussion of the inner workings of the CCC will be limited, as it is not the topic of the work presented here.

Fig. 1: Representation of the CCC architecture [5]

For the purposes of this project it is important to note that all smart contracts expressed in EROP or Augmented Drools can be uploaded to the CCC for contract compliance monitoring.

### EROP to Augmented Drools translation

As mentioned in the previous section, EROP maps directly to Augmented Drools, given that the former is derived from the latter. In this case a translation from EROP to Augmented Drools essentially comes down to automating the mapping between the two languages. The techniques common to most types of translations will be discussed and reviewed in detail in the next section; here we give some of the more specific techniques available. As discussed in [30], the translation of rules expressed in a language based on a natural or close-to-natural format to a rule standard can be accomplished by a two-fold mapping, where the extracted natural language is mapped to Java beans, which serve as an intermediary for the translation between the source language and JBoss Drools production rules. The translation technique uses a straightforward verb-and-noun concepts grammar that captures the parameters of the source language, which are then mapped into rules. Although the paper describes natural or close-to-natural language mapping, which requires more effort in comparison to EROP to AD mapping because of the lack of rule structure in the source language, it is interesting to note that an intermediary layer in the form of Java beans is used in order to capture the information required to build an operational rule. The authors of [35] discuss the approach used by object-oriented rule systems such as Drools and ILOG rules. Such systems are built on top of Java vocabularies; in the case of Drools, Java beans are used as facts to represent the domain of the rules and their vocabulary in user applications. Different vocabularies are used by rules through the import declaration, specified inside the rule file. The paper describes the approach used to translate from Drools, using the low-level structure of the language such as beans, to R2ML, which is an XML-based Rule Markup Language. Even though the translation described in the paper goes in the opposite direction to the one desired for the EROP to AD translator, it does highlight the importance and benefits of using an XML structure or language for translation to Drools.

### Summary

The conducted research demonstrates that among the presented contract specification languages, none has the expressive power or the ability to deal with exceptional circumstances arising from business or technical failures as EROP does. All of the presented languages require an extensive technical background and may be daunting for a non-technical person to use. In addition, some of them are merely a notation for expressing contracts, with no definitive means of monitoring the expressed contract for compliance.
EROP, along with the CCC, represents a complete solution for capturing contract requirements and then monitoring and enforcing them. A translator from EROP to AD has to be based on direct mapping and may make use of intermediary representations to hold its data. As presented in the reviewed approaches, JavaBeans-like structures and XML are capable of accomplishing the desired goal and have been used by other similar projects. The next section covers some of the fundamentals of translation and how they are applied in the development of the EROP to AD translator.

## 3 Methodology

The development of an EROP to Augmented Drools translator is a task that requires extensive analysis of current techniques and translation technologies. Our initial aim was to approach the development process incrementally, dividing the overall project into several specific sub-projects that would be united in the end to produce the final solution. During the initial stages of the development process the focus was on analysis and comprehension of the state of the art of the translation scene. The analysis revealed the parts of common translation techniques that would be essential for developing a translator and also gave an insight into the architectural design of the solution. The use of external, third-party libraries was taken into consideration, and the necessity of such libraries is reviewed in greater detail later in this section. The programming language used for the development of the solution is Java, as its implementation dependencies have been reduced to a minimum and it provides the functionality required to achieve the task at hand. The target of the translation, the implementation of the EROP ontology, is also in Java, which makes the choice of the language a logical one, as it promotes consistency throughout the contract compliance monitoring solution [5] developed at Newcastle University.

### Compiler Analysis

Generally, the purpose of programming languages is two-fold: they serve as a notation for describing computations to both machines and people. Other than formally expressing a programmer's intention, they exist for the purpose of bridging the gap between different layers of abstraction: the higher layers that are easier to comprehend and safer to use, and the lower levels that are often more efficient and flexible. A language that is more human-oriented needs to be translated to a language that a computer understands. Such translation software is known as a compiler. A typical compiler breaks the mapping of a source language to the target language into several stages. Most commonly the first of those is the analysis stage: it breaks a source program into pieces and verifies the grammatical structure of the source language on them. The resulting pieces are then used for the creation of an intermediary representation of the source language. The analysis stage's duty is to detect syntactical and semantical compliance or inconsistency. The intermediate representation built by the analysis stage is called a symbol table, and after its creation it is passed to the next stage of the process. The second stage is the synthesis, where a translation to the target language is created using the intermediary representation of the source language. In a sense, the analysis part is the front end and the synthesis is the back end of a compiler.
When reviewed in more detail, the processes of a translation are executed in a sequence of steps (Fig. 2). The initial step of the translation process is carried out by the lexical analyzer. Its purpose is to read the characters making up a source program or a file and use them to create meaningful sequences called lexemes, producing tokens containing the parsed information, which are then used in the syntax analysis stage. To put things into context, an input in the form of the EROP language such as:

POAcceptance in buyer.rights

would be broken down by a lexical analyzer into the following lexemes:

* POAcceptance is a lexeme that would be mapped to the token (id,1), where the 1 points to the position of POAcceptance in the generated symbol table, which contains information such as value and type.
* in and . would be mapped to the tokens (in) and (.), because they are both operations of the language.
* Similarly to POAcceptance, buyer and rights would be mapped to (id,2) and (id,3).

The resulting token mapping would be: (id,1) (in) (id,2) (.) (id,3). The next step of a translation process within a compiler is the syntax analysis, also known as parsing. It uses the tokens created by the lexical analyzer to create a tree-like intermediate representation of the grammatical structure of the source language. Most commonly [19] the intermediary representation is known as a syntax tree, in which each parent node represents an operation and the children of the parent represent the arguments needed to complete the operation. Using the above breakdown of the input characters into tokens, the resulting syntax tree for the given input has a node labelled dot, indicating that the operation must be completed using its child nodes and that the produced result should serve as the right-hand side of the operation labelled in. The next step of the process is the semantic analyzer. It uses the information stored in the symbol table as well as the tree generated by the syntax analyzer in order to check that the source program complies semantically with the definition of the language. One of the most important parts of the semantic analysis is type checking, in other words checking whether the operands are of the appropriate type for the specified operator. For instance, in most programming languages an array is indexed using an integer value; if the compiler detects anything that does not match the expected type, it is supposed to notify the user of the inconsistency. In the case of EROP an example would be the definition of a business operation. As stated in [5], a business operation is defined by a generic string that starts with an uppercase letter. In that sense a string starting with a lowercase letter would be of the wrong type when trying to define a business operation, and therefore the compiler would have to return an error. The step following the semantic analysis is the intermediate code generation. There can be more than one intermediary translation, or it can be expressed in a variety of different forms. The most important properties of the intermediate representation are that it should be easy to produce and easy to translate into the target language. The final step of the compiler's process is the code generation, which takes as input the intermediate representation of the source program and maps it to the target language, where different approaches are used depending on the differences or similarities between the source and target languages.
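As a concrete, deliberately simplified illustration of the tokenization step described above, the following Java sketch splits the EROP fragment POAcceptance in buyer.rights into identifier and operator tokens. It is not part of the translator described in this work; the class name, token names and the whitespace-based splitting are illustrative assumptions only.

```
import java.util.ArrayList;
import java.util.List;

// Illustrative lexer sketch: splits an EROP fragment into identifier and operator tokens.
public class SimpleEropLexer {

    enum TokenType { IDENTIFIER, IN, DOT }

    record Token(TokenType type, String lexeme) { }

    static List<Token> tokenize(String input) {
        List<Token> tokens = new ArrayList<>();
        // Treat '.' as a separate lexeme and split the remainder on whitespace.
        for (String part : input.replace(".", " . ").trim().split("\\s+")) {
            switch (part) {
                case "in" -> tokens.add(new Token(TokenType.IN, part));
                case "."  -> tokens.add(new Token(TokenType.DOT, part));
                default   -> tokens.add(new Token(TokenType.IDENTIFIER, part));
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        // Prints one token per line: IDENTIFIER POAcceptance, IN in, IDENTIFIER buyer, DOT ., IDENTIFIER rights
        tokenize("POAcceptance in buyer.rights")
                .forEach(t -> System.out.println(t.type() + " " + t.lexeme()));
    }
}
```

A real symbol table would additionally record position and type information for each identifier, which is the role the (id,n) tokens play in the walkthrough above.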
The detailed view of a compiler's inner workings has been invaluable to the development and design process of the final solution. It has contributed towards the understanding of how typical language translators work and what the most common aspects in them are. Given the semantic similarities between EROP and Augmented Drools (the former is derived from the latter), a translation process between the two languages does not need to include all of the steps undertaken by a typical compiler; in particular, the code optimization step is redundant. Furthermore, in order to speed up the development process and avoid needless low-level errors and oversights, specialized tools that have efficiently implemented some of the outlined principles can be used.

### Parser Generators

Parser generators are a specific class of software development tools that are able to generate the framework needed for a program to implement a parser from a set of rules called a grammar. A few different methods exist for parsing a given stream of character input, but the two most important ones are top-down and bottom-up analysis algorithms, as they apply to the widest range of input grammars and context-free grammars and are appropriate to use in a parser generator. The use of tools such as parser generators in the creation of compilers and translators can be traced back to the early days of computing, with examples dating back to 1965 [36]. The two major advantages of using parser generators are, firstly, development time: once proficient in writing grammars, using a generated lexer and parser expedites the development process immensely. The second advantage of parser generators is correctness by construction, meaning that the generated parser accepts exactly the language specified in the grammar used to create it [36]. Today a wide range of parser generators exist, all of them employing similar input parsing techniques and differing from one another in terms of style of grammar specification, algorithms used to parse the input, language of the generated parser files, and so on. As mentioned in the development of EROP, a parser generator called ANTLR was considered when thinking about the translation between EROP and Augmented Drools [5]. ANTLR is a parser generator that has a wide range of uses, including reading, processing, executing or translating structured text or binary files. ANTLR [37] supports actions and attributes flexibly, meaning that different actions can be defined in files separate from the grammar, essentially decoupling it from the target language and enabling easier targeting of multiple languages. ANTLR can also be used to generate tree parsers and processors of abstract syntax trees. It uses EBNF as the format of its grammar input and has support for popular IDEs. An additional benefit is that it generates a lexer as well as a parser, and the resulting generated files are in Java, which makes it consistent with AD and the language used for the development of the EROP to AD translator. ANTLR is also widely popular and is used by Twitter for query parsing, processing over 2 billion queries a day, as well as in projects such as Groovy, Hibernate, IntelliJ IDEA and many more.

### Solution Architecture

After reviewing the most common translation techniques and approaches, the architectural design of the EROP to Augmented Drools translator started to emerge. The use of a parser generator would enable more rapid development and allow, to some extent, the reuse of the formal grammar of EROP as specified in [5].
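As a minimal illustration of what driving the generated artefacts from Java looks like, the sketch below wires an ANTLR 4 lexer and parser into a small driver program. The EROPLexer, EROPParser and EROPBaseListener class names, as well as the contract() entry rule, are hypothetical placeholders standing in for whatever ANTLR would generate from an EROP grammar; only CharStreams, CommonTokenStream and ParseTreeWalker are standard ANTLR 4 runtime classes.

```
import org.antlr.v4.runtime.CharStream;
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.tree.ParseTree;
import org.antlr.v4.runtime.tree.ParseTreeWalker;

public class TranslatorDriver {
    public static void main(String[] args) throws Exception {
        // Read an EROP contract file and hand it to the generated lexer.
        CharStream input = CharStreams.fromFileName(args[0]);
        EROPLexer lexer = new EROPLexer(input);                  // hypothetical generated lexer
        CommonTokenStream tokens = new CommonTokenStream(lexer);

        // The generated parser turns the token stream into a parse tree.
        EROPParser parser = new EROPParser(tokens);              // hypothetical generated parser
        ParseTree tree = parser.contract();                      // assumed root rule of the grammar

        // Walk the tree; a listener subclass reacts to rule entry/exit events
        // and can route the parsed values into an intermediary representation.
        ParseTreeWalker.DEFAULT.walk(new EROPBaseListener() { }, tree);
    }
}
```

In the solution described in the implementation section, it is precisely classes reacting to the tree walker entering grammar rules that decide where the parsed contract information is stored.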
The use of ANTLR would also mean that the first three steps of a typical compiler architecture can be accounted for, and there is no need to create separate entities for the desired functionality. As noted earlier, a machine-independent code optimizer would not be practical because of the similarities between the target and source languages, which essentially means that the intermediary code generator can interact directly with the code generator, where the mapping is made and the final result is produced. As discussed in a previous section, an intermediary layer can be accomplished by an implementation like JavaBeans. The disadvantage of using JavaBeans directly is that it supplies nullary constructors for all its subclasses, which means that they are at risk of being instantiated in an invalid state. The problem stems from the fact that a compiler cannot detect such instantiation, which can lead to troublesome debugging and tracing, especially when using a generated parser. Nevertheless, in order to accommodate the intermediary code generation layer of the EROP to Augmented Drools translator, a similar-in-concept collection of Java classes can be implemented that captures the essence of the building blocks of the EROP language. The collection of those classes, as well as any additional classes required to accommodate communication between different layers of the architecture, will be discussed in the implementation section. The initial conceptual design of the translator is depicted in Figure 4.

Figure 4: The conceptual architectural design of an EROP to AD translator

## 4 Implementation

### ANTLR Grammar

A grammar file in ANTLR is simply a file that specifies the syntax and the different constructs of the language and how they connect with one another. On a high level of abstraction, the grammar consists of lexer and parser rules that, once specified, are embedded in the lexical and syntactic analyzers generated by ANTLR. The lexer rules begin with an upper-case letter, as opposed to the parser rules, and are used to tokenize the input. Lexer rules are essentially the fundamental building blocks of a language. Parser rules, on the other hand, are more complex rules that can contain rules themselves as well as tokens characterized as fundamental to the language. As specified in [3], a contract expressed in the EROP language consists of two parts: a declaration section, where all the role players and business operations are defined, and a rule section, where the different rules used for compliance monitoring are captured. From that we can infer that the root structure of the language is a contract file, and everything else is contained within that file. That can be represented in ANTLR as the entry point of any received input, and it would have any number of children depending on what a contract file can contain and what the different constructs in the contract file can contain themselves. To put things into context, Fig. 5 gives a partial visual representation of the structure of a contract file defined in the EROP language. As depicted in the partial representation, a declaration section can have one or more declarations, and each declaration is a business operation, role player or composite obligation declaration. The role player declarations as well as the identifier are specified at the lowest level to give a feeling of what rules sitting at the bottom of the rule hierarchy look like.
Figure 5: Partial representation of EROP grammar

A role player declaration simply consists of the keyword Roleplayer followed by white space and one or more identifiers, ending with a semicolon. The one-or-more quantifier makes it possible to declare multiple role players in a single line. An identifier is simply a string starting with a lowercase letter that may also contain uppercase letters as well as digits. A grammar is an important part of ANTLR, but by itself it does not provide much functionality, because the associated parser is only able to tell us whether an input conforms to the given language specification. In order to build translation applications, or any type of application for that matter, there is a need for the parser to trigger some sort of action whenever it encounters input sequences, phrases or tokens of interest. Fortunately, ANTLR provides two mechanisms that allow the invocation of actions: it automatically generates parse tree listeners and visitors to enable building language applications. A listener is an object that can respond whenever it detects rule entry and exit events triggered by a parse tree walker as it discovers and finishes nodes, which means that ANTLR automatically generates the interfaces for any entry or exit events. The most profound difference between listeners and visitors is that listener methods do not have the obligation to explicitly call methods to walk their children; that gives flexibility in a scenario where only specific parts of the input language should trigger events. The alternative is visitors: they must explicitly activate visits to their child nodes in order to keep the traversal of the tree going. In the case of an EROP to Augmented Drools translator the alternative method makes much more sense, as parsing of all the input information is needed. The provided functionality allows for triggering specific events when a rule from the grammar is entered. There is still the need to implement specific actions when such events occur. Given that re-usability is an important part of the software development process, it would make sense to reuse common concepts instead of creating duplication. Identifier is an example of a commonly repeating grammar structure that can be reused. It is used to describe business operations, composite obligations as well as role players, not to mention that it can occur not only in the declaration section but also in the rule set when role players and business operations are referenced, or their ROP sets manipulated. In order to allow re-usability while keeping the ability to distinguish which part of the grammar common grammatical structures refer to, we have implemented two additional Java classes that serve as a buffer between the ANTLR parser and the population classes that create the intermediary representation of the EROP language.
Those classes are Variables Flagger and Variables Memory. Their purpose is, respectively, to activate various flags whenever the ANTLR tree walker enters different rules, and then to use those flags in order to make decisions on where the contents of the parsed file should be stored. The separation of communication between the ANTLR parser and the intermediary representation of EROP follows good software development guidelines and practices, as it encapsulates and abstracts away the logic needed to make the decisions about where the parsed information goes. It also helps with testing and contributes to the modular design approach, which is one of the development aims of the project.

### The Rule Structure Classes

As discussed in the analysis section, the intermediate code generator of the translator can be accomplished by custom Java classes inspired by various techniques. The resulting Java classes have to capture the structure of the corresponding language constructs and any information they hold that is needed for linking the intermediary representation to the final target language. The resulting classes and their functions are as follows:

* EventMatchCondition: represents the conditions an event match has to satisfy in order for the event to be triggered. It follows the structure field operator value, and as specified in the full language grammar (included in the appendix) the field can be any one of botype/outcome/originator/responder, as specified in [3].
* Constraint: the constraint class is a generalized collection of any constraints that can be specified on a rule. It can hold any of the following: RopConstraint, capturing the presence or absence of particular business operations or composite obligations in a role player's ROP sets; HistoricalConstraint, used to condition the triggering of rules depending on the presence or absence of certain events or the number of times a specified event occurred; TimeDirectComparison and TimePartialComparison, constraints used to enforce additional checks on the timestamp of a given event (TimeDirectComparison checks whether a timestamp is the same as, before or after a specified point in time, while TimePartialComparison checks whether an event timestamp is within a given range of hours, minutes, years, days or months); and OutcomeConstraint, used to specify a constraint on the outcome of an event, such as Success, Fail, BizFail, etc.
* RhsAction: represents the right-hand side of a rule, i.e. anything between the then and end parts of a rule. It can contain conditional statements, outcome or pass actions, as well as any manipulation of a role player's ROP set or the outcomes of a business operation.
* IfStatement: a conditional structure that is used to capture additional constraints in the RHS of a rule. It comes with its own left- and right-hand side, and even though it doesn't alter the EROP language's structure it allows for a more natural and productive style of writing.
* AddOrRemAction: used to gather information about any manipulation of a role player's ROP sets, such as adding or removing Rights/Obligations/Prohibitions.
* Rule: the root class in the rule class structure architecture, containing all the information required to represent and recreate a rule.

Figure 6 shows the architectural hierarchy of the classes in the Rule Structure and how they interact with one another.
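As a flavour of what such an intermediary class can look like, the sketch below holds a single event match condition of the form field operator value and renders it in the Augmented Drools event-match style (where, as in the translated contract shown later, the EROP fields botype and outcome map to the type and status fields of the Event fact). The class is a simplified illustration written for this description, not the actual implementation.

```
// Illustrative sketch of an intermediary-representation class for one
// event match condition of the form: field operator value (e.g. botype == BUYREQ).
public class EventMatchCondition {
    private final String field;     // one of botype, outcome, originator, responder
    private final String operator;  // typically "=="
    private final String value;     // e.g. BUYREQ, success, buyer

    public EventMatchCondition(String field, String operator, String value) {
        this.field = field;
        this.operator = operator;
        this.value = value;
    }

    // Renders the condition as it would appear inside the Augmented Drools
    // Event(...) match, e.g. type == "BUYREQ" for the EROP field botype.
    public String toDroolsFragment() {
        String droolsField = switch (field) {
            case "botype"  -> "type";
            case "outcome" -> "status";
            default        -> field;
        };
        return droolsField + " " + operator + " \"" + value + "\"";
    }
}
```

A Rule object would then aggregate a list of such conditions together with constraints and right-hand-side actions before the code generator emits the final Augmented Drools rule.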
Figure 6: UML Diagram

## 5 Evaluation

In order to critically examine and determine the correctness of the developed tool, as well as any additions, enhancements and/or omissions that need to be made to either the EROP language or the Augmented Drools implementation, we present a case study in which the concepts of a contract represented in the latest version of Augmented Drools are extracted and serve as key points that are then used to express the same contract in EROP. Once the contract is in EROP it can be run through the translator, and the produced translation can be evaluated against the original contract expressed in Augmented Drools. The two main aspects on which the translation will be judged are the ability to express concepts and the correctness of the produced translation with regard to the original contract. The contract used for the case study is between two parties, which will be referred to simply as Buyer and Seller. The original contract in its entirety, expressed in Augmented Drools, can be found in the appendix section. The clauses of the contract extracted from the original are as follows:

C1: The buyer has the right to submit a buy request; having sent a buy request, the buyer's right to submit any further ones is revoked until the current one is resolved. At the same time, the seller gains an obligation to either accept or reject the received buy request.

C2: In the event of one or more business failures during the buy request, the first business failure should be noted, and any further business failures should reset the ROP sets of the role players.

C3: Having received a rejection of a buy request, the pending obligation is satisfied, and the buyer can have its right to send additional buy requests restored.

C4: In the event of one or more business failures during the rejection of the buy request, the first business failure should be noted, and any further business failures should reset the ROP sets of the role players.

C5: After receiving an acceptance of a buy request from the seller, the pending obligation has been satisfied and the buyer receives a new obligation to pay the seller as well as the right to cancel the order.

C6: In the event of one or more business failures during the acceptance of the buy request, the first business failure should be noted, and any further business failures should reset the ROP sets of the role players.

C7: After a payment has been received, the buyer has satisfied its obligation; he loses the obligation to pay as well as the right to a cancellation and regains his right to submit further buy requests.

C8: In the event of one or more business failures during payment of the buy request, the first business failure should be noted, and any further business failures should reset the ROP sets of the role players.

C9: After the buyer sends a cancellation, he loses the obligation to pay and the right to submit further cancellations.

C10: In the event of one or more business failures during cancellation of the buy request, the first business failure should be noted, and any further business failures should reset the ROP sets of the role players.
Two role players are defined in the contract along with the following business operations: BuyRequest, Payment, BuyConfirm, BuyReject, and Cancelation. The clauses of the contract, as specified above, define how the ROP sets of the role players change during the interaction. A file in EROP starts with the definition of the role players, business operations and composite obligations used in the contract (Fig. 8).

```
roleplayer buyer, seller;
businessoperation BuyRequest, Payment, BuyConfirm, BuyReject, Cancelation;
compoblig ReactToBuyRequest(BuyConfirm, BuyReject);
```
Fig. 8.

The second part of the contract contains the definition of the rules over the specified role players and the interactions between them. The rule for a received Buy Request can be derived from C1 of the contract. It occurs when a successful buy request is received (Fig. 9).

```
rule "BuyRequestReceived"
when
  e matches (botype == BUYREQ, originator == buyer, responder == store, outcome == success)
  BuyRequest in buyer.rights
then
  buyer.rights -= BuyRequest(seller)
  seller.obligs += ReactToBuyRequest(buyer, "01-01-2016 12:00:00")
end
```
Fig. 9.

The second rule can be derived from C2, and as specified it requires specific actions to be executed depending on a certain condition. The rule can be modelled using a conditional structure in EROP (Fig. 10).

```
rule "BuyRequestBnessFailure"
when
  e matches (botype == BUYREQ, originator == buyer, responder == store, outcome == tecFail)
  BuyRequest in buyer.rights
then
  if (BuyRequest.BizFail == false) then
    BuyRequest.BizFail == true
  else
    reset buyer
    reset seller
  endif
end
```
Fig. 10.

The third rule of the contract is derived from C3 and is triggered whenever the seller rejects a buy request from the buyer (Fig. 11).

```
rule "BuyRequestRejected"
when
  e matches (botype == BUYREJ, originator == store, responder == buyer, outcome == success)
  ReactToBuyRequest in seller.obligs
then
  seller.obligs -= ReactToBuyRequest(buyer)
end
```
Fig. 11.

The fourth rule of the contract is directly derived from C4 and is very similar in definition to BuyRequestBnessFailure. It occurs when a business failure happens during a rejection of a buy request (Fig. 12).

```
rule "BuyRequestRejectedFailures"
when
  e matches (botype == BUYREJ, originator == store, responder == buyer, outcome == tecFail)
  ReactToBuyRequest in seller.obligs
then
  if (BuyConfirm.BizFail == false) then
    BuyConfirm.BizFail == true
  else
    reset buyer
    reset seller
  endif
end
```
Fig. 12.

The fifth rule defines what happens when a successful confirmation of the buy request is received; it is derived from C5 (Fig. 13).

```
rule "BuyRequestConfirmation"
when
  e matches (botype == BUYCONF, originator == seller, responder == buyer, outcome == success)
  ReactToBuyRequest in seller.obligs
then
  seller.obligs -= ReactToBuyRequest(buyer)
  buyer.obligs += Payment(seller)
  buyer.rights += Cancellation(seller)
end
```
Fig. 13.

The sixth rule, similarly to the second and fourth rules, describes what happens in the event of failures during the confirmation (Fig. 14).

```
rule "BuyRequestConfirmationFailures"
when
  e matches (botype == BUYCONF, originator == seller, responder == buyer, outcome == tecFail)
  ReactToBuyRequest in seller.obligs
then
  if (BuyConfirm.BizFail == false) then
    BuyConfirm.BizFail == true
  else
    reset buyer
    reset seller
  endif
end
```
Fig. 14.

The seventh rule, derived from C7, captures what occurs in the event of a successful payment (Fig. 15).

```
rule "PaymentReceived"
when
  e matches (botype == BUYPAY, originator == buyer, responder == store, outcome == success)
  Payment in buyer.obligs
then
  buyer.obligs -= Payment(seller)
  buyer.rights -= Cancellation(seller)
end
```
Fig. 15.

The eighth rule describes what happens in the event of exceptional circumstances while receiving a payment; it is derived from C8 (Fig. 16). The ninth rule captures what happens whenever a cancellation is received (Fig. 17).
```
rule "PaymentReceivedBFailures"
when
  e matches (botype == BUYPAY, originator == buyer, responder == store, outcome == tecFail)
  Payment in buyer.obligs
then
  if (Payment.BizFail == false) then
    Payment.BizFail == true
  else
    reset buyer
    reset seller
  endif
end
```
Fig. 16.

```
rule "BuyCancellation"
when
  e matches (botype == BUYCANC, originator == buyer, responder == store, outcome == success)
  Cancelation in buyer.rights
then
  buyer.rights -= Cancellation(seller)
  buyer.obligs -= Payment(seller)
end
```
Fig. 17.

The last rule of the contract expresses what is to happen whenever exceptional circumstances occur during a buy cancellation (Fig. 18).

```
rule "CancellationBFailures"
when
  e matches (botype == BUYCANC, originator == buyer, responder == store, outcome == tecFail)
  Cancellation in buyer.rights
then
  if (Cancelation.BizFail == false) then
    Cancelation.BizFail == true
  else
    reset buyer
    reset seller
  endif
end
```
Fig. 18.

### Outcome and Role player Constraints

We will start by discussing the ability of EROP to express concepts present in the latest version of Augmented Drools, as seen in the original of the contract presented in the appendix section. Clauses 2, 4, 6, 8 and 10 of the contract require the ability to check whether a certain business operation has been marked as a business failure; this can be characterized as an operation that checks, for example, if the business operation has failed. The functionality to make such checks was not present in the original version of EROP; to accommodate it, we have amended the original outcome constraint to have the following syntax and role:

\[BusinessOperation.BizFail == true/false \tag{1}\]

The construct can be used both in the left-hand side and the right-hand side of a rule, with the same syntax but a different meaning. When used in the left-hand side it is placed after the event match condition, in the same way as it was introduced originally. Its role when placed in the left-hand side of a rule is to check whether the specified business action has happened; in other words, it is a Boolean condition. When used in the right-hand side of a rule it serves not as a Boolean condition but rather as a way to specify that the condition happened or did not happen. In other words, when used in the right-hand side of a rule it serves as a setter. The change makes it possible to express clauses 2, 4, 6, 8 and 10, and it supersedes the no-longer-needed version of outcome constraints as expressed in the original version of EROP in [5].

```
rule "Sample"
when
  e matches (botype == BUYCANC, originator == buyer, responder == store)
  e.outcome == success
then
end
```
Fig. 19.

The requirement of the latest version of Augmented Drools that all the event match condition fields must be specified also makes the role player constraint obsolete. The syntax in figure 20 is no longer needed and can be removed from the language. Like outcome constraints, role player constraints were used to add additional Boolean conditions after the event match block when some of the event match condition fields were omitted; given that the methods used to check for that functionality have been removed from the implementation of Augmented Drools, this is no longer possible.

### Resetting Rop sets

The contract in Augmented Drools has another feature that is not present in the original specification of the EROP language.
It is also captured by clauses 2, 4, 6, 8 and 10, and it gives the ability to reset the ROP set of a given role player. This is needed to keep the ROP sets of role players consistent in the case of certain exceptional situations such as technical or business failures. To accommodate that functionality we have added the keyword reset to the grammar of EROP, which enables the contract writer to reset the ROP sets of a given role player. It can only be used in the right-hand side of a rule, similarly to ROP set manipulation. A sample of the operation is shown in figure 21.

### Case study evaluation

When the contract shown in the previous section is input into the translator, it produces the following results. A rule file in Augmented Drools, like one in EROP, starts with a declaration section where all the objects and entities used in the file are declared. After some Java statements to import the classes of the EROP ontology, there is a section to declare global identifiers such as Role Players, Composite Obligations and Business Operations. Augmented Drools also needs instances of some other EROP ontology classes, such as the Relevance Engine and the Event Logger, for reference in the rules. The translator also automatically generates ROP sets for each Role Player specified in the declaration section (conforming to functional requirements FR1, FR2, FR3 and FR4). The translated declaration section looks as follows:

```
package BuyerStoreContractEx;
import uk.ac.ncl.erop.*;
import uk.ac.ncl.logging.CCCLogger;
global RelevanceEngine engine;
global EventLogger logger;
global RolePlayer buyer;
```

The translator correctly generates instances of the Relevance Engine and Event Logger as well as the two Role Players with their corresponding ROP sets and all the specified Business Operations (operation names start with lower case since they are Java objects and must follow Java style rules, as specified in FR5). The syntax used to define rules in Augmented Drools is the same as in EROP, given that the latter is derived from the former, and has the following structure:

```
rule RuleName
when
  conditions
then
  actions
end
```

The translator produces the following translation of the first two rules (figure 22).

```
rule "BuyRequestReceived"
when
  $e: Event(type=="BUYREQ", originator=="buyer", responder=="store", status=="success")
  eval(ropBuyer.matchesRights(buyRequest))
then
  ropBuyer.removeRight(buyRequest, seller);
  BusinessOperation[] bos = {buyConfirm, buyReject};
  ropSeller.addObligation("ReactToBuyRequest", bos, buyer, "01-01-2016 12:00:00");
end
```
Fig. 22.

The placeholder event variable is correctly translated to $e, and the event match conditions are specified in the Augmented Drools format. Constraints on event attributes are imposed outside the event match using the eval keyword as well as the methods from the Augmented Drools implementation (as specified in FR6). The right-hand side of the rule is translated correctly, with manipulation of ROP sets going through method calls on the generated ROP sets of the role players. As expected, in the case of composite obligations an extra line of code is needed to add a new composite obligation. In the above translation a composite obligation called bos is created, and it consists of two other business operations. The second EROP rule, derived from clause 2, is translated to two rules in Augmented Drools because of the conditional structure used (figure 23).
The single rule in EROP is correctly broken down into two rules in Augmented Drools (as specified in FR7): the first one consists of the conditions of the if statement added to the left-hand side of the rule and the then action added to the right-hand side of the rule. This rule also demonstrates the changed outcome constraints at work, correctly matching them as Boolean conditions and setters in the appropriate places (LHS/RHS). The second rule is produced by adding the negated conditions of the if statement to the left-hand side of the rule, while adding the else action to the right-hand side of the rule. It also demonstrates the translation of the newly added reset construct that allows a contract writer to reset the ROP sets of a given role player.

```
rule "BuyRequestBnessFailureIfThen"
when
  $e: Event(type == "buyreq", originator == "buyer", responder == "store", status == "tecfail")
  eval(buyRequest.getBusinessFailure() == false)
  eval(ropBuyer.matchesRights(BuyRequest))
then
  buyRequest.setBusinessFailure(true);
end

rule "BuyRequestBnessFailureIfFalse"
when
  $e: Event(type == "buyreq", originator == "buyer", responder == "store", status == "tecfail")
  eval(!buyRequest.getBusinessFailure() == false)
  eval(ropBuyer.matchesRights(BuyRequest))
then
  ropBuyer.reset();
  ropSeller.reset();
end
```
Fig. 23.

The rest of the translation produces similar results, correct for the constructs used. The full translation is attached in the appendix section and can be compared against the original of the contract, which is also included, to verify its correctness.

## 6 Conclusions

The developed solution can generate translations; however, there are improvements that can be made to enhance its capabilities. One of these is adding a more descriptive error handling mechanism: currently, whenever the translator tries to parse a file, it expects the input to follow a certain format as specified in the grammar. If it does not find what it expects, error messages are presented that show the line of the file and the character position at which the parser found an unexpected input. The error messages could be enhanced and made more descriptive and user friendly. Currently the translator is a stand-alone entity, separate from the CCC. It is worth exploring the cost of integrating it with the CCC and the amount of work required to do so. Another improvement would be enhancing the translator so that it can translate EROP to other rule languages. Even without any of the suggested features, the produced solution is capable of dealing with any of the EROP language constructs and their translations.
2307.13466
Integrating processed-based models and machine learning for crop yield prediction
Crop yield prediction typically involves the utilization of either theory-driven process-based crop growth models, which have proven to be difficult to calibrate for local conditions, or data-driven machine learning methods, which are known to require large datasets. In this work we investigate potato yield prediction using a hybrid meta-modeling approach. A crop growth model is employed to generate synthetic data for (pre)training a convolutional neural net, which is then fine-tuned with observational data. When applied in silico, our meta-modeling approach yields better predictions than a baseline comprising a purely data-driven approach. When tested on real-world data from field trials (n=303) and commercial fields (n=77), the meta-modeling approach yields competitive results with respect to the crop growth model. In the latter set, however, both models perform worse than a simple linear regression with a hand-picked feature set and dedicated preprocessing designed by domain experts. Our findings indicate the potential of meta-modeling for accurate crop yield prediction; however, further advancements and validation using extensive real-world datasets is recommended to solidify its practical effectiveness.
Michiel G. J. Kallenberg, Bernardo Maestrini, Ron van Bree, Paul Ravensbergen, Christos Pylianidis, Frits van Evert, Ioannis N. Athanasiadis
2023-07-25T12:51:25Z
http://arxiv.org/abs/2307.13466v1
# Integrating processed-based models and machine learning for crop yield prediction

###### Abstract

Crop yield prediction typically involves the utilization of either _theory-driven_ process-based crop growth models, which have proven to be difficult to calibrate for local conditions, or _data-driven_ machine learning methods, which are known to require large datasets. In this work we investigate potato yield prediction using a hybrid meta-modeling approach. A crop growth model is employed to generate synthetic data for (pre)training a convolutional neural net, which is then fine-tuned with observational data. When applied in silico, our meta-modeling approach yields better predictions than a baseline comprising a purely data-driven approach. When tested on real-world data from field trials (n=303) and commercial fields (n=77), the meta-modeling approach yields competitive results with respect to the crop growth model. In the latter set, however, both models perform worse than a simple linear regression with a hand-picked feature set and dedicated preprocessing designed by domain experts. Our findings indicate the potential of meta-modeling for accurate crop yield prediction; however, further advancements and validation using extensive real-world datasets is recommended to solidify its practical effectiveness.

## 1 Introduction

Crop yield prediction plays an important role in ensuring food security, optimizing agricultural practices, and managing risks in the farming industry. By accurately estimating the expected crop yields, stakeholders ranging from farmers and policymakers to traders and researchers can make informed decisions and take proactive measures. In a world facing increasing population growth and climate uncertainties, the ability to predict crop yields has become indispensable for addressing global food challenges and fostering sustainable agricultural practices. In crop yield prediction, _theory-driven_ process-based crop models and _data-driven_ machine learning models are usually seen as opposite approaches (Leng and Hall, 2020; Paudel et al., 2022; Maestrini et al., 2022). In this work, we investigate the use of a meta-modeling approach to combine the best of these two worlds. The integration of the two approaches may help overcome their individual limitations: for process-based models, the limited set of processes included in the model (van Ittersum et al., 2013) and their difficulty in adapting to local conditions; and for data-driven models, the need for large, variable and orthogonal datasets, which are often unavailable in the agricultural context. One common method of integrating data-driven and process-based approaches is through the development of a meta-model (Wallach et al., 2019; Pylianidis et al., 2022). The workflow to create a metamodel is typically the following: a synthetic dataset is created using a process-based model fed with different inputs representative of the conditions where the model will be used, and then a data-driven model is fit on the synthetic dataset (Razavi et al., 2012; Pylianidis et al., 2022). Usually only a subset of the input (e.g. weather, management, soil texture) and a subset of the output (e.g. yield, or yield and leached N) is retained in the metamodel.
To align model predictions with local conditions, crop yield prediction approaches using process-based crop growth models typically involve a tedious calibration process (Seidel et al., 2018). Calibration of these models can be challenging due to the complexity of cropping systems, spatial and temporal variability, limited data availability, and parameter sensitivity. Calibrating a data-driven method is often considered easier compared to calibrating a process-based model. For data-driven models, transfer learning techniques, such as fine-tuning, can be leveraged to adapt a generic model to local conditions. In this work, we compare three methods for predicting potato yield, quantified as fresh tuber weight at harvest time: (1) a process-based crop growth model, in our case Tipstar (Jansen, 2008), (2) a (fine-tuned) metamodel, pre-trained with synthetic data generated by Tipstar, and (3) a purely data-driven model, not (pre)trained with Tipstar.

## 2 Materials and methods

A schematic overview of the experimental setup, data flow and model development is shown in Figure 1.

### Models

#### 2.1.1 Process-based crop growth model

Tipstar (Jansen, 2008) is a crop growth model that simulates potato growth under water- and nitrogen-limited conditions with daily time steps. The inputs to the model are crop management (planting, fertilization, and irrigation), weather (solar radiation, rainfall, maximum and minimum air temperature, wind), and soil characteristics (texture, organic matter percentage, van Genuchten parameters (van Genuchten, 1980)). The model was not calibrated for this specific setup (i.e., no model parameters were adjusted to fit the data of this study), but we re-used default parameters that were derived from historical data from the same country (i.e., the Netherlands).

#### 2.1.2 Metamodel

We developed a metamodel of Tipstar by creating a synthetic dataset of 86,000 entries, which was then used to pretrain a convolutional neural network. The synthetic dataset was created by taking the full factorial of the model inputs weather (7 locations, 32 years) and soil (32 soil types; Wosten et al., 2013). For each of the resulting 7,168 combinations we generated 12 simulations using random sampling while varying the following model inputs: nitrogen fertilization, sowing date, irrigation, maximum rooting depth, and cultivar earliness. The metamodel was constructed as a multi-stream (n=3) convolutional neural network. The first stream processes the temporal data (i.e. solar radiation, rainfall, maximum and minimum air temperature, cumulative irrigation and cumulative nitrogen input) with two 1D convolutions (number of filters 20 and 7, kernel sizes 3 and 2), each followed by an average pooling (5 and 5). The second stream processes the scalars (i.e. maximum rooting depth, sowing day of year and cultivar earliness) with two dense layers (20 and 20). The (optional) third stream processes the soil characteristics (i.e., clay, loam, organic matter percentage, soil moisture at saturation, and the Van Genuchten parameters: \(\alpha\), \(\lambda\) and \(n\) (van Genuchten, 1980)) with a 1D convolution (number of filters 5, kernel size 5) followed by an average pooling (24). The three streams are all flattened and concatenated, and subsequently fed to three dense layers (25, 5 and 1).

Figure 1: Schematic overview of experimental setup, data flow and model development

#### 2.1.3 Data-driven model

As a baseline for our model comparisons we trained a purely data-driven model.
This model was not pretrained with synthetic data, but was trained with observational data. The model had the same architecture as the metamodel. (Pre)training of the metamodel and the data-driven model was done with mean squared error (MSE) as the loss function. We used the ADAM optimizer with an initial learning rate of 0.001. Training was monitored on a hold-out validation set. To prevent overfitting, we used early stopping (min_delta=0.001, patience=20) and learning rate reduction (factor=0.5, min_delta=0.001, patience=10).

### Experiments

We conducted two experiments to investigate the merits of our meta-modeling approach.

#### 2.2.1 Transfer learning on synthetic data

The first experiment served as a proof-of-concept of our transfer learning approach, and was done with synthetic data only. We selected "soil" as our domain of interest, since, in general, potato yield depends substantially on the characteristics of the soil (Porter et al., 1999). We divided the synthetic dataset into two subsets; the first set (i.e., source domain) contained the simulations on peat soils (category 2xx in BOFEK 2012 (Wosten et al., 2013)) and the second (i.e., target domain) the sandy soils (category 3xx in BOFEK 2012 (Wosten et al., 2013)). We confirmed that yields differed between the two soil types, which was found to be partially caused by a different yield response to water availability. A metamodel (excluding the soil input stream) was pre-trained on the peat soil dataset (n=10k), and then fine-tuned with the sandy soil dataset. For the fine-tuning we chose to freeze all layers of the network except for the last two layers, as this approach yielded slightly superior results compared to alternatives such as training all layers, and training the first layers only. For comparison, we trained a data-driven model with the same architecture as the metamodel. The data-driven model was exclusively trained with the sandy soil dataset. Both models were evaluated on a hold-out set (n=17k) with sandy soils. Transfer learning is especially instrumental in cases where there is a limited availability of data from the target domain. We simulated such a scenario by limiting the size of the dataset available for fine-tuning; we evaluated three different levels of size, n=50, 200, and 1000 respectively.

#### 2.2.2 Yield prediction on real-world data

The second experiment was done with two real-world datasets: one from field trials and one from commercial fields. The field trial dataset (n=303, 1994-2003) came from a collection of 36 potato experiments carried out by Wageningen University and Research. The factors that were varied in the experiments were cultivar, nitrogen fertilization rates and timings, irrigation rates and timings, sowing date, and planting density. The field trials were conducted under well-controlled conditions, with e.g. effective pest management. The commercial field dataset (n=77, 2015-2020) was collected from an arable farm located near Wageningen University and Research. As the primary focus of the commercial cultivation was profit rather than scientific research, crop management was carried out based on standard practices aiming to achieve an optimal balance between yield and costs. A metamodel was pretrained on the synthetic dataset, and subsequently fine-tuned with the observational dataset. To prevent data leakage, we excluded the years in which the experiments of the potato trials dataset were performed from the synthetic dataset.
For comparison, we trained a data-driven model with the same architecture as the metamodel. Because the observational dataset was relatively modest in size, we used leave-one-out cross-validation for model development and testing. As weather is a major determinant of final yield, we used weather years as the criterion for the splits. To assess the stability of the training, we replicated the trainings three times using different random seeds. As an additional baseline, representing one of the most commonly used data-driven approaches in yield prediction, we trained a simple linear regression model with a hand-picked feature set (i.e. earliness, sowing date, precipitation, and average daily temperature) selected by domain experts. The time series variables (i.e., precipitation and average daily temperature) were preprocessed by averaging over the period from May to August.

## 3 Results

We (pre)trained a metamodel for potato yield prediction, using synthetic data obtained from the process-based crop growth model Tipstar. The obtained metamodel was able to reproduce Tipstar with an RMSE of 5.1 fresh tonne/ha and \(r=0.95\) on a hold-out set (n=13,000) (see Fig. 2).

### Transfer learning on synthetic data

In our first experiment we investigated the merits of incorporating fine-tuning in the training process of our metamodel, to cover for domain shifts. As an alternative approach we included a purely data-driven baseline that was not pretrained with the crop growth model. Table 1 reports the performance of the models as a function of the fine-tuning set size. When target domain data availability is limited, the best results are obtained with the fine-tuned metamodel. These results indicate that (1) fine-tuning improves the performance of a pretrained metamodel, and (2) pretraining with synthetic data obtained from a process-based crop model is effective, even when that data is collected from a source domain that differs from the target domain.

### Yield prediction on real-world data

In our second experiment we evaluated our metamodel approach on real-world data. Figures 3 and 4 show scatter plots of the predicted and the observed yields for both the crop growth model and the metamodel on the field trial set (n=303) and the commercial field set (n=77), respectively. In general, predictions of both models are better in the field trial set than in the commercial field set. While for the field trial set the predictions of the crop growth model are, on average, in line with the observed yields, for the commercial fields the crop growth model systematically underestimates the yield. This emphasizes the necessity of calibrating a crop growth model prior to its application to real-world data. The (fine-tuned) metamodel, on the contrary, exhibits no systematic bias in either set. Yet the metamodel has a relatively high scatter. As a reference, we trained a purely data-driven model (i.e., we did not use the crop growth model for pretraining) with the same architecture as the metamodel. With an RMSE=15.80 and r=0.02 on the field trial set, and RMSE=14.10 and r=0.35 on the commercial set, the purely data-driven model showed poor performance. Arguably the complexity of the model was excessive considering the limited number of available training samples. We also trained a significantly less complex model, specifically a linear regressor, for which the feature set was hand-picked by domain experts, with dedicated preprocessing.
This model yielded an RMSE of 8.93 and r=0.64 on the field trial set, and an RMSE of 9.75 and r=0.72 on the commercial field set, outperforming both the crop growth model and the metamodel on the latter dataset.

\begin{table} \begin{tabular}{l l c c c c} \hline \hline metric & model & \multicolumn{4}{c}{fine-tuning set size} \\ & & \(0^{a}\) & 50 & 200 & 1,000 \\ \hline \(r\) & Metamodel & _0.884_ & 0.899 & 0.905 & 0.908 \\ & Data-driven\({}^{b}\) & N/A & 0.767 & 0.790 & 0.902 \\ RMSE & Metamodel & _10.3_ & 7.6 & 7.4 & 7.2 \\ & Data-driven\({}^{b}\) & N/A & 11.2 & 11.0 & 7.5 \\ \hline \multicolumn{6}{l}{\({}^{a}\)_Exclusively pretrained (on source domain)_} \\ \multicolumn{6}{l}{\({}^{b}\)_Exclusively trained on fine-tuning set_} \\ \end{tabular} \end{table} Table 1: Model performance as a function of fine-tuning set size

\begin{table} \begin{tabular}{l l c c} \hline \hline metric & model & Trial & Commercial \\ \hline \(r\) & Crop growth model & 0.70 & 0.63 \\ & Metamodel & 0.60 & 0.39 \\ & Data-driven & 0.02 & 0.35 \\ & Linear regression & 0.64 & 0.72 \\ RMSE & Crop growth model & 12.98 & 24.90 \\ & Metamodel & 9.40 & 14.10 \\ & Data-driven & 15.80 & 14.10 \\ & Linear regression & 8.93 & 9.75 \\ \hline \hline \end{tabular} \end{table} Table 2: Model performance on real-world datasets

Figure 3: Fitness of the crop growth model and the metamodel on the field trial set (n=303). The number in parentheses represents the standard deviation of the three replicates of the metamodel trainings.

Figure 2: Fitness of the metamodel on the (hold-out) synthetic dataset (n=13,000; RMSE=5.1, \(r=0.95\))

## 4 Discussion and conclusion

In this work, we investigated potato yield prediction using a meta-modeling approach that integrates a _theory-driven_ process-based crop growth model into a _data-driven_ machine learning method. We found that, in silico, our meta-model yields better predictions than a purely data-driven approach. When tested on real-world data, comprising a field trial set and a commercial field set, the metamodel yields competitive results with respect to the crop growth model. In the latter set, which was characterized by a small sample size, however, a simple linear regression model with a hand-picked feature set, representing one of the most commonly used data-driven approaches in yield prediction, outperformed both the crop growth model and the metamodeling approach. Our results indicate a benefit of pretraining a machine learning model with synthetic data obtained from a crop growth model, rather than using real-world data only. In our in silico experiments, pretrained models fine-tuned with only 50 data points had similar performance to data-driven models using 1,000 data points. This is important for data-poor domains such as agriculture. Also, training data exclusively obtained from a real-world setting may lack important contrasts in the input space, as, for example, especially in commercial settings, management practices are typically standardised. In this context, employing synthetic data may be seen as a data augmentation strategy that prevents the model from exploiting spurious features, which is especially relevant in models with high complexity. Model over-complexity combined with a small, low-contrast training set may be one of the reasons why our purely data-driven model did not perform well in the real-world datasets. Pretraining our model with synthetic data turned out to be a successful strategy.
It should be noted though that a simple approach, using an expert designed, regression model may still be preferred in a real-world setting. We conclude that meta-modeling has potential for accurate crop yield prediction, yet further development and validation on large real-world datasets is recommended. ## 5 Impact Predicting crop yields is crucial for ensuring food security, optimizing agricultural practices, and managing risks in the farming industry. In crop yield prediction, theory-driven process-based crop models and data-driven machine learning models are usually seen as opposite approaches. In this work, we explore the utilization of a meta-modeling approach that combines the strengths of both these approaches. By integrating these two methods, we aim to overcome their respective limitations. Process-based models often suffer from a restricted set of included processes, while data-driven models rely on large, diverse, and independent datasets that are often scarce in the agricultural domain. Our findings suggest the potential of meta-modeling for accurate crop yield prediction, and may as such facilitate the uptake of machine learning in agriculture. ## Acknowledgements We are grateful to three anonymous reviewers for their insightful comments. This work was partially supported by the Dutch Ministry of Agriculture, Nature and Food Quality project Data driven and High Tech (KB-38-001-002), the Wageningen University and Research Investment Theme _Digital Twins_, and the European Union Horizon Research and Innovation program (Grant #101070496, Smart Droplets).
2307.14309
Can Cold Jupiters Sculpt the Edge-of-the-Multis?
Compact systems of multiple close-in super-Earths/sub-Neptunes ("compact multis") are a ubiquitous outcome of planet formation. It was recently discovered that the outer edges of compact multis are located at smaller orbital periods than expected from geometric and detection biases alone, suggesting some truncation or transition in the outer architectures. Here we test whether this "edge-of-the-multis" might be explained in any part by distant giant planets in the outer regions ($\gtrsim 1$ AU) of the systems. We investigate the dynamical stability of observed compact multis in the presence of hypothetical giant ($\gtrsim 0.5 \ M_{\mathrm{Jup}}$) perturbing planets. We identify what parameters would be required for hypothetical perturbing planets if they were responsible for dynamically sculpting the outer edges of compact multis. "Edge-sculpting" perturbers are generally in the range $P\sim100-500$ days for the average compact multi, with most between $P\sim200-300$ days. Given the relatively close separation, we explore the detectability of the hypothetical edge-sculpting perturbing planets, finding that they would be readily detectable in transit and radial velocity data. We compare to observational constraints and find it unlikely that dynamical sculpting from distant giant planets contributes significantly to the edge-of-the-multis. However, this conclusion could be strengthened in future work by a more thorough analysis of the detection yields of the perturbing planets.
Nicole Sobski, Sarah C. Millholland
2023-07-26T17:19:15Z
http://arxiv.org/abs/2307.14309v1
# Can Cold Jupiters Sculpt the Edge-of-the-Multis? ###### Abstract Compact systems of multiple close-in super-Earths/sub-Neptunes ("compact multis") are a ubiquitous outcome of planet formation. It was recently discovered that the outer edges of compact multis are located at smaller orbital periods than expected from geometric and detection biases alone, suggesting some truncation or transition in the outer architectures. Here we test whether this "edge-of-the-multis" might be explained in any part by distant giant planets in the outer regions (\(\gtrsim 1\) AU) of the systems. We investigate the dynamical stability of observed compact multis in the presence of hypothetical giant (\(\gtrsim 0.5\)\(M_{\rm Jup}\)) perturbing planets. We identify what parameters would be required for hypothetical perturbing planets if they were responsible for dynamically sculpting the outer edges of compact multis. "Edge-sculpting" perturbers are generally in the range \(P\sim 100-500\) days for the average compact multi, with most between \(P\sim 200-300\) days. Given the relatively close separation, we explore the detectability of the hypothetical edge-sculpting perturbing planets, finding that they would be readily detectable in transit and radial velocity data. We compare to observational constraints and find it unlikely that dynamical sculpting from distant giant planets contributes significantly to the edge-of-the-multis. However, this conclusion could be strengthened in future work by a more thorough analysis of the detection yields of the perturbing planets. 0000-0002-8807-8808]Nicole Sobski 0000-0002-4880-7885]Sarah C. Millholland ## 1 Introduction Short-period super-Earths/sub-Neptunes are the most prevalent class of observed extrasolar planets, orbiting around roughly half of Sun-like stars (e.g. Petigura et al., 2013; Zhu et al., 2018; He et al., 2019). First discovered by early Doppler surveys (e.g. Mayor et al., 2011) and then, to a greater degree, by NASA's Kepler Mission (Borucki et al., 2010), short-period planets are often found in tightly-spaced configurations, with multiple planets all having orbital periods in the range from days to months (Lissauer et al., 2011, 2014; Rowe et al., 2014; Fabrycky et al., 2014). These so-called "compact multiple-planet systems" (or "compact multis") exhibit a remarkable degree of structure and regularity in both their orbital and physical properties. Their orbits are nearly coplanar and circular (e.g. Fang and Margot, 2012; Fabrycky et al., 2014; Van Eylen and Albrecht, 2015; Xie et al., 2016). Moreover, their period ratios, radii, and masses exhibit a statistical tendency towards uniformity within a given system (Weiss et al., 2018; Millholland et al., 2017). This set of patterns is known collectively as the "peas-in-a-pod patterns" or "intra-system uniformity" (for a review, see Weiss et al., 2022). Despite the abundance of information the Kepler Mission delivered about the architectures of close-in super-Earths/sub-Neptunes, it was mostly limited to the inner regions (\(a\lesssim 1\) AU) of the systems. However, various observational and theoretical efforts are gradually revealing more about their outer architectures. For instance, using radial velocity (RV) observations to search for long-period companions, Zhu and Wu (2018) and Bryan et al. (2019) showed evidence that distant giant planets (or "cold Jupiters", with \(a\gtrsim 1\) AU, \(M_{p}\gtrsim 0.5\)\(M_{\rm Jup}\)) are over-represented in systems with inner super-Earths. 
They found that \(\sim 30-40\%\) of systems with inner super-Earths are found to have distant giant planets, relative to the \(\sim 10\%\) distant giant occurrence for stars irrespective of small planet presence. The super-Earth/distant giant correlation was also studied by Rosenthal et al. (2022) in their analysis of the California Legacy Survey (Rosenthal et al., 2021); they found tentative agreement with the earlier studies, showing that stars with inner small planets may have an enhanced occurrence of outer giants with \(1.7\sigma\) significance. A contradictory result was recently presented by Bonomo et al. (2023), who studied RVs of 38 Kepler and K2 small-planet systems collected over nearly a decade with the HARPS-N spectrograph, as well as publicly available measurements collected with other facilities. They detected five cold Jupiters in three systems and derived an occurrence rate of \(9.3^{+7.7}_{-2.9}\%\) for planets with \(0.3-13~{}M_{\rm Jup}\) and \(1-10\) AU in systems with inner small planets. Bonomo et al. (2023) found no evidence of the overabundance of distant giant planets in super-Earth systems claimed by Zhu and Wu (2018) and Bryan et al. (2019). However, Zhu (2023) recently suggested that the discrepancy between the Bonomo et al. (2023) result and previous works can be fully resolved by accounting for the metallicity dimension of the super Earth-cold Jupiter relations. Further constraints may come from the Kepler Giant Planet Survey (KGPS, Weiss et al., 2023), a decade-long survey of 63 Kepler systems using HIRES at the Keck Observatory designed to search for long-period planets in Kepler systems. Weiss et al. (2023) presented RV-detected companions to 20 stars in their sample, with occurrence rate analyses forthcoming. It is clear that the relationship between inner super-Earths and outer giants is still debated but gradually coming into better focus. Distant giant planets in general often have moderate to high eccentricities, \(e\sim 0.1-0.8\)(e.g. Jones et al., 2006; Udry and Santos, 2007; Wright et al., 2009; Dawson and Murray-Clay, 2013), and their orbits might be significantly mutually-inclined with respect to the inner systems (e.g Gratia and Fabrycky, 2017). In some cases, an eccentric and/or inclined distant perturber can exert a stronger gravitational influence on the inner planets than their mutual gravitational coupling, and the inner system can gain significant eccentricities and inclinations, potentially driving dynamical instabilities (e.g. Lai and Pu, 2017; Hansen, 2017; Pu and Lai, 2018; Denham et al., 2019; Spalding and Millholland, 2020). The dynamics of the inner system can thus be tightly coupled to that of the outer system. Given this dynamical interaction, if a system has both small inner planets and a distant giant planet, it must experience an architectural transition point in its overall layout. The transition point could be generated by the perturbative influence of the distant giant planet. Specifically, each system with an eccentric and possibly inclined distant giant planet must possess a "zone of instability", where any planets existing in that region would experience eccentricity/inclination excitation and destabilization. It is possible that the impacts of this are already detectable. One potential link is the result of Millholland et al. (2022), who presented statistical evidence for an outer truncation or transition (named the "edge-of-the-multis") in the observed high-multiplicity Kepler systems. 
They explored the existence of hypothetical additional super-Earths/sub-Neptunes orbiting beyond the outermost planets in Kepler compact multis, under the assumption that the "peas-in-a-pod" patterns continue to larger separations than observed. They then estimated the transit and detection probabilities of these hypothetical planets and found that additional exterior planets should have been detected in \(\gtrsim 35\%\) of Kepler compact multis. Accordingly, to \(\gtrsim 7\sigma\) confidence, the Kepler compact multis truncate at smaller orbital periods than expected from geometric and detection biases alone. These results can be explained if compact multis experience an architectural truncation (i.e. an occurrence rate decrease) or a transition (i.e. a breakdown in the "peas-in-a-pod" patterns) at \(\sim 100-300\) days. Although various theories predict an "edge-of-the-multis" consistent with Millholland et al. (2022)'s observation (e.g. Chatterjee and Tan, 2014; Zawadzki et al., 2022; Batygin and Morbidelli, 2023), it is still unknown what physical mechanism(s) are most responsible. In this paper, we explore the possibility of a link between the outer edges of compact multis and distant giant planets. That is, we test the hypothesis that dynamical perturbations from distant giant planets are strong enough to sculpt the outer edges. We explore this with the following steps: (1) We first consider the observed properties of compact multis and identify where hypothetical distant giant planets with various properties could exist without causing dynamical instabilities. (2) Next, we determine the required properties of the distant giant planets if they were sculpting the outer edges. (3) Finally, we compare the properties of these "edge-sculpting" distant giants to observational constraints. It is important to note that distant giants cannot be the _only_ explanation of the edge-of-the-multis, since there are not enough of them in compact multi systems (even if they were as prevalent as suggested by Zhu and Wu, 2018 and Bryan et al., 2019). Our aim here is to address whether they could play any significant role in the edge-of-the-multis. This paper is organized as follows. We begin by describing our planet sample and our methodology for assessing dynamical stability of compact multis in the presence of distant giant planets (Section 2). We then present the results of these calculations, both of individual systems and the whole population (Section 3). This allows us to determine the properties of perturbing planets that would be capable of sculpting the outer edges of the compact multis. We explore the detectability of these perturbing planets and the resulting implications for the edge-of-the-multis (Section 4). We discuss alternative theories for the edge-of-the-multis and conclude in Section 5. ## 2 Methods ### Planet sample We begin by defining our sample of Kepler compact multiple-planet systems, which is identical to the sample studied in Millholland et al. (2022). We consider systems with four or more observed transiting planets only, since it is a well-defined high multiplicity sample that is most suitable to investigations of the outer edges. As our starting point, we use the Kepler DR25 KOI catalog (Thompson et al., 2018; NASA Exoplanet Archive, 2022) and consider all planets with "confirmed" and "candidate" dispositions. 
Where possible, we replace the stellar parameters and planet radii in the DR25 catalog with parameters from the Gaia-Kepler Stellar Properties Catalog (Berger et al., 2020, 2020). We apply two quality cuts. First, we consider only planets smaller than 16 \(R_{\oplus}\) with fractional radius uncertainties less than 100%. Second, we discard targets for which Furlan et al. (2017) found a companion star that contributed more than 5% of the light in the photometric aperture. For the purposes of this study, we consider only systems with four or more observed transiting planets, which represent a high fidelity sample of compact multis. We are left with 279 planets in 64 Kepler systems with four or more observed transiting planets. The planet sample is displayed in Figure 1. Most planets in these systems do not have measured masses due to observational limitations. (For instance, the star is too faint for Doppler follow-up and/or there are no detectable transit timing variations.) However, in cases where they are available (60 of 279), we utilize observational mass estimates from radial velocity or transit timing variation analyses, which we obtain from the NASA Exoplanet Archive (2023). For the remaining planets, we obtain approximate mass estimates from the Forecaster probabilistic mass-radius prediction model by Chen and Kipping (2017). Forecaster is an open-source package that predicts a missing planetary mass or radius based on a mass-radius relationship established from a sample of 316 objects of various sizes spanning small planets to late-type stars. Altogether, our approach provides us with a heterogeneous collection of mass estimates, which may affect our results at the detailed level. However, this approach is sufficient for our purposes insofar as we are obtaining an average constraint across the compact multis as a population. Figure 1: **Planet sample.** We display the architectures of the 64 Kepler compact systems with four or more transiting planets described in Section 2.1. The dot size is proportional to the planet size, as shown in the legend at top. ### Dynamical stability calculations without perturbers We first assess the dynamical stability of the compact multi-planet systems before introducing additional perturbing planets. These estimates will serve as our basis of comparison to the later calculations. We perform our dynamical stability analyses using SPOCK (Stability of Planetary Orbital Configuration Klassifier; Tamayo et al., 2020), a machine learning model that predicts dynamical stability over long timescales (\(\sim 10^{9}\) orbits) by first running a short integration (\(10^{4}\) orbits), assessing summary statistics, and classifying the stability using the XGBoost machine learning algorithm. SPOCK provides dynamical stability estimates at a rate that is five orders of magnitude faster than direct \(N\)-body integrations. Here, we will utilize the SPOCK-derived probability of stability, which we denote \(p_{\rm{stab}}\). To estimate \(p_{\rm{stab}}\), SPOCK requires the masses and orbital elements as inputs. Our planet sample only has measured stellar masses, planetary masses (most of which are estimated using Forecaster, as described previously), and periods, so we must randomly sample the remaining parameters. We take the orbital eccentricities and inclinations to be Rayleigh-distributed with scale parameters consistent with current observational constraints (Xie et al., 2016; Van Eylen et al., 2019; Mills et al., 2019). 
We take the arguments of periastron, longitudes of the ascending node, and mean anomalies to be uniformly distributed from \(0^{\circ}\) to \(360^{\circ}\). Because the planet masses are subject to uncertainties (especially in the case where the masses are predicted from the Forecaster model), we include random sampling of the masses. We take the masses to be normally distributed around the mean observed or estimated value, \(\overline{M}_{p}\), with standard deviations \(\sigma_{M_{p}}\) equal to the average of the observational uncertainties \(\sigma_{M_{p}}=(\sigma_{M_{p},{\rm high}}+\sigma_{M_{p},{\rm low}})/2\) in the cases where the mass estimates are from RVs or TTVs, or equal to \(0.3\overline{M}_{p}\) in the cases where the mass estimates are from Forecaster. The details of the parameter sampling are summarized in Table 1. This sampling is performed for each planet in each system in our sample. New random variables are drawn for each iteration of dynamical stability calculations, as will be described in further detail in the next section. For each system, we estimate \(p_{\rm{stab}}\) 100 times, with each iteration having a different set of random variables. We then find the mean probability of stability, \(\overline{p}_{\rm{stab}}\) and consider that to be representative of the system. Although a majority of the compact multi-planet systems are classified as stable, there are 20 systems with \(\overline{p}_{\rm{stab}}<0.5\), with 6 of these systems demonstrating \(\overline{p}_{\rm{stab}}<0.1\). These systems are preferentially near mean-motion resonances.1 This is unsurprising given that we are randomly sampling the mean anomalies, and planets can become destabilized near resonances if they not placed in specific parts of their orbits. Additional unstable systems that are not near-resonance may have a heightened sensitivity to mass variability (e.g. the real masses may be significantly smaller than the estimated masses), and the combination of this and the randomization of the other orbital elements could yield instability. We disregard these systems in our subsequent analysis, but we believe our analysis is still robust as a population average over many systems. We are more interested in the influence of distant perturbing planets on the ensemble of compact multis than on individual systems. Footnote 1: Considering the subset of unstable systems, we found that 45% of adjacent planet pairs are near a first-order mean-motion resonance, compared to 15% of planet pairs within the subset of stable systems. Moreover, 95% of the unstable systems contain at least one near-resonant pair compared to 40% of the stable systems. ### Dynamical stability calculations with perturbers Having assessed the dynamical stability of the inner systems in isolation, we are ready to test the limits of additional outer planets in each stable system. We again opt to use SPOCK for our stability calculations. 
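To make this concrete, a single SPOCK evaluation of an inner system could be set up along the following lines using the public `rebound` and `spock` packages; the helper names, unit conventions, and the floor on sampled masses are our own assumptions, not the exact code used in this work.

```python
import numpy as np
import rebound
from spock import FeatureClassifier

MEARTH_IN_MSUN = 3.0035e-6
rng = np.random.default_rng(2023)
spock_model = FeatureClassifier()

def sample_inner_system(mstar_msun, mean_masses_mearth, mass_sigmas_mearth,
                        periods_days):
    """One realization of an observed inner system, with masses and the
    unobserved orbital elements drawn as in the inner-planet columns of Table 1."""
    sim = rebound.Simulation()
    sim.add(m=mstar_msun)                                     # host star [Msun]
    for mbar, sig, P in zip(mean_masses_mearth, mass_sigmas_mearth, periods_days):
        sim.add(m=max(rng.normal(mbar, sig), 0.1) * MEARTH_IN_MSUN,  # floor avoids non-positive draws
                P=P,                                          # only period ratios matter for SPOCK
                e=rng.rayleigh(0.04),
                inc=np.radians(rng.rayleigh(1.0)),
                omega=rng.uniform(0.0, 2.0 * np.pi),
                Omega=rng.uniform(0.0, 2.0 * np.pi),
                M=rng.uniform(0.0, 2.0 * np.pi))
    sim.move_to_com()
    return sim

def mean_stability(mstar, masses, sigmas, periods, n_draws=100):
    """Average SPOCK probability of stability over repeated random draws."""
    probs = [spock_model.predict_stable(
                 sample_inner_system(mstar, masses, sigmas, periods))
             for _ in range(n_draws)]
    return float(np.mean(probs))
```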
However, an important caveat is that the system architectures with both inner multis and distant giant plan \begin{table} \begin{tabular}{l l l} \hline \hline Parameter & Inner planets & Outer perturber \\ \hline Mass & \(M_{p}\sim\mathrm{Normal}(\overline{M}_{p},\sigma_{M_{p}})\) & \(M_{p,p}\sim\mathrm{Uniform}[0.5~{}M_{\rm Jup},5~{}M_{\rm Jup}]\) \\ Semi-major axis & \(a=\) observed & \(a_{p}\sim\mathrm{Uniform}[a(P_{p}=1.3P_{\rm out}),a(P_{p}=1000~{}\mathrm{days})]\) \\ Eccentricity & \(e\sim\mathrm{Rayleigh}(0.04)\) & \(e_{p}\sim\mathrm{Uniform}[0,0.7]\) \\ Inclination & \(i\sim\mathrm{Rayleigh}(1^{\circ})\) & \(i_{p}\sim\mathrm{Rayleigh}(10^{\circ})\) (Case 1); \(i_{p}\sim\mathrm{Uniform}[0^{\circ},40^{\circ}]\) (Case 2) \\ Argument of periapse & \(\omega\sim\mathrm{Uniform}[0^{\circ},360^{\circ}]\) & \(\omega_{p}\sim\mathrm{Uniform}[0^{\circ},360^{\circ}]\) \\ Longitude of ascending node & \(\Omega\sim\mathrm{Uniform}[0^{\circ},360^{\circ}]\) & \(\Omega_{p}\sim\mathrm{Uniform}[0^{\circ},360^{\circ}]\) \\ Mean anomaly & \(M\sim\mathrm{Uniform}[0^{\circ},360^{\circ}]\) & \(M_{p}\sim\mathrm{Uniform}[0^{\circ},360^{\circ}]\) \\ \hline \end{tabular} \end{table} Table 1: **Parameter space scope.** Parameters used in the dynamical stability calculations and their sampling scheme. ets are somewhat outside of SPOCK's training models, which only considered masses up to \(0.1\ M_{\rm Jup}\) and inclinations \(\lesssim 10^{\circ}\)(Tamayo et al., 2020). Although SPOCK has been shown to generalize reasonably well, it is still important to check its validity in this context. We performed multiple checks using other numerical and analytical stability estimates, which we describe in more detail in Appendix A. To assess the dynamical impact of the outer perturbing planet, we randomly sample its physical and orbital properties within reasonable ranges. The outer perturber is taken to be a massive gas giant planet on an eccentric and moderately inclined orbit, in line with general observed properties of cold Jupiters (e.g. Dawson and Murray-Clay, 2013). We sample its mass as \(M_{p,p}\sim\text{Uniform}[0.5\ M_{\rm Jup},5\ M_{\rm Jup}]\). We consider periods, \(P_{p}\), between \(1.3P_{\rm out}\) (where \(P_{\rm out}\) is the outermost observed planet in the inner system) and \(1000\) days. These limits were chosen because \(1.3P_{\rm out}\) is approximately the smallest possible stable period, and \(1000\) days is sufficiently distant to detect the transition between stable and unstable orbits. The \(1000\) day maximum period also extends beyond the maximum detectable window via transit detection. We sample uniformly in values of the perturber's semi-major axis, \(a_{p}\), according to this period range. The outer perturber's eccentricity is sampled as \(e_{p}\sim\text{Uniform}[0,0.7]\), which encompasses the range of eccentricities of most observed distant giant planets (e.g. Rosenthal et al., 2021). As for the perturber's inclination, we consider two different cases. In Case 1, we consider the perturber to be nearly coplanar with the inner system and sample \(i_{p}\sim\text{Rayleigh}(10^{\circ})\). This is motivated by Masuda et al. (2020), who suggested that distant giant planets orbiting compact multis tend to be nearly coplanar with the inner systems. In Case 2, we consider a wider range of inclinations for the perturber, \(i_{p}\sim\text{Uniform}[0^{\circ},40^{\circ}]\). This range is consistent with the expectations of energy equipartition between eccentricities and inclinations. 
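For orientation, drawing one perturber configuration according to Table 1 might look as follows (Case 1 by default); the conversion from the period bounds to semi-major axes assumes Kepler's third law with the host-star mass in solar units, and the function and variable names are our own.

```python
import numpy as np

M_JUP_IN_MSUN = 9.546e-4   # Jupiter mass in solar masses

def period_to_au(P_days, mstar_msun=1.0):
    """Kepler's third law: semi-major axis in AU for a given orbital period."""
    return (mstar_msun * (P_days / 365.25) ** 2) ** (1.0 / 3.0)

def sample_perturber(P_out_days, rng, mstar_msun=1.0, case=1):
    """Draw mass, semi-major axis, eccentricity, and inclination for one
    hypothetical outer perturber, following Table 1 (Case 1 by default)."""
    mass = rng.uniform(0.5, 5.0) * M_JUP_IN_MSUN                  # [Msun]
    a_lo = period_to_au(1.3 * P_out_days, mstar_msun)
    a_hi = period_to_au(1000.0, mstar_msun)
    a_p = rng.uniform(a_lo, a_hi)                                 # uniform in a [AU]
    e_p = rng.uniform(0.0, 0.7)
    if case == 1:
        i_p = np.radians(rng.rayleigh(10.0))
    else:
        i_p = np.radians(rng.uniform(0.0, 40.0))
    return mass, a_p, e_p, i_p
```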
The remaining orbital elements (argument of periastron, longitude of the ascending node, and mean anomaly) are sampled uniformly from \(0^{\circ}\) to \(360^{\circ}\). We use the set-up of the inner systems as described in Section 2.2 and use SPOCK to assess the stability of the augmented systems, consisting of the observed inner planets and the hypothetical outer perturbers. For each system, we generate \(10,000\) unique parameter combinations and calculate an associated probability of stability for each. Note that the parameters of both the outer perturber and inner planets are randomly sampled, as summarized in Table 1. We can visualize the general trends resulting from varying the perturber's physical characteristics by constructing scatterplots of the stability variations as a function of the perturber's parameters. We first create scatterplots of \(a_{p}\) vs. \(M_{p,p}\), with \(p_{\rm stab}\) indicated with the colors of the points. Examples are shown for the KOI-812, KOI-408, and KOI-94 systems in Figure 2, which we will discuss at greater length in Section 3. There is a noticeable division in the \(a_{p}\) vs. \(M_{p,p}\) parameter space, which separates regions of low and high probabilities of stability. In all cases, the high probabilities max out at values near the probabilities of stability from the simulations without the perturbers. However, the stable and unstable regions are not starkly divided; we notice an upward sloping transition zone with \(p_{\rm stab}\sim 0.4-0.6\), a region that we will refer to as "metastable". This observation motivates the next section's analysis, which is to calculate exactly where this metastable region exists for each system. ### Metastable transition region We now describe how we utilize the collection of stability calculations to extract the set of perturber parameters defining the metastable region. We approximate the metastable region as a banded line that divides the stable and unstable regions, and we identify its location using the following numerical procedure. First, we divide the \(a_{p}\) vs. \(M_{p,p}\) parameter space into a grid with \(10\) evenly-spaced segments in \(M_{p,p}\) and \(40\) evenly-spaced segments in \(a_{p}\), and we calculate the mean probability of stability, \(\overline{p}_{\rm stab}\), in each grid cell. The number of grid cells was chosen through trial and error; the spacing ensures that each grid cell is large enough to include many stability values within it but small enough to not smear out the relevant features. The results are not strongly sensitive to these choices. Next, we use the grid of \(\overline{p}_{\rm stab}\) to find the best-fit line of metastability. We do this by first iterating through columns in the grid (each of which have a fixed \(M_{p,p}\)) and estimating the value of \(a_{p}\) that lies closest to the center of the metastable region. The metastable region is observed to be a gradual transition between low and high \(\overline{p}_{\rm stab}\), and the curve of \(\overline{p}_{\rm stab}\) vs. \(a_{p}\) for a fixed \(M_{p,p}\) (i.e. 
a single column of the grid) is well-fit by a piecewise linear function of the following form: \[p(a_{p})=\begin{cases}\overline{p}_{\rm stab,\textit{l}}&a_{p}\leq a_{p,\textit {l}}\\ m_{\rm trans}(a_{p}-a_{p-\textit{l}})+\overline{p}_{\rm stab,\textit{l}}&a_{p, \textit{l}}<a_{p}<a_{p,\textit{h}}\\ \overline{p}_{\rm stab,\textit{h}}&a_{p}\geq a_{p,\textit{h}}.\end{cases} \tag{1}\] Here, \(a_{p,\textit{l}}\) and \(a_{p,\textit{h}}\) are parameters of the function defining the low and high boundaries of the metastable re Figure 2: **Dynamical stability calculations: effect of mass sampling variations.** The scatterplots depict the SPOCK probabilities of stability for three example systems (KOI-812, KOI-408, and KOI-94) with the addition of an outer perturbing planet. Each point corresponds to a single stability calculation with an associated mass and semi-major axis for the outer perturber. The left column illustrates stability calculations in which the inner planet masses are fixed, while the right column has the inner planet masses randomly sampled from a normal distribution as described in Section 2.2. The perturber’s orbital inclination is randomly sampled from a Rayleigh distribution (Case 1 in Table 1). The colorbar indicates the SPOCK probability of stability. Blue horizontal lines are plotted to indicate the semi-major axes of existing planets. The red line bordered by a translucent gray belt defines the zone corresponding to “metastability”, the transition from highly unstable to highly stable. gion, \(\overline{p}_{\mathrm{stab},l}\) and \(\overline{p}_{\mathrm{stab},h}\) are parameters defining the low and high probabilities at which the curve levels out, and \(m_{\mathrm{trans}}=(\overline{p}_{\mathrm{stab},h}-\overline{p}_{\mathrm{stab},l })/(a_{p,h}-a_{p,l})\) is the slope within the metastable region. We fit the piecewise linear function to the curve of \(\overline{p}_{\mathrm{stab}}\) vs. \(a_{p}\) for each column of fixed \(M_{p,p}\). We then find the value of \(a_{p}\) for which \(p(a_{p})=0.5\). This value, which we denote \(a_{p}(p=0.5)\), defines the center of the metastable region within the column. In principle, the edges of the metastable region are defined by \(a_{p,l}\) and \(a_{p,h}\). However, we find that \(a_{p,l}\) is very near to \(\min(a_{p})\) in most cases, so we define the lower edge of the metastable region to be \(a_{p}(p=0.3)\). We define the upper edge of the metastable region to be \(a_{p}(p=0.7)\). By repeating this process for each column, we obtain a set of \((M_{p,p},a_{p})\) values defining three lines: the center, lower edge, and upper edge of the metastable region. Finally, we perform linear regressions on each curve so as to obtain a smooth demarcation of the metastable region. The centers and edges of the metastable regions are indicated with red lines and gray bands for the examples in Figure 2. ## 3 Results We now present the results of our dynamical stability calculations. The first thing we explore is the impact of different sampling choices (Section 3.1). Many parameters are sampled in the exploration of these systems, so it is important to verify that these choices do not strongly impact the results. We will then examine the averaged results for all systems (Section 3.2). ### Effect of sampling variations Figure 2 shows the impact of sampling the inner planet masses, where the left and right columns show the results when the masses are held fixed and varied, respectively. 
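(For completeness, the boundary-extraction step of Section 2.4 can be condensed into a short fitting routine. The sketch below fits Eq. 1 with `scipy.optimize.curve_fit` and inverts it to find the 0.3/0.5/0.7 crossings; it is an illustration with our own variable names, not the code used to produce the figures.)

```python
import numpy as np
from scipy.optimize import curve_fit

def piecewise_linear(a, a_lo, a_hi, p_lo, p_hi):
    """Eq. (1): flat at p_lo below a_lo, flat at p_hi above a_hi,
    with a linear ramp in between."""
    t = np.clip((a - a_lo) / (a_hi - a_lo), 0.0, 1.0)
    return p_lo + (p_hi - p_lo) * t

def metastable_crossings(a_grid, pbar_column):
    """Fit Eq. (1) to one fixed-mass column of mean stability values and
    return the semi-major axes where the fit crosses p = 0.3, 0.5, 0.7."""
    p0 = [a_grid[len(a_grid) // 3], a_grid[2 * len(a_grid) // 3], 0.0, 1.0]
    (a_lo, a_hi, p_lo, p_hi), _ = curve_fit(piecewise_linear, a_grid,
                                            pbar_column, p0=p0, maxfev=10000)
    return {target: a_lo + (target - p_lo) / (p_hi - p_lo) * (a_hi - a_lo)
            for target in (0.3, 0.5, 0.7)}
```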
Sampling the inner planet masses makes some systems a bit more stable and other systems a bit more unstable. However, for most systems, there is very little overall change in the location and width of the metastable region, suggesting that the results are not strongly sensitive to the inner planet mass sampling scheme. Figure 3 shows the impact of the different methods of sampling the outer perturber's inclination. The left and right columns show the results of the uniformly-distributed and Rayleigh-distributed sampling (recall the two cases in Table 1). As in Figure 2, there are some slight differences between the two versions, but overall the general features and the locations and widths of the metastable regions are the same regardless of the sampling method. Finally, Figure 4 displays the results for the same three example systems in a different parameter space. The left column is the same as the right column of Figure 3, displaying perturber semi-major axis vs. mass. (Here we use Rayleigh-distributed perturber inclinations.) The right column replaces the y-axis with the perturber periastron distance, \(a_{p}(1-e_{p})\). It is immediately clear that plotting the results in this space yields a sharper transition from the unstable to stable region and thus a smaller metastable region. This makes physical sense because the inner planets are more sensitive to the perturber's close approach distance than its semi-major axis. ### Combined constraints We now combine the results for all systems in order to explore the influence of distant perturbing planets on the ensemble of compact multi-planet systems. Figure 5 shows the extracted metastable regions for all systems. The darkest color indicates where there is the greatest overlap among the various systems. The metastable region is concentrated in the period range \(P\sim 100-500\) days (\(a\sim 0.4-1.2\) AU for \(M_{\star}=M_{\odot}\)). We also display information on the detectability of the perturbing planet using RV and Kepler transit data. As for RV detection, we plot contours of constant RV semi-amplitude assuming \(M_{\star}=M_{\odot}\) and \(e=0\). For the vast majority of the parameter space, the perturber would be easily detected in RV data of a sufficiently long baseline, since the semi-amplitude is much greater than typical RV uncertainties. As for transit detection, we calculate the conditional probability that, provided the perturber was transiting, it would transit its host star at least three times in Kepler data. We obtain this estimate using a data product provided by Kepler DR25 (Thompson et al., 2018; NASA Exoplanet Archive, 2022) called the window function, which is the fraction of unique transit ephemeris epochs that permit three or more transits to be observed as a function of orbital period (Burke and Catanzarite, 2017). This quantity is complex due to the data gaps within the limited span of Kepler photometry. Kepler DR25 provided tabulations of window functions for each star as a function of orbital period and transit duration. We use the downsampled tables provided by Hsu et al. (2019) to extract the window function data for each of the host stars in our sample and obtain the values for periods in the range \(P=1-800\) days, picking the transit duration such that it aligns with expectations of circular orbits. We then average over the different stars in our sample. 
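Both detectability quantities are straightforward to evaluate. The sketch below computes the RV semi-amplitude used for the contours (circular orbit and \(M_{\star}=M_{\odot}\) by default, as assumed in Figure 5) and averages per-star window functions on a common period grid; the function names and the layout of the per-star tables are our own assumptions.

```python
import numpy as np

G = 6.674e-11                       # m^3 kg^-1 s^-2
MSUN, MJUP = 1.989e30, 1.898e27     # kg
DAY = 86400.0                       # s

def rv_semi_amplitude(P_days, mp_mjup, mstar_msun=1.0, e=0.0, sini=1.0):
    """RV semi-amplitude in m/s for a perturber of mass mp (in Jupiter masses)."""
    P = P_days * DAY
    mp, mstar = mp_mjup * MJUP, mstar_msun * MSUN
    return ((2.0 * np.pi * G / P) ** (1.0 / 3.0) * mp * sini
            / (mstar + mp) ** (2.0 / 3.0) / np.sqrt(1.0 - e ** 2))

def average_window_function(per_star_tables, period_grid):
    """Interpolate each star's (period, window-function) table onto a common
    period grid and average across stars."""
    stacked = [np.interp(period_grid, p, wf) for p, wf in per_star_tables]
    return np.mean(stacked, axis=0)
```

As a check, this expression gives roughly 12.5 m/s for a Jupiter analog (1 \(M_{\rm Jup}\) at 11.86 yr around a solar-mass star), and well above 50 m/s for the edge-sculpting perturbers considered here.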
Using the averaged window function data, we calculate the orbital periods at which the window func Figure 3: **Dynamical stability calculations: effect of inclination sampling variations.** The plotting scheme and example systems are identical to Figure 2, except we now explore different methods of sampling the outer perturber’s inclination. The left column indicate uniform sampling \(i_{p}\sim\text{Uniform}[0^{\circ},40^{\circ}]\), and the right indicates Rayleigh-distributed sampling \(i_{p}\sim\text{Rayleigh}(10^{\circ})\). In all cases, the inner planet masses are randomly sampled from a normal distribution as in the right column of Figure 2. Figure 4: **Dynamical stability calculations: dependence on perturber semi-major axis vs. periastron distance.** The plotting scheme and example systems are identical to Figure 2 and Figure 3, but now the left column shows the semi-major axis vs. mass, while the right column shows the perturber periastron distance vs. mass. In all cases, the inner planet masses are randomly sampled from a normal distribution as in the right column of Figure 2 and the orbital inclination of the outer perturber is Rayleigh-distributed. tion is equal to 0.1, 0.3, 0.5, 0.7, and 0.9. The periods are indicated with the horizontal lines in Figure 5. Since the perturbing planets would have large radii (and thus large transit signal-to-noise ratios), the main factor in their detection by the Kepler pipeline would simply be whether they have three or more transits. Accordingly, the window function effectively provides the probability that the perturbing planets would be detected if they were on transiting orbits. We observe that roughly half of the combined metastable region would have \(p(\geq 3\) transits\()=0.7\), indicating that many of them would be detected in Kepler data if they existed and were transiting. ## 4 Discussion ### Implications for the edge-of-the-multis The results of Figure 5 indicate that perturbing planets with parameters falling in the average metastable region would be readily detectable with RV data (of a sufficiently long baseline) and transit data. What are the implications of this? We argue that it indicates that _distant giant planets likely cannot explain the edge-of-the-multis_, according to the following logic. First, the metastable region delineates two regimes: (1) the unstable regime in which the distant giant planet is strongly coupled to the inner planets, and they cannot maintain dynamical stability, and (2) the stable regime in which the distant giant planet's perturbative influence is weak relative to the coupling among the inner planets, and they maintain stability. Accordingly, the metastable region (or, more accurately, the region just slightly wide of it) is an approximate indicator of the distant giant "edge-sculpting" properties, that is, the range of possible properties that a perturbing planet could have if it is responsible for sculpting the outer edge of the inner system. To see this, consider a system where the perturbing planet has a period larger than the metastable location. With such properties, the inner system could potentially have another planet added to its outer edge and still be stable, so the giant planet would not be consistent with sculpting the edge of the observed system. 
If the metastable region thus approximates the parameters of edge-sculpting perturbing planets, and if most of the metastable region would be readily detectable with sufficient radial velocity and transit data, then edge-sculpting perturbing planets would generally be detectable. However, such perturbing planets are not widely detected in this region of parameter space. Here we will briefly review the observational constraints. As for transits, all but one of the compact multis in our sample are devoid of a transiting giant planet with a period in the range \(P=100-500\) days confirmed by Figure 5: **Aggregate results of stability calculations.** We display the “metastable regions” (grey belts in Figures 2-4) for all systems in our sample. The top, middle, and bottom panels show the results in perturber semi-major axis, periastron distance, and period vs. mass. The depicted metastable regions are taken from the results in which the inner planet masses are randomly sampled from a normal distribution around the mean and the orbital inclination of the outer perturber is Rayleigh-distributed. The darkest blue indicates where there is greatest overlap among the various systems. In the bottom plot, we include contours of the outer perturber’s RV semi-amplitude (curved lines) and, if it was transiting, the probability that it would transit three or more times in Kepler data (horizontal lines). the Kepler search pipeline. One exception is the Kepler-90 (KOI-351) system, which contains a sun-like star and eight confirmed transiting planets, the outermost of which has period \(P=332\) days and mass \(203\pm 5\ M_{\oplus}\)(Liang et al., 2021). This outer planet may be partially sculpting the edge of the inner seven planets. Another possible exception is Kepler-87 (KOI-1574), which contains two planets beyond 100 days, one planet at 115 days with mass \(324\pm 9\ M_{\oplus}\) and one planet at 191 days with mass \(6.4\pm 0.8\ M_{\oplus}\)(Ofir et al., 2014). The planet at 115 days could be responsible for dynamical perturbations, but it does not strictly fit our definition of edge-sculpting giant planets, since the system contains only two planets interior to it and a small exterior planet. Beyond detections from the Kepler pipeline, Foreman-Mackey et al. (2016) performed a systematic search for transiting long-period giant planets in Kepler data, requiring only one or two transits (rather than the usual \(\geq 3\)). They found 16 planet candidates, one of which is in a system in our sample. Kepler-154 (KOI-435) contains five inner super-Earths/sub-Neptunes with periods ranging from \(3-62\) days and an outer transiting giant with \(P=4.3^{+4.7}_{-1.3}\) yr and \(R_{p}=0.83^{+0.12}_{-0.11}\ R_{\rm Jup}\)(Foreman-Mackey et al., 2016). However, this outer giant is too far from the inner system to be responsible for sculpting the edge. Using injection-recovery experiments, Foreman-Mackey et al. (2016) found that their search method was \(\sim 70\%\) complete to planets with periods \(P\lesssim 3\) yr and \(R_{p}\gtrsim 0.5\ R_{\rm Jup}\). Roughly speaking, this indicates that they would be \(\sim 70\%\) complete to any transiting edge-sculpting giant planets orbiting the stars in our sample, of which none were detected other than in Kepler-154. The lack of many transit detections is suggestive but inconclusive, since the intrinsic transit probability is very small at long periods, and it depends on the unknown orbital inclinations. 
However, we can still obtain a rough estimate of the number of edge-sculpting giant planets we would expect to be transiting in our sample. We use our calculated metastable region for each system2 and randomly sample an orbital period for a giant planet within it. We then sample its inclination as \(i\sim\) Uniform[\(80^{\circ},100^{\circ}\)]. This assumes that the inner system is approximately edge-on (\(i\approx 90^{\circ}\)), and it roughly agrees with observed constraints on the mutual inclinations between inner super-Earths and outer cold Jupiters. Masuda et al. (2020) found that these mutual inclinations tend to be small, especially for inner systems with higher transit multiplicity. Using the sampled period and inclination, we determine whether or not the planet would be transiting. We then sum up the number of transiting planets across the full sample of 64 systems. Finally, we repeat this process 1,000 times to create a distribution of the number of transiting planets. This yields an expectation of \(2.0\pm 1.3\) transiting planets across the 64 systems. The percentages of cases with 0, 1, 2, 3, 4, and 5 transiting planets are 18%, 32%, 25%, 16%, 7%, and 2%. This indicates that it would be fairly likely to observe zero transiting giant planets, even if all of the systems hosted edge-sculpting giant planets. Constraints from transits are thus not very powerful; RVs are likely to be more informative. Footnote 2: For systems without SPOCK calculations (see Section 2.2), we randomly pick a metastable region from the rest of the sample. As for RV detections, several constraints have recently become available through long term HARPS-N and Keck/HIRES surveys presented by Bonomo et al. (2023) and Weiss et al. (2023), respectively. As for Bonomo et al. (2023), their sample of 38 Kepler and K2 systems of small planets has only three systems that overlap with our sample. None of those have new detected planets or long-term trends. The Kepler Giant Planet Survey (Weiss et al., 2023) observed 63 systems, 13 of which are common to our sample. Of those systems, nine have no new detections or long-term trends, and the data places strong constraints on the existence of additional long-period planets. In most cases, the RV coverage places a \(3\sigma\) upper limit of \(M\sin i\lesssim 0.5\ M_{\rm Jup}\) at 5 AU (although there are variations from system to system). The existence of edge-sculpting perturbers can thus be ruled out in these systems. The remaining four systems have RV-detected non-transiting companions: Kepler-20 (KOI-70), Kepler-106 (KOI-116), Kepler-1130 (KOI-2169), and Kepler-444 (KOI-3158). In Kepler-20 (KOI-70), there is a detected non-transiting planet with \(P=34.9\) days and \(M\sin i=21.0\pm 3.4\ M_{\oplus}\) in between the planets with \(P=19.6\) days and \(P=77.6\) days. This planet was initially discovered by Buchhave et al. (2016), although it remains possible that the signal is a false positive due to stellar activity (Nava et al., 2020). Kepler-106 (KOI-116) has four small transiting planets and reveals peaks in the RV residual periodogram near P = 90, 180, and 365 days. The veracity of these signals is uncertain because they could be driven by the seasonal window function. However, if there is a planet at 90 days, it would have \(M\sin i=46\pm 6\ M_{\oplus}\). Kepler-1130 (KOI-2169) contains a stellar-mass companion with \(P=14000\pm 1800\) days (\(\sim 12\) AU), \(M\sin i=218\pm 3\ M_{\rm Jup}\), and \(e=0.65\pm 0.024\). The periapse distance is \(\sim 4\) AU. 
Finally, Kepler-444 (KOI-3158) also contains an M-dwarf binary companion (components B and C) first characterized by Dupuy et al. (2016). The orbital solution in Weiss et al. (2023) is consistent with a recent analysis by Zhang et al. (2023) and finds \(P=87000\pm 3000\) days (\(\sim 52\) AU), \(M\sin i=629\pm 21\)\(M_{\rm Jup}\), and \(e=0.55\pm 0.05\). The periapse distance is \(\sim 23\) AU. Among the systems just described, the Kepler-1130 system contains the only perturber with properties that might be consistent with sculpting the outer edge of the inner system, although the perturber is a star, not a giant planet as considered in this work. It is worth noting that the two systems with stellar-mass companions, Kepler-1130 (KOI-2169) and Kepler-444 (KOI-3158), contain some of the smallest inner planets in our sample. The planets all have smaller radii than Earth, and Mills and Fabrycky (2017) constrained the masses of the Kepler-444 planets to be similar to that of Mars. It is thought that the stellar companions limited the extent of the protoplanetary disk in which the inner planets formed, thus limiting the solid material for planet formation (e.g. Zhang et al., 2023). However, this is a different process than the dynamical sculpting we have been considering in this paper. Altogether, the existing constraints from both transits and radial velocities suggest a lack of perturbing planets that are close enough to the inner compact multis to sculpt their outer edges. The observational constraints are incomplete, but they are sufficient to conclude that distant perturbers likely contribute very little to the observed edge-of-the-multis. It is important to clarify that the preceding discussion is not intended to be exhaustive, as it neglected several complications. For instance, we neglected the possibility that a perturbing planet that is not close enough to an inner system to render it dynamically unstable could still potentially increase the mutual inclinations among that inner system and reduce the transit multiplicity. This would be a different form of edge-sculpting. Moreover, our estimation of the expected number of transiting edge-sculpting giant planets used several simplifying assumptions. It is beyond the scope of this work to perform a thorough estimate of how many giant planets should have been detected in existing transit and radial velocity observations if they are the primary cause for the edge-of-the-multis. We leave a more thorough accounting for these detection statistics to future work. ### Alternative theories for the edge-of-the-multis If distant giant planets are not the explanation of the edge-of-the-multis, we must turn towards alternative hypotheses. Several theories were outlined in Millholland et al. (2022), and additional theories have since been proposed. We briefly recap them here. First, the formation process of compact multis may naturally confine a system of \(\sim 4-6\) inner planets to the \(\lesssim 0.5-1\) AU region, with a truncation beyond them. For instance, the theory of "inside-out planet formation" (Chatterjee and Tan, 2014, 2015) posits that planets may form sequentially from successive gravitational instabilities of rings of pebbles. The rings form via pebble drift due to gas drag and build up at the pressure maximum associated with the dead-zone boundary. After the planets form from the pebble ring, they migrate inwards, the dead-zone boundary and pressure maximum retreat, and the process repeats. 
A similar theory of successive formation of super-Earths from narrow rings of solid material was recently proposed by Batygin and Morbidelli (2023). In this framework, planetesimals form rapidly at the silicate sublimation line and grow through pairwise collisions, until they achieve terminal proto-planet masses regulated by isolation and migrate inwards. Systems that host distant giant planets have also been shown to potentially drive the formation of high-density rings of planetesimals due to the sweeping of secular resonances that transport planetesimals into the inner disk (Best et al., 2023). All of these theories naturally predict an outer edge of compact multis near the observed edge. Even if the planets in compact multis do not form at well-defined discrete locations in the inner disk, an edge-of-the-multis is also predicted by theories of formation at larger orbital distances (\(\gtrsim 1\) AU) followed by inward migration. Specifically, Zawadzki et al. (2022) showed that migration traps (generated by regions where the outward corotation torque dominates over the inward Lindblad torque) yield planetary systems that are bifurcated into two groups at small and larger periods with a gap at \(\sim 100-300\) days. To summarize, these theories predict that the formation of compact multis is a balance between accretion and orbital migration that results in a compact system of planets confined to the inner \(\sim 0.5-1\) AU, with a truncation just outside. We reviewed these hypotheses here for the sake of promoting additional and probable arguments based on our findings, but further investigation is beyond the scope of this paper. Future work on both planet formation modeling and observational characterization of the \(\gtrsim 1\) AU regions of these systems should help determine which, if any, of these mechanisms are dominant in sculpting the edge-of-the-multis. ## 5 Conclusion This work was motivated by the recent finding that the outer edges of compact multi-planet systems appear to truncate at smaller periods than expected from geometric and detection biases alone (Millholland et al., 2022). The "edge-of-the-multi" suggests the existence of some truncation (i.e. occurrence rate fall-off) or transition (i.e. to smaller and/or more widely-spaced planets) in the outer regions (\(P\gtrsim 100\) days) of compact multis. Here we investigated the question of whether the edge-of-the-multis could be caused by the dynamical influence of distant giant planets, which are thought to be common in systems with inner super-Earths/sub-Neptunes (Zhu and Wu, 2018; Bryan et al., 2019). We considered a sample of Kepler compact multi-planet systems with four or more observed planets (Figure 1). We explored the dynamical stability of the observed systems in the presence of hypothetical exterior giant planets. The SPOCK machine learning stability classifier (Tamayo et al., 2020) was used to compute the probability of dynamical stability of a given orbital configuration. We randomly sampled the perturber period and mass in the range \(P_{p}\lesssim 1000\) days and \(M_{p,p}\sim 0.5-5~{}M_{\rm Jup}\). We tested multiple sampling schemes for other parameters (e.g. the perturber inclination) and found minimal sensitivity to these choices (Figures 2 and 3). Given a set of perturber configurations for each system, we identified the "metastable region", the region in perturber mass/semi-major axis space that divides the stable and unstable regimes (e.g. Figure 2). 
The metastable region associated with an observed inner system approximately defines the parameter regime in which a perturber would dynamically sculpt the edge of the system. That is, a perturber with a wider orbit than the metastable region could leave room for additional (undetected) inner planets and thus would not be an "edge-sculpting" perturber. We found the metastable region to be in the range \(P\sim 100-500\) days, with the strongest concentration around \(P\sim 200\) days for \(M_{p,p}\sim 0.5~{}M_{\rm Jup}\) and \(P\sim 400\) days for \(M_{p,p}=5~{}M_{\rm Jup}\). We explored the detectability of perturbing planets with "edge-sculpting" properties, finding that they would generally be fairly easy to detect. If such perturbers were transiting in Kepler data, many of them would transit at least three times. Roughly half would have a \(>0.7\) probability of three or more transits. However, transit probabilities are small for long-period planets, and they depend on the unknown orbital inclinations. A thorough model of predicted transit yields would be necessary for a complete characterization of the transit detectability. As for RV detection, the vast majority of parameter space of edge-sculpting perturbers would have RV semi-amplitudes well above 50 m/s. Despite their apparent detectability, "edge-sculpting" perturbing planets have not been identified in many compact multi systems. We reviewed the current observational constraints, finding one system (Kepler-90) with a transiting outer giant planet that may be consistent with edge sculpting. As for radial velocity constraints, about 14 (perhaps more) of the 64 systems in our sample have sufficient RV coverage to rule out perturbing planets with edge-sculpting properties. The observational evidence is incomplete, but it clearly does not point towards a large number of edge-sculpting perturbers. Future observations will strengthen this result. Taken as a whole, our results indicate that distant giant planets are unlikely to play a significant role in sculpting the outer edges of compact multis. Our finding narrows the search for the true cause of this phenomenon. The most likely hypothesis is that the edge-of-the-multis is a signature of the formation process, which may be relatively insensitive to the presence or absence of distant perturbing planets. Future theoretical and observational efforts may test this interpretation. In particular, we expect the upcoming PLATO mission (Rauer et al., 2014) will yield further constraints on the outer architectures of compact multis, especially if it revisits the Kepler field for a long baseline. ## 6 Acknowledgements We thank Dan Tamayo and the anonymous reviewer for their helpful comments and suggestions. We thank the MIT Undergraduate Research Opportunities Program (UROP) for their support in making this research possible. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. Some of the calculations presented in this paper were performed on the MIT Engaging cluster at the Massachusetts Green High Performance Computing Center (MGHPCC) facility. We gratefully MGHPCC for their computational resources and support. 
## Appendix A Testing the validity of the SPOCK calculations Here we test the validity of the SPOCK calculations for our systems of inner super-Earths and outer giant planets by verifying that the metastable region agrees with the transition between stable and unstable regions identified by alternative metrics. We first use the Mean Exponential Growth factor of Nearby Orbits (MEGNO) chaos indicator, which quantifies the degree of divergence of initially closely-separated trajectories in phase space (Cincotta and Simo, 2000; Cincotta et al., 2003). The time-averaged MEGNO value distinguishes between stable/regular and unstable/chaotic orbital motion and thus provides an independent calculation of the stability regions. We considered one of our systems (KOI-812) as a case study and generated a similar \(a_{p}\) vs. \(M_{p,p}\) map with 1,000 unique parameter combinations. Overall, we find good agreement between the SPOCK and MEGNO maps, which increases confidence in the validity of our SPOCK calculations. We also check our results using an analytic stability criterion. Various criteria have been developed for compact multis with distant perturbers (e.g. Pu and Lai, 2018; Denham et al., 2019; Tamayo et al., 2021). We use the results from Pu and Lai (2018) and compute the averaged coupling parameter, \(\bar{\epsilon}\) (their equation 60), which quantifies the extent to which the inner planets are coupled to the giant planet (\(\bar{\epsilon}\gg 1\)) versus each other (\(\bar{\epsilon}\ll 1\)). The coupling parameter is an approximate stability criterion, since the degree of eccentricity and inclination excitation in the inner system is much higher when they are tightly-coupled to the giant planet. We compute \(\bar{\epsilon}\) for the KOI-812 system and find that the \(\bar{\epsilon}=1\) transition line agrees well with the SPOCK results. Given these numerical and analytic checks, we are confident in using SPOCK for our stability calculations.
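As an indication of how the MEGNO cross-check could be run, the sketch below uses REBOUND's built-in MEGNO support; the integrator choice, time step, and integration length are illustrative assumptions rather than the exact settings used for the KOI-812 map.

```python
import rebound

def megno_for_configuration(mstar, masses, periods, eccs, incs, n_orbits=1e4):
    """Time-averaged MEGNO for one sampled configuration (values near 2
    indicate regular motion; larger, growing values indicate chaos)."""
    sim = rebound.Simulation()
    sim.add(m=mstar)                           # star [Msun]
    for m, P, e, inc in zip(masses, periods, eccs, incs):
        sim.add(m=m, P=P, e=e, inc=inc)        # planets [Msun]; P in consistent units
    sim.move_to_com()
    P_in = min(periods)
    sim.integrator = "whfast"
    sim.dt = P_in / 20.0                       # ~20 steps per innermost orbit
    sim.init_megno()
    sim.integrate(n_orbits * P_in)
    return sim.calculate_megno()               # renamed sim.megno() in REBOUND >= 4
```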
2305.02312
AG3D: Learning to Generate 3D Avatars from 2D Image Collections
While progress in 2D generative models of human appearance has been rapid, many applications require 3D avatars that can be animated and rendered. Unfortunately, most existing methods for learning generative models of 3D humans with diverse shape and appearance require 3D training data, which is limited and expensive to acquire. The key to progress is hence to learn generative models of 3D avatars from abundant unstructured 2D image collections. However, learning realistic and complete 3D appearance and geometry in this under-constrained setting remains challenging, especially in the presence of loose clothing such as dresses. In this paper, we propose a new adversarial generative model of realistic 3D people from 2D images. Our method captures shape and deformation of the body and loose clothing by adopting a holistic 3D generator and integrating an efficient and flexible articulation module. To improve realism, we train our model using multiple discriminators while also integrating geometric cues in the form of predicted 2D normal maps. We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance. We validate the effectiveness of our model and the importance of each component via systematic ablation studies.
Zijian Dong, Xu Chen, Jinlong Yang, Michael J. Black, Otmar Hilliges, Andreas Geiger
2023-05-03T17:56:24Z
http://arxiv.org/abs/2305.02312v1
# AG3D: Learning to Generate 3D Avatars from 2D Image Collections ###### Abstract While progress in 2D generative models of human appearance has been rapid, many applications require 3D avatars that can be animated and rendered. Unfortunately, most existing methods for learning generative models of 3D humans with diverse shape and appearance require 3D training data, which is limited and expensive to acquire. The key to progress is hence to learn generative models of 3D avatars from abundant unstructured 2D image collections. However, learning realistic and complete 3D appearance and geometry in this under-constrained setting remains challenging, especially in the presence of loose clothing such as dresses. In this paper, we propose a new adversarial generative model of realistic 3D people from 2D images. Our method captures shape and deformation of the body and loose clothing by adopting a holistic 3D generator and integrating an efficient and flexible articulation module. To improve realism, we train our model using multiple discriminators while also integrating geometric cues in the form of predicted 2D normal maps. We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance. We validate the effectiveness of our model and the importance of each component via systematic ablation studies. ## 1 Introduction Generative models, like GANs [19], can be trained from large image collections, to produce photo-realistic images of objects [5, 29, 30, 31] and even clothed humans [2, 18, 20, 33, 34, 55]. The output, however, is only a 2D image and many applications require diverse, high-quality, virtual 3D avatars, with the ability to control poses and camera viewpoints, while ensuring 3D consistency. To enable the generation of 3D avatars, the research community has been studying generative models that can automatically produce 3D shapes of humans and/or clothing based on input parameters such as body pose and shape [9, 11, 38, 50]. Despite rapid progress, most existing methods do not yet consider texture and require accurate and clean 3D scans of humans for training, which are expensive to acquire and hence limited in quantity and diversity. In this paper, we develop a method that learns a generative model of 3D humans with texture from only a set of unstructured 2D images of various people in different poses wearing diverse clothing; that is, we learn a generative 3D human model from data that is ubiquitous on the Internet. Learning to generate 3D shapes and textures of articulated humans from such unstructured image data is a highly under-constrained problem, as each training instance has a different shape and appearance and is observed only once from a particular viewpoint and in a particular pose. Recent progress in 3D-aware GANs [6, 22, 48] shows impressive results in learning 3D geometry and appearance of rigid objects from 2D image collections. However, since humans are highly articulated and have more degrees of freedom to model, such methods struggle to generate realistic humans. By modeling articulation, recent work [4, 47] demonstrates the feasibility of learning articulated humans from image collections, allowing the generation of human shapes and images in desired poses, but only in limited quality and resolution. Recently, EVA3D [23] achieves higher resolution by representing humans as a composition of multiple parts, each of which are generated by a small network. 
However, there is still a noticeable gap between the generated and real humans in terms of appearance and, in particular, geometry. Additionally, the compositional design precludes modeling loose clothing that is not associated with a single body part, such as dresses shown in Fig. 5c. In this paper, we contribute a new method for learning 3D human generation from 2D image collections, which yields state-of-the-art image and geometry quality and naturally models loose clothing. Instead of representing humans with separate body parts as in EVA3D [23], we adopt a simple monolithic approach that is able to model the human body as well as loose clothing, while adding multiple discriminators that increase the fidelity of perceptually important regions and improve geometric details. **Holistic 3D Generation and Deformation:** To achieve the goal of high image quality while flexibly handling loose clothing, we propose a novel generator design. We model 3D humans holistically in a canonical space using a monolithic 3D generator and an efficient tri-plane representation [6]. An important aspect in attaining high-quality images is to enable fast volume rendering. To this end, we adapt the efficient articulation module, Fast-SNARF [8], to our generative setting and further accelerate rendering via empty-space skipping, informed by a coarse human body prior. Our articulation module is more flexible than prior methods that base deformations of the clothed body on SMPL [38], enabling it to faithfully model deformations for points that are far away from the body. **Modular 2D Discriminators:** We further propose multiple discriminators to improve geometric detail as well as the perceptually-important face region as we found that a single adversarial loss on rendered images is insufficient to recover meaningful 3D geometry in such a highly under-constrained setting. Motivated by the recent success of methods [25, 69] that exploit monocular normal cues [54, 65] for the task of 3D reconstruction, we explore the utility of normal information for guiding 3D geometry in the generative setting. More specifically, we discriminate normal maps rendered from our generative 3D model against 2D normal maps obtained from off-the-shelf monocular estimators [54] applied to 2D images of human subjects. We demonstrate that this additional normal supervision serves as useful and complementary guidance, significantly improving the quality of the generated 3D shapes. Furthermore, we apply separate face discriminators on both the image and normal branch to encourage more realistic face generation. We experimentally find that our method outperforms previous 3D- and articulation-aware methods by a large margin in terms of both geometry and texture quality, quantitatively (Table 1), qualitatively (Fig. 5) and through a perceptual study (Fig. 4). In summary, we contribute (i) a generative model of articulated 3D humans with SotA appearance and geometry, (ii) a new generator that is efficient and can generate and deform loose clothing, and (iii) several, specialized discriminators that significantly improve visual and geometric fidelity. We will release code and models. ## 2 Related Work **3D-aware Generative Adversarial Networks:** Generative adversarial networks (GANs) [19] achieve photorealistic image generation [5, 29, 30, 31] and show impressive results on the task of 2D human image synthesis [2, 18, 20, 33, 34, 55]. However, these 2D methods cannot guarantee 3D consistency [10, 33, 39] and do not provide 3D geometry. 
Several methods extend 2D GANs to 3D by combining them with 3D representations, including 3D voxels [44, 64], meshes [36, 58] and point clouds [1, 35]. Recently, many methods represent 3D objects as neural implicit functions [42, 46, 51, 61, 68]. Such representations are also used for 3D-aware generative image synthesis [6, 7, 22, 45, 48, 56, 57]. StyleSDF [48] replaces density with an SDF for better geometry generation and SotA methods like EG3D [6] introduce a tri-plane representation to improve rendering efficiency. Nevertheless, it is not straightforward to extend these methods to non-rigid articulated objects such as humans. In this paper, we propose a 3D- and articulation-aware generative model for clothed humans. **3D Human Models:** Parametric 3D human body models [3, 28, 49, 66, 38] are able to synthesize minimally clothed human shapes by deforming a template mesh. Extending these mesh models to generate 3D clothing or clothed humans is challenging [40]. In the case of meshes, the geometry is restricted to a fixed mesh topology and large deviations from the template mesh are hard to model. To overcome this limitation, methods such as SMPLicit [11] and gDNA [9] propose to build a 3D generative model of clothed humans based on implicit surface representations, either by adding an implicit garment to the SMPL body or by learning a multi-subject implicit representation with corresponding skinning weights. The main problem of all aforementioned approaches, however, is their reliance on 3D ground truth: their training requires a large number of complete and registered 3D scans of clothed humans in different poses, which are typically acquired using expensive 3D body scanners. Several methods [13, 17, 27, 32, 53, 62, 63, 72] combine NeRF with human priors to enable 3D human reconstruction from multi-view data or even monocular videos. Nevertheless, their proposed human representations can only be utilized to represent human avatars for a single subject, wearing a specific garment. Recently, some methods have been proposed to learn generative models of 3D human appearance from a collection of 2D images. ENARF-GAN [47] and GNARF [4] leverage 3D human models to learn a 3D human GAN, but they still fail to produce high-quality human images. The concurrent work EVA3D [23] achieves high-resolution human image generation by introducing a compositional part-based human representation. However, none of these methods including other concurrent arXiv papers [26, 67, 71] are able to generate and deform loose clothing, and their geometry typically suffers from noisy artifacts. In contrast, our method generates both high-quality geometry and appearance of diverse 3D-clothed humans even wearing loose clothing, with full control over the pose and appearance. We empirically demonstrate the benefits of our method over the more complex EVA3D model [23] in Section 4.2. A comparison to the recent arXiv papers [26, 67, 71] is not possible since the models and code have not been released. **3D Shape from 2D Normals:** Several methods predict normals from a single image, for general objects [14, 15, 24, 16] or clothed humans [54, 65]. These predicted 2D normal cues can be exploited to guide 3D reconstruction using neural field representations. For instance, MonoSDF [69] leverages predicted normals to improve 3D object reconstruction from sparse views. Similarly, SelfRecon [25] uses a normal reconstruction loss to reconstruct a human avatar from a monocular video. 
PIFuHD [54] and ICON [65] predict normal maps as additional input to support single-view 3D human reconstruction. In this work, we demonstrate that monocular 2D normal cues are useful for learning a generative 3D model of articulated objects. ## 3 Method Given a large 2D image collection, our goal is to learn a generative model of diverse 3D human avatars with realistic appearance and geometry, while enabling control over pose and identity. An overview of our method is shown in Fig. 2. In this section, we first introduce an efficient and articulation-aware 3D human generator (Section 3.1) which generates the appearance and shape in canonical space and uses a deformation module to warp into posed space via a learned continuous deformation field. Next, we describe our rendering module that is accelerated by an empty space skipping strategy which leverages the SMPL body prior. To enable fast training, we use a super-resolution module to lift feature maps to high-resolution images. We optimize the generator using a combination of adversarial losses (Section 3.2) and an Eikonal loss [21]. While prior work uses a single discriminator formulation, we show that employing several, specialized discriminators improves visual fidelity. To this end, we define discriminators that reason at the level of the whole body and locally at the face region, respectively. We additionally introduce an adversarial normal loss, which significantly improves the quality of the generated geometry. ### Holistic 3D Avatar Generator **Canonical Generator:** Given a latent vector \(\mathbf{z}\in\mathbb{R}^{n_{z}}\) and pose parameters \(\mathbf{p}\in\mathbb{R}^{n_{p}}\), our method first generates 3D human appearance and shape in canonical space (see Fig. 2). Here, we leverage pose conditioning to model pose-dependent effects. For efficient rendering, the canonical generator builds on the tri-plane representation proposed in ConvONet [52] and EG3D [6] to model 3D features. These are then decoded by an MLP to predict the canonical shape and appearance in 3D space. We represent geometry using a signed distance field (SDF). Since existing fashion datasets are imbalanced and contain mostly frontal views, learning correct 3D geometry from such datasets is difficult. Following [23], we exploit a human shape prior in the form of canonical SMPL [38]. Specifically, for every query point \(\mathbf{x}\) in canonical space, we predict an SDF offset \(\Delta d(\mathbf{x},\mathbf{z},\mathbf{p})\) from the base shape to model details such as hair and clothing. The SDF value \(d(\mathbf{x},\mathbf{z},\mathbf{p})\) is then calculated as \[d=d(\mathbf{x},\mathbf{z},\mathbf{p})=d_{\text{SMPL}}(\mathbf{x})+\Delta d( \mathbf{x},\mathbf{z},\mathbf{p}), \tag{1}\] where \(d_{\text{SMPL}}(\mathbf{x})\) is the signed distance to the canonical SMPL surface. Unlike [23], to compute \(d_{\text{SMPL}}(\mathbf{x})\) efficiently, we represent the SDF as a low-resolution voxel grid (\(128\times 128\times 32\)), where the value of every grid point is the (pre-computed) distance to the SMPL mesh. We then query \(d_{\text{SMPL}}(\mathbf{x})\) by trilinear interpolating the SDF voxel grid. We also compute normals in canonical space. The normal \(\mathbf{n}\) at a certain canonical point \(\mathbf{x}\) is computed as the spatial gradient of the signed distance function at that point: \[\mathbf{n}=\nabla_{\mathbf{x}}d(\mathbf{x},\mathbf{z},\mathbf{p}). \tag{2}\] The canonical appearance is represented by a 3D texture field \(\mathbf{c}\). 
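Before moving on, a minimal sketch of the canonical SDF query described above, i.e., trilinear interpolation of a precomputed SMPL SDF voxel grid plus a predicted offset, may help make Eq. (1) concrete. This is our own illustration in PyTorch; the tensor layouts and function names are assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def query_canonical_sdf(x, sdf_grid, bbox_min, bbox_max, delta_d):
    """Evaluate d(x, z, p) = d_SMPL(x) + Delta d(x, z, p)  (cf. Eq. 1).

    x:        (N, 3) canonical query points
    sdf_grid: (1, 1, D, H, W) precomputed signed distances to the SMPL mesh
    bbox_min, bbox_max: (3,) bounds of the voxel grid in canonical space
    delta_d:  (N, 1) offsets predicted by the tri-plane MLP for these points
    """
    # Normalize query points to [-1, 1], as required by grid_sample.
    x_norm = 2.0 * (x - bbox_min) / (bbox_max - bbox_min) - 1.0
    grid = x_norm.view(1, 1, 1, -1, 3)                     # (1, 1, 1, N, 3)
    # For 5-D inputs, mode="bilinear" performs trilinear interpolation.
    d_smpl = F.grid_sample(sdf_grid, grid, mode="bilinear", align_corners=True)
    d_smpl = d_smpl.view(-1, 1)                            # (N, 1)
    return d_smpl + delta_d
```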
We also predict features \(\mathbf{f}\) that are used to guide the super-resolution module (described later). We denote the entire mapping from 3D point \(\mathbf{x}\), latent vector \(\mathbf{z}\) and pose condition \(\mathbf{p}\) to SDF \(d\), normal \(\mathbf{n}\), color \(\mathbf{c}\) and color features \(\mathbf{f}\) in _canonical space_ as follows: \[\mathbf{g}:\mathbb{R}^{3}\times\mathbb{R}^{n_{z}}\times\mathbb{R}^{n_{p}} \rightarrow\mathbb{R}\times\mathbb{R}^{3}\times\mathbb{R}^{3}\times\mathbb{R}^{n_{f}} \tag{3}\] \[(\mathbf{x},\mathbf{z},\mathbf{p}) \mapsto(d,\mathbf{n},\mathbf{c},\mathbf{f}).\] **Deformer:** To enable animation and to learn from posed images, we require the appearance and 3D shape in _posed space_. In the following we denote quantities in posed space as \((\cdot)^{\prime}\). Given the bone transformation matrix \(\mathbf{B}_{i}\) for joint \(i\in\{1,...,n_{b}\}\), a canonical point \(\mathbf{x}\) is transformed into its deformed version \(\mathbf{x}^{\prime}\) via \[\mathbf{x}^{\prime}=\sum_{i=1}^{n_{b}}w_{i}\,\mathbf{B}_{i}\,\mathbf{x} \tag{4}\] Here, the canonical LBS weight field \(\mathbf{w}:\mathbb{R}^{3}\rightarrow\mathbb{R}^{n_{b}}\), with \(\mathbf{x}\mapsto(w_{1},...,w_{n_{b}})\) and \(n_{b}\) the number of joints, weights the influence of each bone's \(\mathbf{B}_{i}\) transformation onto the deformed location \(\mathbf{x}^{\prime}\). This weight field is represented by a low-resolution voxel grid. The normal at the deformed point \(\mathbf{x}^{\prime}\) is given by \[\mathbf{n}^{\prime}=\frac{(\sum_{i=1}^{n_{b}}w_{i}\,\mathbf{R}_{i}\,)^{-T}\mathbf{n}}{\|(\sum_{i=1}^{n_{b}}w_{i}\,\mathbf{R}_{i}\,)^{-T}\mathbf{n}\|} \tag{5}\] where \(\mathbf{R}_{i}\) is the rotation component of \(\mathbf{B}_{i}\) [60]. We leverage Fast-SNARF [8] to efficiently warp points _backwards_ from posed space \(\mathbf{x}^{\prime}\) to canonical space \(\mathbf{x}\) via efficient iterative root finding [8]. The SDF value \(d^{\prime}\), color \(\mathbf{c}^{\prime}\) and feature \(\mathbf{f}^{\prime}\) at the deformed point are obtained by evaluating the generator at the corresponding \(\mathbf{x}\). In contrast to [8], which focuses on reconstruction tasks and learns skinning weights on the fly, we constrain the notoriously difficult adversarial training by averaging the skinning weights of the nearest vertices on the canonical SMPL mesh. **Volume Renderer:** To render a pixel, we follow [43] and cast ray \(\mathbf{r}^{\prime}\) from the camera center \(\mathbf{o}^{\prime}\) along its view direction \(\mathbf{v}^{\prime}\). We use two-pass importance sampling of \(M\) points in posed space \(\mathbf{x}^{\prime}_{i}=\mathbf{o}^{\prime}+t_{i}\mathbf{v}^{\prime}\) and predict their SDF values \(d^{\prime}_{i}\), colors \(\mathbf{c}^{\prime}_{i}\), color features \(\mathbf{f}^{\prime}_{i}\) and normals \(\mathbf{n}^{\prime}_{i}\). We convert SDF values \(d^{\prime}_{i}\) to densities \(\sigma^{\prime}_{i}\) via the method of StyleSDF [48]. The color of each pixel in the rendered image \(I\) is computed via numerical integration [43]: \[I(\mathbf{r})=\sum_{i=1}^{M}\alpha_{i}\prod_{j<i}(1-\alpha_{j})\mathbf{c}^{\prime}_{i},\quad\alpha_{i}=1-\exp(-\sigma^{\prime}_{i}\delta_{i}) \tag{6}\] where \(\delta_{i}\) is the distance between samples. 3D normals \(N(\mathbf{r})\) and feature vectors \(F(\mathbf{r})\) are rendered accordingly.
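A compact sketch of the compositing step in Eq. (6), given per-sample densities along each ray, is shown below. This is our own illustration in PyTorch; the exact SDF-to-density conversion of StyleSDF is abstracted away and the names are not taken from the released code.

```python
import torch

def composite_ray(sigma, values, deltas):
    """Alpha-composite per-sample quantities along rays (cf. Eq. 6).

    sigma:  (R, M)    densities converted from SDF values
    values: (R, M, C) per-sample colors / normals / features
    deltas: (R, M)    distances between adjacent samples
    returns (R, C)    composited values per ray
    """
    alpha = 1.0 - torch.exp(-sigma * deltas)          # opacity of each sample
    # Transmittance: product of (1 - alpha_j) over the samples in front (j < i).
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha[:, :-1]], dim=-1),
        dim=-1)
    weights = alpha * trans                           # (R, M)
    return (weights.unsqueeze(-1) * values).sum(dim=1)
```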
To accelerate rendering and to reduce memory, we take advantage of the geometric prior of the SMPL model and define the region within a predefined distance threshold to the SMPL surface as the occupied region. For points sampled outside of this region, we set the density to zero. **Super Resolution:** Although the SMPL-guided volume rendering is more efficient than previous approaches, it is still slow and requires a large amount of memory to render at high resolution. Therefore, we perform volume rendering at a sufficient resolution (\(256^{2}\) pixels) to guarantee good rendering of the normal image \(N\) and rely on a super-resolution module [6] to upsample the image feature map \(F\) and color \(I\) to the final image \(I^{+}\) of size \(512^{2}\) pixels.

Figure 2: **Method Overview.** _Holistic 3D Human Generation:_ Given a latent vector \(\mathbf{z}\), our method generates human shape \(d\) and appearance \(\mathbf{c}\) in canonical space. In addition, we compute surface normals \(\mathbf{n}\) via the spatial derivatives of the canonical shape which is represented as an SDF. These canonical representations are then posed into the target body pose \(\mathbf{p}\) via a flexible deformer and then rendered from the target viewpoint. The rendered images are further super-resolved by \(2\times\). _Adversarial Training:_ We optimize the generator and the super-resolution module using multiple discriminators. In addition to an image discriminator operating on the images, we improve geometry by introducing a normal discriminator that compares our rendered normal maps with the normals of real images predicted by an off-the-shelf normal estimator. To further improve the quality of the perceptually important face region, we add normal and image discriminators for the face region.

### Training We train our model on a large dataset of 2D images using adversarial training, leveraging a combination of multiple discriminators and an Eikonal loss. **Image Discriminator:** The first discriminator \(D_{\text{image}}\) compares full images generated by our method to real images. Following EG3D [6], we apply the discriminator at both resolutions: We upsample our low resolution rendering \(I\), concatenate it with the super-resolved image \(I^{+}\), and feed it to a StyleGANv2 [30] discriminator. For real images \(\bar{I}\), we downsample and re-upsample them, and concatenate the results with the original image as input to the discriminator. **Face Discriminator:** We observe that the generated face region suffers from artifacts due to the low resolution of faces within the full-body image. Motivated by 2D human GANs [20], we add a small face discriminator \(D_{\text{face,image}}\). Based on estimated SMPL head keypoints, we crop the head regions of our high resolution output \(I^{+}\) and real data \(\bar{I}\) and feed them into the discriminator for comparison. **Normal Discriminator:** A central goal of our work is to attain geometrically correct 3D avatars. To achieve this, we propose to use geometric cues present in 2D normal maps to guide the adversarial learning towards meaningful 3D geometry. To this end, we use an additional normal discriminator \(D_{\text{normal}}\). This normal discriminator compares the predicted 2D normal maps \(N\) to 2D normal maps \(\bar{N}\) of real images \(\bar{I}\) predicted by the 2D normal estimator from PIFuHD [54].
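As an aside, the dual-resolution input preparation for the image discriminator described above can be sketched in a few lines. This is our own illustration in PyTorch, following the textual description rather than the released code; shapes assume three-channel images.

```python
import torch
import torch.nn.functional as F

def disc_input_fake(low_res_render, super_res_img):
    """Concatenate the upsampled low-res rendering with the super-resolved image."""
    up = F.interpolate(low_res_render, size=super_res_img.shape[-2:],
                       mode="bilinear", align_corners=False)
    return torch.cat([up, super_res_img], dim=1)          # (B, 6, 512, 512)

def disc_input_real(real_img, low_res=256):
    """Down- and re-upsample real images, then concatenate with the originals."""
    down = F.interpolate(real_img, size=(low_res, low_res),
                         mode="bilinear", align_corners=False)
    re_up = F.interpolate(down, size=real_img.shape[-2:],
                          mode="bilinear", align_corners=False)
    return torch.cat([re_up, real_img], dim=1)            # (B, 6, 512, 512)
```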
Analogously to the image branch, we use an additional discriminator \(D_{\text{face,normal}}\) to further enhance the geometric fidelity of the generated faces. We refer the reader to the Sup. Mat. for implementation details. **Eikonal Loss:** To regularize the learned SDFs, we apply an Eikonal loss [21] \(\mathcal{L}_{\text{eik}}=\mathbb{E}_{\mathbf{x}_{i}}\left(\|\nabla(\Delta d(\mathbf{x}_{i}))\|-1\right)^{2}\) to the canonical correspondences \(\{\mathbf{x}_{i}\}\) of sampled points \(\{\mathbf{x}^{\prime}_{i}\}\). **Training:** We train our generator and discriminators jointly using the non-saturating GAN objective with R1-regularization [41] and an Eikonal loss. Please refer to the Sup. Mat. for more training details. ## 4 Experiments In our experiments, we first demonstrate the quality of generated samples and then compare our method to other SotA baselines. In addition, we provide an ablation study to investigate the importance of each component in our model. **Datasets:** _DeepFashion_ [37] contains an unstructured collection of fashion images of different subjects wearing various types of clothing. We use the curated subset with 8k images from [23] as our training data. _UBCFashion_ [70] contains 500 sequences of fashion videos with subjects wearing loose clothing such as skirts. Following EVA3D [23], we treat these videos as individual images without assuming temporal information. Pre-processing details can be found in the Sup. Mat.

Figure 3: **Qualitative Results: 3D Human Generation. We generate 3D human appearance and shape, and render the resulting 3D representations using different body poses and from different viewpoints. In addition, we show virtual people generated by interpolating between latent codes. Overall, our synthesized humans exhibit reasonable appearance and geometric quality, remain consistent across different poses and views, and smoothly interpolate when varying the latent code \(\mathbf{z}\).**

Figure 4: **User Preference. We conduct a perceptual study with approximately 4000 samples and report how often participants preferred shapes and images generated by our method or those generated by EVA3D [23].**

**Metrics:** We measure the diversity and quality of generated images using the _Frechet Inception Distance (FID)_ between 50k generated images and available real images, denoted by \(\text{FID}_{\text{image}}\). To measure the generated face quality, we report an additional FID specifically for the face region, denoted by \(\text{FID}_{\text{face}}\). Furthermore, we evaluate the quality of the synthesized geometry by computing the FID between our rendered normals and pseudo-GT normal maps predicted by [54] (\(\text{FID}_{\text{normal}}\)). We use an inception network [59] pre-trained on ImageNet [12] for all FID computation. In addition, we conduct a _Perceptual User Study_ among 50 participants with 4000 samples and report how often participants preferred a particular method over ours. **Baselines:** We compare our method to four baseline methods: EG3D [6], StyleSDF [48], ENARF-GAN [47] and EVA3D [23]. EG3D and StyleSDF are SotA, 3D-aware, generative models of rigid objects. For comparison, these two methods are trained on the aforementioned human datasets. Since these two methods do not model articulation, they have to learn it implicitly. ENARF-GAN and EVA3D additionally model articulation for 3D human generation. The quantitative FID results of all baseline methods are directly taken from the experiment in EVA3D [23].
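For readers who want to reproduce the FID-based protocol above, one possible computation is sketched below using the torchmetrics implementation; the choice of library is our assumption, since the paper does not state which FID implementation was used.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

# FID between generated and real image batches, using InceptionV3 features.
# normalize=True expects float images in [0, 1] with shape (B, 3, H, W).
fid = FrechetInceptionDistance(feature=2048, normalize=True)

def compute_fid(fid_metric, real_loader, fake_loader):
    for real in real_loader:
        fid_metric.update(real, real=True)
    for fake in fake_loader:
        fid_metric.update(fake, real=False)
    return fid_metric.compute().item()

# The same metric can be applied to cropped face regions (FID_face) or to
# rendered vs. predicted normal maps (FID_normal) by swapping the inputs.
```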
In addition, we evaluate \(\text{FID}_{\text{face}}\) and \(\text{FID}_{\text{normal}}\) on EVA3D based on their released code and trained model weights. ### Quality of 3D Human Generation We show our qualitative results in Fig. 3. More results can be found in the Sup. Mat. Overall, our method generates realistic human images with faithful details such as clothing patterns, face and hair, and meaningful 3D geometry even with fine structures such as hair and shoe heels. Our method further enables control over the generation as follows. **View Control:** As shown in Fig. 3(a), by learning humans in 3D space, our method can generate 3D-consistent high-quality images and geometry from varied viewpoints. **Pose Control:** The generated 3D humans can also be reposed into unseen poses as shown in Fig. 3(b). The images and geometry in different poses are consistent due to the explicit model of human articulation. **Interpolation:** Our method learns a smooth latent space of 3D human shape and appearance. As shown in Fig. 3(c), our method yields smooth transitions of appearance and shape even when interpolating the latent codes of two subjects with different gender and clothing styles. ### Comparison to SotA Table 1 summarizes our quantitative comparisons on both DeepFashion and UBCFashion datasets. Since the SotA method EVA3D [23] outperforms other baselines by a significant margin, we focus our discussion on the comparison with EVA3D only. More comparisons with other baselines can be found in the Sup. Mat.

\begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{DeepFashion} & \multicolumn{3}{c}{UBCFashion} \\ & \(\text{FID}\downarrow\) & \(\text{FID}_{\text{normal}}\downarrow\) & \(\text{FID}_{\text{face}}\downarrow\) & \(\text{FID}\downarrow\) & \(\text{FID}_{\text{normal}}\downarrow\) & \(\text{FID}_{\text{face}}\downarrow\) \\ \hline \hline EG3D & \(26.38^{*}\) & - & - & \(23.95^{*}\) & - & - \\ StyleSDF & \(92.40^{*}\) & - & - & \(18.52^{*}\) & - & - \\ ENARF-GAN & \(77.03^{*}\) & - & - & - & - & - \\ EVA3D & \(15.91^{*}\) & - & - & \(12.61^{*}\) & - & - \\ \hline EVA3D (public) & 20.45 & 30.81 & 17.21 & 19.81 & 49.29 & 54.42 \\ Ours & **10.93** & **20.38** & **14.79** & **11.04** & **18.79** & **15.83** \\ \hline \hline \end{tabular} \end{table} Table 1: **Quantitative Comparison with SotA Methods**. We evaluate FID of full images, cropped face images and normal maps generated by our method and the SotA method EVA3D (public) [23] using their released trained models. For reference, we also report quantitative results from the EVA3D paper above the separation line marked by *.

**Image Quality:** Our method achieves better quantitative results than EVA3D in terms of \(\text{FID}_{\text{image}}\) and \(\text{FID}_{\text{face}}\) on both datasets. This improvement is confirmed by our user study in Fig. 4. Notably, in \(81.4\%\) of the cases, participants consider our generated images to be more realistic than EVA3D. A qualitative comparison is shown in Fig. 5(a). Our method generates overall sharper images with more details due to the use of a holistic 3D generator, and synthesizes more realistic faces due to our face discriminators. Our improvements are particularly pronounced when considering side views. As shown in Fig. 5(b), our method generates sharp and meaningful images also from the side where EVA3D's image quality significantly degrades. This is a consequence of EVA3D's pose-guided sampling strategy. As discussed in their paper, EVA3D had to increase the dataset's frontal bias during training to achieve reasonable geometry and face quality. We hypothesize that this requirement is due to the limited capacity of the lightweight part models. As a consequence, EVA3D overfits more to frontal views and generalizes less well. In contrast, our efficient articulation and rendering modules allow us to exploit a single holistic generator and our face and normal discriminators enable us to directly sample from the data distribution which leads to better generalization. Interestingly, the (non-animatable) generative model EG3D achieves reasonable FID despite not modeling articulation. This is because the FID evaluation only considers training poses and views, see also Sup. Mat. **Geometry:** Our method yields significantly better geometry compared to EVA3D, as evidenced by the improvement in \(\text{FID}_{\text{normal}}\) in Table 1 and the perceptual study in Fig. 4. Based on our qualitative results, our geometry is more realistic and detailed, in particular on faces. In contrast, noise and holes can be observed around the shapes generated by EVA3D, despite their surface representation and regularization. We attribute this improvement to our normal discriminators, which provide strong geometric cues. **Loose Clothing:** As shown in Fig. 5(c), our method outperforms EVA3D in modeling loose clothing. Due to its compositional nature, EVA3D is prone to generating artifacts between the legs. In contrast, our holistic representation generates loose clothing without discontinuity artifacts. **Efficiency:** Our method is more efficient than EVA3D in rendering images and normal maps with the same image and ray sampling resolution. For an image resolution of \(256^{2}\) and with 28 sample points per ray, our method renders normals and images together at 10.5 FPS while EVA3D runs at 5.5 FPS. With a 2D super-resolution module, our method is more than three times faster than EVA3D when rendering images of \(512^{2}\) (9.5 FPS vs 3 FPS), while achieving better performance in terms of geometry and appearance. More details can be found in Sup. Mat.

\begin{table} \begin{tabular}{l c c c} \hline \hline Method & \(\text{FID}\downarrow\) & \(\text{FID}_{\text{normal}}\downarrow\) & \(\text{FID}_{\text{face}}\downarrow\) \\ \hline Ours & **10.93** & **20.38** & 14.79 \\ w/o normal GAN & 11.15 & 32.17 & **14.35** \\ w/o face GAN & 11.71 & 23.96 & 20.88 \\ \hline \hline \end{tabular} \end{table} Table 2: **Ablation. We compare our method and ablated baselines in which we remove individual discriminators.**

Figure 5: **Qualitative Comparison to EVA3D. We show random samples of our method and the SotA method EVA3D [23]. Our method achieves better image and shape quality, degrades more gracefully at side views, and better models loose clothing.**

Figure 6: **Ablation of the Normal Discriminator. Our normal discriminator effectively improves the generated geometry while preserving appearance quality.**

### Ablation Study **Normal Discriminator:** Our normal discriminator serves an important role in improving the realism of the generated geometry. Comparing our model to ablated versions where we remove the normal discriminator (w/o normal GAN), we observe a significant \(\text{FID}_{\text{normal}}\) improvement (see Table 2). As shown by the qualitative results in Fig. 6, the normal GAN effectively removes holes and noise on the generated surface, especially on faces, while preserving image quality.
**Face Discriminator:** As expected, when training without face discriminators, we observe a large drop in FID\({}_{\text{face}}\) (see Table 2). Given that faces are low resolution and hard to generate, our adversarial loss on faces forces the generator to focus on this local region and thus achieves a more realistic generation as shown in Fig. 7. **Deformer:** To test the importance of our Fast-SNARF based deformation module, we compare our model to a SMPL nearest-neighbor-based deformer (denoted by _Ours w/SMPL_), where points are deformed based on the skinning weights of their \(K\) nearest SMPL vertices in posed space. As shown in Fig. 8, only our method can deform the skirts without splitting them. This is due to our articulation module being able to derive meaningful deformations for points far away from the SMPL surface. In contrast, our ablated baselines, with different choices of \(K\), suffer from discontinuity artifacts as they only provide meaningful deformation at regions close to the SMPL surface. Similar artifacts can be observed in EVA3D's results, which we hypothesize stem from their part-based model. ### Limitations Since each training instance is observed only in one pose, the association of pixels to body parts cannot be uniquely determined. Hence, our model sometimes generates wrong clothing patterns under arms, or at hands close to the torso. Future work should investigate techniques to guide association, such as 2D correspondence predictions. Moreover, samples from generative models reflect the biases present in their training data. The 2D image collections that we used for training focus on fashion images and lack diversity in skin tone, body shape, and age. Our work should be viewed as a methodological proof of concept and contains no mechanisms to combat these biases. To avoid biases future research and deployable systems should i) be trained on more diverse data or ii) use explicit de-biasing. Further limitations are discussed in the Sup. Mat. ## 5 Conclusion In this paper we contribute a new controllable generative 3D human model that is learned from unstructured 2D image collections alone and does not leverage any 3D supervision. Our model synthesizes high-quality 3D avatars with fine geometric details and models loose clothing more naturally than prior work. We achieve this through a new generator design that combines a holistic 3D generator with an efficient and flexible articulation module. Furthermore, we show that employing several, specialized discriminators that operate on the different branches (RGB and normals) and regions (fully body and facial region), leads to higher visual fidelity. We experimentally demonstrate that our method advances state-of-the-art in learning 3D human generation from 2D image collections in terms of both appearance and geometry and that it is the first generative model of 3D humans that can handle the deformations of free-flowing and loose garments and long hair. Figure 8: **Ablation Study of the Deformer. Results with loose clothing in novel poses, generated by EVA3D and our method with different choices for the articulation module.** Figure 7: **Ablation of the Face Discriminator. Our face discriminator effectively improves generated face quality.** **Acknowledgements:** Zijian Dong was supported by the BMWi in the project KI Delta Learning (project number 19A19013O) and the ERC Starting Grant LEGO-3D (850533). 
Andreas Geiger is a member of the Machine Learning Cluster of Excellence, funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC number 2064/1 - Project number 390727645. Xu Chen was supported by the Max Planck ETH Center for Learning Systems. This project was supported by the ERC Starting Grant LEGO-3D (850533), the BMWi project KI Delta Learning (project number 19A19013O) and the DFG EXC number 2064/1 - project number 390727645. We thank Kashyap Chitta, Katja Schwarz, Takeru Miyato and Seyedmorteza Sadat for their feedback, and Tsvetelina Alexiadis for her help with the user study. **Disclosure:** MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. MJB's research was performed solely at, and funded solely by, the Max Planck.
2303.11786
Skeleton Regression: A Graph-Based Approach to Estimation with Manifold Structure
We introduce a new regression framework designed to deal with large-scale, complex data that lies around a low-dimensional manifold with noises. Our approach first constructs a graph representation, referred to as the skeleton, to capture the underlying geometric structure. We then define metrics on the skeleton graph and apply nonparametric regression techniques, along with feature transformations based on the graph, to estimate the regression function. We also discuss the limitations of some nonparametric regressors with respect to the general metric space such as the skeleton graph. The proposed regression framework suggests a novel way to deal with data with underlying geometric structures and provides additional advantages in handling the union of multiple manifolds, additive noises, and noisy observations. We provide statistical guarantees for the proposed method and demonstrate its effectiveness through simulations and real data examples.
Zeyu Wei, Yen-Chi Chen
2023-03-19T21:45:40Z
http://arxiv.org/abs/2303.11786v2
# Skeleton Regression: A Graph-Based Approach to Estimation with Manifold Structure ###### Abstract We introduce a new regression framework designed to deal with large-scale, complex data that lies around a low-dimensional manifold. Our approach first constructs a graph representation, referred to as the _skeleton_, to capture the underlying geometric structure. We then define metrics on the skeleton graph and apply nonparametric regression techniques, along with feature transformations based on the graph, to estimate the regression function. In addition to the included nonparametric methods, we also discuss the limitations of some nonparametric regressors with respect to the general metric space such as the skeleton graph. The proposed regression framework allows us to bypass the curse of dimensionality and offers additional advantages: it can handle the union of multiple manifolds and is robust to additive noise and noisy observations. We provide statistical guarantees for the proposed method and demonstrate its effectiveness through simulations and real data examples. **Keywords:** manifold learning, nonparametric regression, kernel regression, spline ## 1 Introduction Many modern datasets are geometrically structured in that the covariates lie around a low-dimensional manifold embedded inside a high-dimensional vector space. Among many geometric data analysis tasks, the estimation of functions defined on manifolds has been extensively studied in the statistical literature. A classical approach to explicitly account for geometric structure takes two steps: map the data to the tangent plane or some embedding space and then run regression methods with the transformed data. This approach is pioneered by Principal Component Regression (PCR) (Massy, 1965) and Partial Least Squares (PLS) (Wold, 1975). Aswani et al. (2011) innovatively relate the regression coefficients to exterior derivatives. They propose to learn the manifold structure through local principal components and then constrain the regression to lie close to the manifold by solving a weighted least-squares problem with Ridge regularization. Cheng and Wu (2013) present the Manifold Adaptive Local Linear Estimator for the Regression (MALLER) that performs the local linear regression (LLR) on a tangent plane estimate. However, because those methods directly exploit the local manifold structures in an exact sense, they are not robust to variations in the covariates that perturb them away from the true manifold structure. Many other manifold estimation approaches exist in the statistical literature. Guhaniyogi and Dunson (2016) utilize random compression of the feature vector in combination with Gaussian process regression. Zhang et al. (2013) follow a divide-and-conquer approach that computes an independent kernel Ridge regression estimator for each randomly partitioned subset. Other nonparametric regression approaches such as kernel machine learning (Scholkopf and Smola, 2002), manifold regularization (Belkin et al., 2006), and the spectral series approach (Lee and Izbicki, 2016) also account for the manifold structure of the data. However, those methods still suffer from the curse of dimensionality with high-dimensional covariates. In addition to data with manifold-based covariates, manifold learning has been applied to other types of manifold-related data. Marzio et al. (2014) develop nonparametric smoothing for regression when both the predictor and the response variables are defined on a sphere. Zhang et al.
(2019) deal with the presence of grossly corrupted manifold-valued responses. Green et al. (2021) propose the Principal Components Regression with Laplacian-Eigenmaps (PCR-LE) that projects responses onto the eigenvectors output by Laplacian Eigenmaps. Lin and Yao (2020) address data with functional predictors that reside on a finite-dimensional manifold with contamination. In this work, we focus on manifold-based covariates and may incorporate other types of manifold-related data in the future. The main goal of this work is to estimate scalar responses on manifold-structured covariates in a way that bypasses the curse of dimensionality. This is achieved by proposing a new framework that utilizes graphs and nonparametric regression techniques. Our framework follows the two-step idea: first, we learn a graph representation, which we call the _skeleton_, of the manifold structure based on the methods from Wei and Chen (2023) and project the covariates onto the skeleton. Then we apply different nonparametric regression methods to the skeleton-projected data. We give brief descriptions of the relevant nonparametric regression methods below. Kernel smoothing is a widely used technique that estimates the regression function as locally weighted averages with the kernel as the weighting function. Pioneered by Nadaraya (1964) and Watson (1964) with the famous Nadaraya-Watson estimator, this technique has been widely used and extended by recent works (Fan and Fan (1992), Hastie and Loader (1993), Fan et al. (1996), Kpotufe and Verma (2017)). Splines (Hastie et al. (2009), Friedman (1991)) are popular nonparametric regression constructs that take the derivative-based measure of smoothness into account when fitting a regression function. Moreover, k-Nearest-Neighbors (kNN) regression (Altman, 1992; Hastie et al., 2009) has a simple form but is powerful and widely used in many applications. These techniques are incorporated into our proposed regression framework. In recent years, many nonparametric regression techniques have been shown to adapt to the manifold structure of the data, with convergence rates that depend only on the intrinsic dimension of the data space. Specifically, the kNN regressor and the kernel regressor have been shown to be manifold-adaptive with proper parameter tuning procedures (Kpotufe, 2009, 2011; Kpotufe and Garg, 2013; Kpotufe and Verma, 2017). The proposed regression framework in this work also adapts to the manifold, as the nonparametric regression models fitted on a graph are dimension-independent. This framework has several additional advantages such as the ability to account for predictors from distinct manifolds and being robust to additive noise and noisy observations. _Outline._ We start by presenting the procedures of the skeleton regression framework in Section 2. In Section 3, we apply nonparametric regression techniques to the constructed skeleton graph along with theoretical justifications. In Section 4, we present some simulation results for skeleton regression and demonstrate the effectiveness of our method on real datasets in Section 5. In Section 6, we conclude the paper and point out some directions for future research. Footnote 1: The R implementation of the proposed methods can be accessed at https://github.com/JerryBubble/skeletonMethods and the Python implementation can be accessed at https://pypi.org/project/skeleton-methods/.
## 2 Skeleton Regression Framework In this section, we introduce the skeleton regression framework. Given design vectors \(\left\{\mathbf{x}_{i}\right\}_{i=1}^{n}\) where \(\mathbf{x}_{i}\in\mathcal{X}\subseteq\mathbb{R}^{d}\) for each \(i\) and the corresponding responses \(\left\{Y_{i}\right\}_{i=1}^{n}\) in \(\mathbb{R}\), a traditional regression approach is to estimate the regression function \(m(\mathbf{x})=\mathbb{E}(Y|X=\mathbf{x})\). However, the ambient dimension \(d\) can be large while the covariates are distributed on a low-dimensional manifold structure. In this case, \(\mathcal{X}\) can be the union of several disjoint components with different manifold structures, and the regression function can have discontinuous changes from one component to another. To handle such manifold-structured data, we approach the regression task by first representing the sample covariate space with a graph, which we call the skeleton, to summarize the manifold structures. We then focus on the regression function over the skeleton graph, which incorporates the covariate geometry in a dimension-independent way. We illustrate our regression framework on simulated TwoMoon data in Figure 1. The covariates of the TwoMoon data consist of two 2-dimensional clumps with intrinsically 1-dimensional curve structure, and the regression response increases polynomially with the angle and the radius (Figure 1 (a)). We construct the skeleton representation to summarize the geometric structure (Figure 1 (b,c)) and project the covariates onto the skeleton. The regression function on the skeleton is estimated using kernel smoothing (Section 3.1, illustrated in Figure 1 (d)) and linear spline (Section 3.3, illustrated in Figure 1 (e)). The estimated regression function can be used to predict new projected covariates.

Figure 1: Skeleton Regression illustrated by Two Moon Data (d=2).

We summarize the overall procedure in Algorithm 1. ``` Input: Observations \((\mathbf{x}_{1},Y_{1}),\ldots,(\mathbf{x}_{n},Y_{n})\). 1. **Skeleton Construction.** Construct a data-driven skeleton representation of the covariates, preferably assisted with subject knowledge. 2. **Data Projection.** Project the covariates onto the skeleton. 3. **Skeleton Regression Function Estimation.** Fit the regression function on the skeleton using nonparametric techniques such as kernel smoothing (Section 3.1), k-Nearest Neighbor (Section 3.2), and linear spline (Section 3.3). 4. **Prediction.** Project new covariates onto the skeleton and use the estimated regression function for prediction. ``` **Algorithm 1** Skeleton Regression ### Skeleton Construction A skeleton is a low-dimensional subset of the sample space representing regions of interest that admits a graph representation. For given covariate space \(\mathcal{X}\subseteq\mathbb{R}^{d}\), let \(\mathcal{V}=\{V_{j}\in\mathbb{R}^{d}:j=1,\ldots,k\}\) be a collection of points of interest and \(E\) be a set of edges connecting points in \(\mathcal{V}\) such that an edge \(e_{j\ell}\in E\) if the region between \(V_{j}\) and \(V_{\ell}\) is also of interest. The tuple \((\mathcal{V},E)\) together forms a graph that represents the focused regions in the sample space. Notably, different from common graph-based regression approaches that take each sample covariate as a vertex, the set \(\mathcal{V}\) takes representative points of the covariate space and has size \(k\ll n\) where \(n\) is the sample size.
Moreover, the points on the edges are also part of the analysis as belonging to the regions of interest, which is different from the usual knot-edge graph. While the graph \((\mathcal{V},E)\) contains the region of interest, it is not easy to work with this graph directly. Thus, we introduce the concept of the skeleton induced by this graph. Let \(\mathcal{E}=\{tV_{j}+(1-t)V_{\ell}:t\in(0,1),e_{j\ell}\in E\}\) be the collection of line segments induced by the edge set \(E\). We define the skeleton of \((\mathcal{V},E)\) as \(\mathcal{S}=\mathcal{V}\cup\mathcal{E}\), i.e., \(\mathcal{S}\) is the points of interest and the associated line segments representing the regions of interest. Clearly, \(\mathcal{S}\) is a collection of one-dimensional line segments and zero-dimensional points so it is independent of the ambient dimension \(d\), but the physical location of \(\mathcal{S}\) is meaningful as representing the region of interest. The idea of skeleton regression is to build a regression model on the skeleton \(\mathcal{S}\). #### 2.1.1 A data-driven approach to construct skeleton The skeleton should ideally be constructed based on the analyst's judgment or prior knowledge of the region of interest. However, this information may be unavailable and we have to construct a skeleton from the data. In this section, we give a brief description of a data-driven approach proposed in Wei and Chen (2023) that constructs the skeleton to represent high-density regions. The method constructs knots as the centers from the \(k\)-means clustering with a large number of knots (by default \([\sqrt{n}]\); we explore the effect of choosing different numbers of knots with empirical results). The edges are connected by examining the sample 2-Nearest-Neighbor (2-NN) region of a pair of knots \((V_{j},V_{\ell})\) (see Figure 2), \[B_{j\ell}=\{X_{m},m=1,\ldots,n:\left\|X_{m}-V_{i}\right\|>\max\{\left\|X_{m}-V_{j}\right\|,\left\|X_{m}-V_{\ell}\right\|\},\forall i\neq j,\ell\}, \tag{1}\] where \(\left|\left|.\right|\right|\) denotes the Euclidean norm, and an edge between \(V_{j}\) and \(V_{\ell}\) is added if \(B_{j\ell}\) is non-empty. The method can further prune edges or segment the skeleton by using hierarchical clustering with respect to the Voronoi Density weights defined as \(S_{j\ell}^{VD}=\frac{\frac{1}{n}\left|B_{j\ell}\right|}{\left\|V_{j}-V_{\ell}\right\|}.\) We provide more details about this approach in Appendix A. **Remark 1**: _The idea of using the \(k\)-means algorithm to divide data into cells and perform analysis based on the cells has been proposed in the literature for fast computation. Sivic and Zisserman (2003), when carrying out an approximate nearest neighbor search, proposed to divide the data into Voronoi cells by \(k\)-means and do a neighbor search only in the same or some nearby cells. Babenko and Lempitsky (2012) adopted the Product Quantization technique to construct cell centers for high-dimensional data as the Cartesian product of centers from sub-dimensions._ ### Skeleton-Based Distance One of the advantages of the physically-located skeleton is that it allows for a natural definition of the skeleton-based distance function \(d_{\mathcal{S}}(.,.):\mathcal{S}\times\mathcal{S}\rightarrow\mathbb{R}^{+}\cup\{\infty\}\). Let \(\boldsymbol{s}_{j},\boldsymbol{s}_{\ell}\in\mathcal{S}\) be two arbitrary points on the skeleton and note that, different from the usual geodesic distance on a graph, in our framework \(\boldsymbol{s}_{j},\boldsymbol{s}_{\ell}\) can be on the edges.
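A minimal sketch of the data-driven construction in Section 2.1.1 is given below for illustration (Python with numpy and scikit-learn). It is a simplified rendition of the k-means knots, 2-NN-region edges, and Voronoi density weights described above, not the authors' released implementation, and the function name and defaults are our own.

```python
import numpy as np
from sklearn.cluster import KMeans

def construct_skeleton(X, n_knots=None, random_state=0):
    """Return knots V (k, d), the edge list, and Voronoi density weights S^VD."""
    n = X.shape[0]
    k = n_knots or int(np.sqrt(n))                       # default [sqrt(n)] knots
    knots = KMeans(n_clusters=k, random_state=random_state).fit(X).cluster_centers_

    # The two nearest knots of every sample define its 2-NN region (cf. Eq. 1).
    d2 = ((X[:, None, :] - knots[None, :, :]) ** 2).sum(-1)   # (n, k) squared distances
    two_nn = np.sort(np.argsort(d2, axis=1)[:, :2], axis=1)

    edges, weights = [], {}
    for j in range(k):
        for l in range(j + 1, k):
            m = int(np.all(two_nn == [j, l], axis=1).sum())   # |B_jl|
            if m > 0:                                         # edge added if B_jl non-empty
                edges.append((j, l))
                # Voronoi density weight S^VD = (|B_jl| / n) / ||V_j - V_l||
                weights[(j, l)] = (m / n) / np.linalg.norm(knots[j] - knots[l])
    return knots, edges, weights
```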
Figure 2: Orange shaded area illustrates the 2-NN region of knots \(1,2\).

We measure the skeleton-based distance between two skeleton points as the graph path length as defined below: * If \(\mathbf{s}_{j},\mathbf{s}_{\ell}\) are disconnected, i.e., they belong to two disjoint components of \(\mathcal{S}\), we define \[d_{\mathcal{S}}(\mathbf{s}_{j},\mathbf{s}_{\ell})=\infty\] (2) * If \(\mathbf{s}_{j}\) and \(\mathbf{s}_{\ell}\) are on the same edge, we define the skeleton distance as their Euclidean distance, \[d_{\mathcal{S}}(\mathbf{s}_{j},\mathbf{s}_{\ell})=||\mathbf{s}_{j}-\mathbf{s}_{\ell}||\] (3) * For \(\mathbf{s}_{j}\) and \(\mathbf{s}_{\ell}\) on two different edges that share a knot \(V_{0}\), the skeleton distance is defined as \[d_{\mathcal{S}}(\mathbf{s}_{j},\mathbf{s}_{\ell})=||\mathbf{s}_{j}-V_{0}||+||\mathbf{s}_{\ell}-V_{0}||\] (4) * Otherwise, let knots \(V_{i(1)},\ldots,V_{i(m)}\) be the vertices on a path connecting \(\mathbf{s}_{j},\mathbf{s}_{\ell}\), where \(V_{i(1)}\) is one of the two closest knots of \(\mathbf{s}_{j}\) and \(V_{i(m)}\) is one of the two closest knots of \(\mathbf{s}_{\ell}\). We add the edge lengths between the in-between knots to the distance, \[d_{\mathcal{S}}(\mathbf{s}_{j},\mathbf{s}_{\ell})=||\mathbf{s}_{j}-V_{i(1)}||+||\mathbf{s}_{\ell}-V_{i(m)}||+\sum_{p=1}^{m-1}\left\|V_{i(p)}-V_{i(p+1)}\right\|\] (5) and we use the shortest path length if there are multiple paths connecting \(\mathbf{s}_{j}\) and \(\mathbf{s}_{\ell}\). An example illustrating the skeleton-based distance is shown in Figure 3. Like the shortest path (geodesic) distance that makes a usual knot-edge graph into a metric space, the skeleton-based distance is also a metric on the skeleton graph. In the following sections, we will discuss methods to perform regression on a space equipped only with the defined metric. **Remark 2**: _We may view the skeleton-based distance as an approximation of the geodesic distance on the underlying data manifold. Moreover, to make a stronger connection to the manifold structure, it is possible to define edge lengths through local manifold learning techniques that have better approximations to the local manifold structure. However, using more complex local edge weights can pose additional challenges for the data projection step described in the next section and we leave this as a future direction._

Figure 3: Illustration of skeleton-based distance. Let \(C_{1},C_{2},C_{3},C_{4}\) be the knots, and let \(S_{2},S_{3},S_{4}\) be the mid-points on the edges \(E_{12},E_{23},E_{34}\) respectively. Let \(S_{1}\) be the midpoint between \(C_{1}\) and \(S_{2}\) on the edge. Let \(d_{ij}=\left\|C_{i}-C_{j}\right\|\) denote the length of the edge \(E_{ij}\). \(d_{\mathcal{S}}(S_{1},S_{2})=\frac{1}{4}d_{12}\) illustrated by the blue path. \(d_{\mathcal{S}}(S_{2},S_{3})=\frac{1}{2}d_{12}+\frac{1}{2}d_{23}\) illustrated by the green path. \(d_{\mathcal{S}}(S_{2},S_{4})=\frac{1}{2}d_{12}+d_{23}+\frac{1}{2}d_{34}\) illustrated by the orange path.

### Data Projection For the next step, we project the sample covariates onto the constructed skeleton. For given covariate \(\mathbf{x}\), let \(I_{1}(\mathbf{x}),I_{2}(\mathbf{x})\in\{1,\ldots,k\}\) be the indices of its closest and second closest knots in terms of the Euclidean metric.
We define the projection function \(\Pi(.):\mathcal{X}\rightarrow\mathcal{S}\) for \(\mathbf{x}\in\mathcal{X}\) as (illustrated in Figure 4): * If \(V_{I_{1}(\mathbf{x})}\) and \(V_{I_{2}(\mathbf{x})}\) are not connected, \(\mathbf{x}\) is projected onto the closest knot, that is, \(\Pi(\mathbf{x})=V_{I_{1}(\mathbf{x})}\) * If \(V_{I_{1}(\mathbf{x})}\) and \(V_{I_{2}(\mathbf{x})}\) are connected, \(\mathbf{x}\) is projected with the Euclidean metric onto the line passing through \(V_{I_{1}(\mathbf{x})}\) and \(V_{I_{2}(\mathbf{x})}\): let \(t=\frac{\left(\mathbf{x}-V_{I_{1}(\mathbf{x})}\right)^{T}\cdot\left(V_{I_{2}(\mathbf{x})}-V_{I_{1}(\mathbf{x})}\right)}{\left\|V_{I_{2}(\mathbf{x})}-V_{I_{1}(\mathbf{x})}\right\|^{2}}\) be the projection proportion; then \[\Pi(\mathbf{x})=V_{I_{1}(\mathbf{x})}+\left(V_{I_{2}(\mathbf{x})}-V_{I_{1}(\mathbf{x})}\right)\cdot\begin{cases}0,\text{ if }t<0\\ 1,\text{ if }t>1\\ t,\text{ otherwise}\end{cases}\] (6) where we constrain the covariates to be projected onto the closest edge. Note that with the projection defined above, a non-trivial volume of points can be projected onto the end knots of the skeleton graph as belonging to Case I or due to the constraining in Case II. This adds complexities to the theoretical analysis of the proposed regression framework and leads to our separate analysis of the different domains of the graph in Section 3.1.1. ## 3 Skeleton Nonparametric Regression Covariates are mapped to locations on the skeleton after the data projection step and are equipped with the skeleton-based distances. In this section, we apply nonparametric regression techniques to the skeleton graph with projected data points. We study three feasible nonparametric approaches: the skeleton-based kernel regression (S-Kernel), the skeleton-based k-nearest-neighbor method (S-kNN), and the linear spline model (S-Lspline). At the end of this section, we discuss the challenges of applying some other nonparametric regression methods in the setting of the skeleton graph.

Figure 4: Illustration of projection to the skeleton. The skeleton structure is given by the black dots and lines. Data point \(X_{1}\) is projected to \(S_{1}\) on the edge between \(C_{1}\) and \(C_{2}\). Data point \(X_{2}\) is projected to knot \(C_{2}\).

### Skeleton Kernel Regression We start by adapting kernel smoothing to the skeleton graph. Let \(\mathbf{s}_{1},\cdots,\mathbf{s}_{n}\) be the projections on the skeleton from \(\mathbf{x}_{1},\cdots,\mathbf{x}_{n}\), i.e., \(\mathbf{s}_{i}=\Pi(\mathbf{x}_{i})\). With the skeleton-based distances, the skeleton kernel regression makes a prediction at the location \(\mathbf{s}\in\mathcal{S}\) as \[\hat{m}(\mathbf{s})=\frac{\sum_{i=1}^{n}K(d_{\mathcal{S}}(\mathbf{s}_{i},\mathbf{s})/h)Y_{i}}{\sum_{j=1}^{n}K(d_{\mathcal{S}}(\mathbf{s}_{j},\mathbf{s})/h)}, \tag{7}\] where \(K(\cdot)\geq 0\) is a smoothing kernel such as the Gaussian kernel and \(h>0\) is the smoothing bandwidth that controls the amount of smoothing. In practice, we choose \(h\) by cross-validation. Essentially, the estimator \(\hat{m}(\mathbf{s})\) is the usual kernel regression applied to a general metric space (the skeleton) rather than the usual Euclidean space. Notably, the kernel function calculation only depends on the skeleton distances and hence depends on neither the ambient dimension of the original input nor the intrinsic dimension of the manifold structure. It should also be noted that \(\hat{m}(\mathbf{s})\) only makes predictions for \(\mathbf{s}\in\mathcal{S}\).
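A minimal sketch of the S-Kernel estimator in Eq. (7) is shown below, assuming the skeleton distances between training projections and query points have already been computed (numpy; the function name and example values are illustrative only).

```python
import numpy as np

def skeleton_kernel_regression(D, y, h):
    """Nadaraya-Watson estimate on the skeleton (cf. Eq. 7).

    D: (q, n) skeleton distances d_S(s, s_i) between q query points and the
       n projected training points (np.inf between disconnected components)
    y: (n,) training responses
    h: bandwidth, typically chosen by cross-validation
    """
    w = np.exp(-0.5 * (D / h) ** 2)          # Gaussian kernel; K(inf) = 0
    with np.errstate(invalid="ignore"):
        est = (w @ y) / w.sum(axis=1)
    return est                                # NaN where no training point is reachable

# Example with two query points and three training points:
D = np.array([[0.1, 0.4, np.inf],
              [0.3, 0.2, np.inf]])
print(skeleton_kernel_regression(D, y=np.array([1.0, 2.0, 5.0]), h=0.5))
```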
If we are interested in predicting the outcome at an arbitrary point \(\mathbf{x}\in\mathcal{X}\), the prediction is based on the projected point, i.e., \(\hat{m}(\mathbf{x})=\hat{m}\left(\Pi(\mathbf{x})\right),\) where \(\Pi(\mathbf{x})\in\mathcal{S}.\) Because of this projection property, one can think of the skeleton kernel regression as an estimator of the following skeleton-projected regression function \[m_{\mathcal{S}}(\mathbf{s})=\mathbb{E}(Y|\Pi(\mathbf{X})=\mathbf{s}),\quad\mathbf{s}\in \mathcal{S}. \tag{8}\] We study the convergence of \(\hat{m}(\mathbf{s})\) to \(m_{\mathcal{S}}(\mathbf{s})\) in what follows.

#### 3.1.1 Consistency of S-Kernel Regression

Our analysis assumes that the skeleton is fixed and given and focuses on the estimation of the regression function. To evaluate the estimation error, we must first impose some distributional structure on the skeleton. However, due to the covariate projection procedure, the probability measures on the knots and edges are different, so we analyze them separately. On an edge, the domain of the projected regression function varies in one dimension, resulting in a standard univariate estimation problem. For the case of knots, a nontrivial region of the covariate space can be projected onto a knot, leading to a nontrivial probability mass at the knot.

For simplicity, we write \(K_{h}(\mathbf{s}_{j},\mathbf{s}_{\ell})\equiv K(d_{\mathcal{S}}(\mathbf{s}_{j},\mathbf{s}_{ \ell})/h)\) for \(\mathbf{s}_{j},\mathbf{s}_{\ell}\in\mathcal{S}\). Let \(\mathcal{B}(\mathbf{s},h)=\{\mathbf{s}^{\prime}\in\mathcal{S}:d_{\mathcal{S}}(\mathbf{s}^{ \prime},\mathbf{s})<h\}\) be the ball on the skeleton centered at the point \(\mathbf{s}\in\mathcal{S}\) with radius \(h\). We can decompose the kernel regression estimator into edge parts and knot parts as \[\hat{m}(\mathbf{s}) =\frac{\sum_{j=1}^{n}Y_{j}K_{h}(\mathbf{s}_{j},\mathbf{s})}{\sum_{j=1}^{ n}K_{h}(\mathbf{s}_{j},\mathbf{s})}\] \[=\frac{\frac{1}{n}\sum_{j=1}^{n}Y_{j}K_{h}(\mathbf{s}_{j},\mathbf{s})I( \mathbf{s}_{j}\in\mathcal{E})+\frac{1}{n}\sum_{j=1}^{n}Y_{j}K_{h}(\mathbf{s}_{j},\mathbf{s })I(\mathbf{s}_{j}\in\mathcal{V})}{\frac{1}{n}\sum_{j=1}^{n}K_{h}(\mathbf{s}_{j},\mathbf{s })I(\mathbf{s}_{j}\in\mathcal{E})+\frac{1}{n}\sum_{j=1}^{n}K_{h}(\mathbf{s}_{j},\mathbf{s })I(\mathbf{s}_{j}\in\mathcal{V})}\] \[=\frac{\frac{1}{n}\sum_{j=1}^{n}Y_{j}K_{h}(\mathbf{s}_{j},\mathbf{s})I( \mathbf{s}_{j}\in\mathcal{E}\cap\mathcal{B}(\mathbf{s},h))+\frac{1}{n}\sum_{j=1}^{n}Y_ {j}K_{h}(\mathbf{s}_{j},\mathbf{s})I(\mathbf{s}_{j}\in\mathcal{V}\cap\mathcal{B}(\mathbf{s},h))} {\frac{1}{n}\sum_{j=1}^{n}K_{h}(\mathbf{s}_{j},\mathbf{s})I(\mathbf{s}_{j}\in\mathcal{E} \cap\mathcal{B}(\mathbf{s},h))+\frac{1}{n}\sum_{j=1}^{n}K_{h}(\mathbf{s}_{j},\mathbf{s})I( \mathbf{s}_{j}\in\mathcal{V}\cap\mathcal{B}(\mathbf{s},h))} \tag{9}\] In the last line, we emphasize that the knots and edges have a meaningful contribution to the kernel estimator only within the support of the kernel function. We inspect the different domain cases separately in the following sections.

For the model and assumptions, we let \(Y_{j}=m_{\mathcal{S}}(\mathbf{S}_{j})+U_{j},\mathbf{S}_{j}\in\mathcal{S}\), and \(\mathbb{E}(U_{j}|\mathbf{S}_{j})=0\) almost surely. Let \(\sigma^{2}(\mathbf{s})=\mathbb{E}(U_{j}^{2}|\mathbf{S}_{j}=\mathbf{s})\). Let the density on the skeleton edges be defined as the 1-Hausdorff density \(g(\mathbf{s})=\lim_{r\downarrow 0}\frac{P(\mathbf{S}\in\mathcal{B}(\mathbf{s},r))}{2r}\). Note that \(g(\mathbf{s})=\infty\) if \(\mathbf{s}\) is a knot point with positive probability mass. We consider the following assumptions:

* \(\sigma^{2}(\mathbf{s})\) is continuous and uniformly bounded.
* The skeleton edge density function satisfies \(g(\mathbf{s})>0\) and is bounded and Lipschitz continuous for \(\mathbf{s}\in\mathcal{E}\).
* \(m_{\mathcal{S}}(\mathbf{s})g(\mathbf{s})\) is bounded and Lipschitz continuous for \(\mathbf{s}\in\mathcal{E}\).
* The kernel function has compact support and satisfies \(\int K(x)dx=1\), \(\int K^{2}(x)dx<\infty\), \(\int xK(x)dx=0\), and \(\int x^{2}K(x)dx<\infty\).

Conditions A1 and K are general assumptions that are commonly made in kernel regression analysis. A2 and A3 are mild conditions that are implied by the boundedness and Lipschitz continuity of the density and regression function in the ambient space, together with non-overlapping knots such that the areas of the orthogonal complements change in a Lipschitz manner. We do not assume the second-order smoothness commonly required for kernel regression because requiring higher-order derivative smoothness would necessitate specifying directions on the graph, which may present difficulties in model formulation. We include further discussion of derivatives on the skeleton in Section 3.4.

#### 3.1.2 Convergence of the Edge Point

We first look at an edge point \(\mathbf{s}\in E_{j\ell}\in\mathcal{E}\). In this case, as \(n\to\infty,h\to 0\), for sufficiently large \(n\) we have \(\mathcal{B}(\mathbf{s},h)\subset E_{j\ell}\), and the skeleton distance is the 1-dimensional Euclidean distance for any point within the support. Therefore, we have a convergence rate similar to the 1-dimensional kernel regression estimator (Bierens, 1983; Wasserman, 2006; Chen, 2017).

**Theorem 1** (Consistency on Edge Points): _For \(\mathbf{s}\in\mathcal{E}\) an edge point, assume conditions A1-3 and K hold for all points in \(\mathcal{E}\cap\mathcal{B}(\mathbf{s},h)\). Then, as \(n\to\infty\), \(h\to 0\), \(nh\to\infty\),_ \[|\hat{m}_{n}(\mathbf{s})-m_{\mathcal{S}}(\mathbf{s})|=O(h)+O_{p}\bigg{(}\sqrt{\frac{1 }{nh}}\bigg{)} \tag{10}\]

We leave the proof to Appendix C.1. Theorem 1 gives the convergence rate for a point on an edge of the constructed skeleton. The rate of the bias term is \(O(h)\), which is the usual rate when we only have Lipschitz smoothness (A2) of \(m_{\mathcal{S}}\). One may wonder if we can obtain a faster rate such as \(O(h^{2})\) by assuming higher-order smoothness of \(m_{\mathcal{S}}\). While it is possible to obtain a faster rate under higher-order smoothness, such an assumption would not be reasonable because \(m_{\mathcal{S}}(\mathbf{s})=\mathbb{E}(Y|\Pi(X)=\mathbf{s})\) is defined via a projection. The region being projected onto \(\mathbf{s}\) is continuously changing and may not be differentiable due to the boundary of the Voronoi cells. Therefore, the Lipschitz continuity (A2) is reasonable while higher-order smoothness is not.

#### 3.1.3 Convergence of the Knots with Nonzero Mass

We then look at knots with nonzero probability mass, i.e., \(\mathbf{s}\in\mathcal{V}\) with \(p(\mathbf{s})>0\), where we use \(p(\mathbf{s})\) to denote the probability mass on a knot. This case mainly occurs for knots with degree 1 on the skeleton graph, when a non-trivial region of points is projected onto such knots; see, for example, knot \(C_{2}\) in Figure 4.

**Theorem 2** (Consistency on Knots with Nonzero Mass): _For \(\mathbf{s}\in\mathcal{V}\) a knot point, let the probability mass at \(\mathbf{s}\) be \(P(\Pi_{\mathcal{S}}(X)=\mathbf{s})\equiv p(\mathbf{s})>0\) and assume \(\sigma^{2}(\mathbf{s})\) is bounded.
Also assume conditions A1-3 and K hold for all edge points in \(\mathcal{E}\cap\mathcal{B}(\mathbf{s},h)\). We have, as \(n\to\infty\), \(h\to 0\), and \(nh\to\infty\),_ \[|\hat{m}(\mathbf{s})-m_{\mathcal{S}}(\mathbf{s})|=O(h)+O_{p}\left(\sqrt{\frac{1}{n}}\right) \tag{11}\]

Theorem 2 gives the convergence result for a knot point with a nontrivial mass on the skeleton. The bias term \(O(h)\) comes from the influence of nearby edge points. For the stochastic variation part, instead of the \(O_{p}\left(\sqrt{\frac{1}{nh}}\right)\) rate of the usual kernel regression and of Theorem 1, we have an \(O_{p}\left(\sqrt{\frac{1}{n}}\right)\) rate, which comes from averaging the observations projected onto the knot. The proof of Theorem 2 is provided in Appendix C.3.

#### 3.1.4 Convergence of the Knots with Zero Mass

We now look at a knot point \(\mathbf{s}\in\mathcal{V}\) with no probability mass, i.e., \(p(\mathbf{s})=0\). This can be the case for a knot with degree larger than 1, such as knot \(C_{3}\) in Figure 4. Since we define edge sets excluding the knots, there is no density and no probability mass at \(\mathbf{s}\). Note that, with some reformulation, degree-2 knots can be parametrized together with the two connected edges and, under the appropriate assumptions, Theorem 1 applies, giving consistent estimation at the \(O(h)+O_{p}\left(\sqrt{\frac{1}{nh}}\right)\) rate. However, the density cannot be extended directly to knots with degree larger than 2; nonetheless, the kernel estimator still converges to a limit, as presented in the proposition below.

**Proposition 3**: _For \(\mathbf{s}\in\mathcal{V}\) a knot point, assume conditions A1-3 hold for all points in \(\mathcal{E}\cap\mathcal{B}(\mathbf{s},h)\) and let the probability mass at \(\mathbf{s}\) be \(p(\mathbf{s})=0\). We assume condition K for the kernel function. Let \(\mathcal{I}\) collect the indices of edges with one knot being \(\mathbf{s}\). For \(\ell\in\mathcal{I}\), let edge \(E_{\ell}\) connect \(\mathbf{s}\) and \(V_{\ell}\), let \(g_{\ell}(t)=g((1-t)\mathbf{s}+tV_{\ell})\) with \(g_{\ell}(0)=\lim_{t\downarrow 0}g_{\ell}(t)\), and let \(m_{\ell}(t)=m_{\mathcal{S}}((1-t)\mathbf{s}+tV_{\ell})\) with \(m_{\ell}(0)=\lim_{t\downarrow 0}m_{\ell}(t)\). We have, as \(n\to\infty\), \(h\to 0\), and \(nh\to\infty\),_ \[\hat{m}(\mathbf{s})=\frac{\sum_{\ell\in\mathcal{I}}m_{\ell}(0)g_{\ell}(0)}{\sum_{ \ell\in\mathcal{I}}g_{\ell}(0)}+O(h)+O_{p}\left(\sqrt{\frac{1}{nh}}\right). \tag{12}\]

Proposition 3 shows that, under proper conditions, the skeleton kernel estimator at a zero-mass knot converges to a weighted average of the limiting regression values on the connected edges, and the convergence rate is the same as for the edge points in Theorem 1. The proof is included in Appendix C.2.

**Remark 3**: _The domain \(\mathcal{S}\) of the regression function can be seen as bounded, and hence the boundary bias issue can arise. The true manifold structure's boundary can be different from the boundary of the skeleton graph, making the consideration of the boundary more complicated. However, the boundary of the skeleton is the set of degree-\(1\) knots, and, under our formulation, knots have discrete measures, so the consideration of boundary bias may not be necessary for the proposed formulation.
However, some boundary corrections can potentially improve the empirical performance, and we leave this for future research._

### Skeleton kNN Regression

The \(k\)-Nearest-Neighbor (kNN) method can be easily applied to the skeleton using the distance on the skeleton. For a given point on the skeleton \(\boldsymbol{s}\in\mathcal{S}\), we define the distance to the \(k\)-th nearest observation on the skeleton as \[R_{k}(\boldsymbol{s})=\min\left\{r>0:\sum_{i=1}^{n}I(d_{\mathcal{S}}( \boldsymbol{s}_{i},\boldsymbol{s})\leq r)\geq k\right\}. \tag{13}\] Note that it is possible to have multiple observations tied as the \(k\)-th nearest observation due to observations being projected to the vertices. In this case, we can either choose randomly among them or consider all of them; here we include all of them in the calculation. The skeleton-based \(k\)NN regression (S-kNN) predicts the value of the outcome at \(\boldsymbol{s}\) as \[\hat{m}_{SkNN}(\boldsymbol{s})=\frac{\sum_{i=1}^{n}Y_{i}I(d_{\mathcal{S}}( \boldsymbol{s}_{i},\boldsymbol{s})\leq R_{k}(\boldsymbol{s}))}{\sum_{j=1}^{n} I(d_{\mathcal{S}}(\boldsymbol{s}_{j},\boldsymbol{s})\leq R_{k}(\boldsymbol{s}))}. \tag{14}\] Unlike the usual kNN regressor with the covariates \(\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{n}\), which selects neighbors through Euclidean distance in the ambient space, the S-kNN regressor chooses neighbors with skeleton-based distances after projection onto the skeleton graph. Measuring proximity with the skeleton can improve the regression performance when the dimension of the covariates is large, which we show empirically in Section 4.

**Remark 4**: _It is well known that the usual \(k_{n}\)NN regressor can be consistent if we let \(k_{n}\) grow as a function of the sample size \(n\), and under appropriate assumptions, Gyorfi et al. (2002) give the convergence rate of the \(k_{n}\)NN estimate \(m_{n}\) to the true function \(m\) as_ \[\mathbb{E}\left\|m_{n}-m\right\|^{2}\leq\frac{\sigma^{2}}{k_{n}}+c_{1}\cdot C^ {2}\left(\frac{k_{n}}{n}\right)^{2/d}.\] _Later, Kpotufe (2011) showed that the convergence rate of the \(k\)NN regressor depends on the intrinsic dimension. We expect a similar result with the \(d=1\) rate for the skeleton kNN regression at an edge point._

### Linear Spline Regression on Skeleton

In this section, we propose a skeleton-based linear spline model (S-Lspline) for regression estimation. By construction, this approach results in a continuous model across the graph. Moreover, we show that the skeleton-based linear spline corresponds to an elegant parametric regression model on the skeleton. As the skeleton \(\mathcal{S}\) can be decomposed into the edge component \(\mathcal{E}\) and the knot component \(\mathcal{V}\), the linear spline regression on the skeleton can be written as the following constrained model: \[\begin{array}{rl}f:\mathcal{S}\ \rightarrow\ \mathbb{R}&\text{such that 1. }f(x)\text{ is linear on }x\in\mathcal{E},\\ &\text{2. }f(x)\text{ is continuous at }x\in\mathcal{V}.\end{array} \tag{15}\] While solving the above constrained problem may not be easy, we have the following elegant representer theorem showing that a linear spline on the skeleton can be uniquely characterized by its values on the knots.
**Theorem 5** (Linear spline representer theorem): _Any function satisfying equation (15) can be characterized by \(\{f(v):v\in\mathcal{V}\}\), and for \(x\in\mathcal{E}\), \(f(x)\) is the linear interpolation between the values on the two knots of the edge that \(x\) belongs to._

Let \(f\) be a function satisfying equation (15). By construction, \(f\) is linear for \(x\in\mathcal{E}\) and is continuous at \(x\in\mathcal{V}\). Let \(V_{j}\) and \(V_{\ell}\) be two knots that share an edge and let \(E_{j\ell}=\{x=tV_{j}+(1-t)V_{\ell}:t\in(0,1)\}\) be the shared edge segment. For any \(x\in\mathcal{E}\), there exists a pair \((V_{j},V_{\ell})\) such that \(x\in E_{j\ell}\). Because \(f\) is linear in \(E_{j\ell}\), \(f\) can be uniquely characterized by the pairs \((f(e_{1}),e_{1}),(f(e_{2}),e_{2})\) for two distinct points \(e_{1},e_{2}\in\bar{E}_{j\ell}\), where \(\bar{E}_{j\ell}=\{x=tV_{j}+(1-t)V_{\ell}:t\in[0,1]\}\) is the closure of \(E_{j\ell}.\) Thus, we can pick \(e_{1}=V_{j}\) and \(e_{2}=V_{\ell}\), which implies that \(f\) on the segment \(E_{j\ell}\) is parameterized by \(f(V_{j})\) and \(f(V_{\ell})\), the values on the two knots. By applying this procedure to every edge segment, we conclude that any function satisfying the first condition in (15) can be characterized by the values on the knots. The second condition in (15) requires that every knot has one consistent value. As a result, any function \(f\) satisfying (15) can be uniquely characterized by the values on the knots \(\{f(x):x\in\mathcal{V}\}\), and \(f(x)\) is a linear interpolation when \(x\in\mathcal{E}\).

Using Theorem 5, we only need to determine the values on the knots. Let \(\mathbf{\beta}\in\mathbb{R}^{k}\) be the values of the skeleton linear spline model on the knots, with \(k=|\mathcal{V}|\) being the number of knots. As argued previously, the spline model is parameterized by \(\mathbf{\beta}\), so we only need to estimate \(\mathbf{\beta}\) from the data. Given \(\mathbf{\beta}\), the predicted value of each \(\mathbf{y}_{i}\) is a linear interpolation depending on the projected location of each \(\mathbf{x}_{i}\). To derive an analytic form of the prediction, we introduce a transformed covariate matrix \(\mathbf{Z}=(\mathbf{z}_{1},\ldots,\mathbf{z}_{n})^{T}\in\mathbb{R}^{n\times k}\) as follows:

1. If \(\mathbf{x}_{i}\) is projected onto a vertex, i.e., \(\mathbf{s}_{i}=V_{j}\) for some \(j\), then \[\mathbf{z}_{ij^{\prime}}=I(j^{\prime}=j).\] (16)
2. If \(\mathbf{x}_{i}\) is projected onto an edge between knots \(V_{j}\) and \(V_{\ell}\), then \[\mathbf{z}_{ij}=\frac{||\mathbf{s}_{i}-V_{\ell}||}{||V_{j}-V_{\ell}||},\quad\mathbf{z}_{i\ell}= \frac{||\mathbf{s}_{i}-V_{j}||}{||V_{j}-V_{\ell}||},\quad\text{ and }\mathbf{z}_{ij^{\prime}}=0\text{ for }j^{ \prime}\neq j,\ell.\] (17)

With the above feature transform, the predicted value of \(\mathbf{y}_{i}\) by the S-Lspline model is \[\hat{\mathbf{y}}_{i}=\mathbf{\beta}^{T}\mathbf{z}_{i}.\] (18) To see this, if \(\mathbf{x}_{i}\) is projected onto a vertex, i.e., \(\mathbf{s}_{i}=V_{j}\) for some \(j\), the linear model with transformed covariates gives \(\mathbf{\beta}^{T}\mathbf{z}_{i}=\mathbf{\beta}_{j}\), the predicted value on vertex \(V_{j}\).
In the case where \(\mathbf{x}_{i}\) is projected onto an edge between knots \(V_{j}\) and \(V_{\ell}\), let \(\mathbf{\beta}_{j}\) and \(\mathbf{\beta}_{\ell}\) be the corresponding predicted values at \(V_{j}\) and \(V_{\ell}\); the linear interpolation between \(\mathbf{\beta}_{\ell}\) and \(\mathbf{\beta}_{j}\) at \(\mathbf{s}_{i}\) can be written as \[\mathbf{\beta}_{j}+\frac{||\mathbf{s}_{i}-V_{j}||}{||V_{j}-V_{\ell}||}\cdot(\mathbf{\beta} _{\ell}-\mathbf{\beta}_{j})=\frac{||\mathbf{s}_{i}-V_{\ell}||}{||V_{j}-V_{\ell}||} \cdot\mathbf{\beta}_{j}+\frac{||\mathbf{s}_{i}-V_{j}||}{||V_{j}-V_{\ell}||}\cdot\mathbf{ \beta}_{\ell}=\mathbf{\beta}^{T}\mathbf{z}_{i}.\] (19)

To estimate \(\mathbf{\beta}\), we can apply the least squares procedure to get: \[\hat{\mathbf{\beta}} =\mathsf{argmin}_{\mathbf{\beta}}\sum_{i=1}^{n}(\mathbf{y}_{i}-\hat{\mathbf{y} }_{i})^{2}\] (20) \[=\mathsf{argmin}_{\mathbf{\beta}}\sum_{i=1}^{n}(\mathbf{y}_{i}-\mathbf{\beta}^ {T}\mathbf{z}_{i})^{2}.\] (21) So it becomes a linear regression model and the solution can be elegantly written as \[\hat{\mathbf{\beta}}=(\mathbf{Z}^{T}\mathbf{Z})^{-1}\mathbf{Z}^{T}\mathbf{y}.\] (22) Note that, in a sense, the above procedure can be viewed as placing a linear model \[\mathbb{E}(\mathbf{y}|\mathbf{X})=\mathbf{Z}\mathbf{\beta},\] (23) where \(\mathbf{Z}\) is a transformed covariate matrix derived from \(\mathbf{X}\). Note that the S-Lspline model with the graph-transformed covariates does not include an intercept.

**Remark 6**: _An alternative justification of the value-on-knots parameterization is to count the degrees of freedom. In each graph, the sum of the vertex degrees is twice the number of edges, since each edge is counted from both ends. Let \(e\) be the number of edges in the graph, let \(v\) be the number of vertices, and let \(r\) be the sum of all the vertex degrees; then \(r=2e\). For the S-Lspline model, we construct a linear model with \(2\) free parameters for each edge, and thus, without any constraints, the total number of degrees of freedom is \(2e\). For each vertex \(V_{i}\) with degree \(r_{i}\), the continuity constraint imposes \(r_{i}-1\) equations, and as a result, the continuity constraints consume a total of \(\sum_{i=1}^{v}(r_{i}-1)=r-v\) degrees of freedom. Combining these, we have \(2e-(r-v)=v\) degrees of freedom, which matches the degrees of freedom given by the parametrization of values on the knots._

### Challenges of Other Nonparametric Regression

In this section, we discuss the challenges in applying other nonparametric regression methods to the skeleton. In particular, the skeleton graph is only equipped with a metric and does not have a well-defined inner product or orientation, which makes many conventional approaches not directly applicable.

#### 3.4.1 Local polynomial regression

Local polynomial regression (Fan and Gijbels, 2018) is a common generalization of kernel regression that tries to improve the kernel regression estimator by using higher-order polynomials as local approximations to the regression function. In Euclidean space, a \(p\)-th order local polynomial regression chooses \(\beta(\mathbf{x})\) by minimizing \[\sum_{i=1}^{n}\left[Y_{i}-\sum_{j=0}^{p}\beta_{j}(\mathbf{x}_{i}-\mathbf{x})^{j} \right]^{2}K\left(\frac{\mathbf{x}_{i}-\mathbf{x}}{h}\right) \tag{24}\] and predicts \(m(\mathbf{x})\) via \(\hat{\beta}_{0}(\mathbf{x})\), the first element of the minimizer. Note that when \(p=0\), one can show that this is equivalent to the kernel regression.
Unfortunately, local polynomial regression cannot be easily adapted to the skeleton because the polynomial \((\mathbf{x}_{i}-\mathbf{x})^{j}\) requires a well-defined orientation, which is ill-defined at a knot (vertex). Directly replacing \((\mathbf{s}_{i}-\mathbf{s})\) with the distance \(d_{\mathcal{S}}(\mathbf{s}_{i},\mathbf{s})\) would make all the polynomial terms non-negative, which is problematic for odd orders. Except for some special skeletons, such as a single chain structure, local polynomial regression cannot be directly applied.

#### 3.4.2 Higher-Order Spline

In Section 3.3, we introduce the linear spline model. One may be curious about the possibility of using a higher-order spline (enforcing higher-order smoothness on knots; see, e.g., Chapter 5.5 of Wasserman (2006)). Unfortunately, the higher-order spline is generally not applicable to the skeleton because a higher-order spline requires derivatives, and the concept of a derivative may be ill-defined at a knot because of the lack of orientation. To see this, consider a knot with three edges connecting to it. There is no simple definition of a derivative at this knot unless we specify the orientation of these three edges. One possible remedy is to introduce an orientation for every edge. This could be done by first ordering the knots and, for every edge, letting the orientation always run from the lower-index vertex to the higher-index vertex. With this orientation, it is possible to create a higher-order spline on the skeleton, but the result will depend on the orientation we choose. Even with edge directions provided and derivatives on the skeleton defined, a higher-order spline on the skeleton can be prone to overfitting. Classical spline methods use degree \(p+1\) polynomial functions to achieve continuity of the \(p\)-th order derivative. For example, univariate cubic splines use polynomials up to degree 3 to ensure the second-order smoothness of the regression function at each knot. However, on a graph, degree \(p+1\) polynomial functions may fail to achieve continuity of the \(p\)-th order derivative, and on complete graphs, which is the worst case, degree \(2p+1\) polynomials are needed instead.

#### 3.4.3 Smoothing Spline

The smoothing spline (Wang, 2011; Wahba, 1975) is another popular approach for curve fitting that attempts to find a smooth curve minimizing the squared loss in the prediction with a penalty on the curvature (second or higher-order derivatives). The major difficulty with this method is that the concept of a _smooth_ function is ill-defined at a knot even if we have a well-defined orientation. In fact, the 'linear function' is not well-defined in general at a skeleton's knot. To see this, consider a knot \(V_{0}\) with three edges \(e_{1},e_{2},e_{3}\) connecting to \(V_{1},V_{2},V_{3}\), respectively. Suppose we have a linear function \(f_{0}\) and \(f_{0}\) is linearly increasing on the paths \(V_{1}-V_{0}-V_{2}\) and \(V_{1}-V_{0}-V_{3}\). Then, on the path \(V_{2}-V_{0}-V_{3}\), the function \(f_{0}\) will be decreasing \((V_{2}-V_{0})\) and then increasing \((V_{0}-V_{3})\), leading to a non-smooth structure.

#### 3.4.4 Orthonormal Basis and Tree

The orthonormal basis approach (see, e.g., Chapter 8 of Wasserman (2006)) uses a set of orthonormal basis functions to approximate the regression function. In general, it is unclear how to find a good orthonormal basis for a skeleton unless the skeleton is simply a circle or a chain.
Having said that, it is possible to construct an orthonormal basis by borrowing the idea from wavelets (Torrence and Compo, 1998). The key idea is that the skeleton is a measurable set whose (one-dimensional) volume can be measured. Thus, we can partition the skeleton \(\mathcal{S}\) into two equal-volume sets \(A_{1},A_{2}\). Note that the resulting sets \(A_{1},A_{2}\) are not necessarily skeletons because we may cut an edge into two pieces. For each set \(A_{j}\), we can further partition it into equal-volume sets \(A_{j,1},A_{j,2}\), and we can repeat this dyadic procedure to create many equal-volume subsets. We then define a basis as follows: \[f_{0}(s) =1,\] \[f_{1}(s) =I\left(s\in A_{1}\right)-I\left(s\in A_{2}\right)\] \[f_{2}(s) =I(s\in A_{1,1})-I(s\in A_{1,2})\] \[f_{3}(s) =I(s\in A_{2,1})-I(s\in A_{2,2})\] \[\vdots\] After normalization, this set of functions forms an orthonormal basis, and with it, it is possible to fit an orthonormal basis expansion of the regression function on the skeleton. However, the above construction creates the partition arbitrarily. The fitting result depends on the particular partition we use to generate the basis, and it is unclear how to pick a reasonable partition in practice.

The regression tree (Breiman, 2017; Loh, 2014) is a popular idea in nonparametric regression that fits the data by recursively partitioning the whole sample space into a tree whose leaves represent subsets of the sample space, and predicts the response using a single parameter at each leaf (region). This idea could be applied to the skeleton using a procedure similar to the construction of the orthonormal basis, in which we keep splitting a region into two subsets (but we do not require the two subsets to be of equal size). However, unlike the usual regression tree (in Euclidean space), where the split into two regions is often a threshold on one coordinate, the split of a skeleton may not be easily represented, as the skeleton is just a connected subregion of Euclidean space. Therefore, similar to the orthonormal basis, the regression tree may be used in skeleton regression, but there is no simple and principled way to create a good partition.

## 4 Simulations

In this section, we use simulated data to evaluate the performance of the proposed skeleton regression framework. We first demonstrate an example with the intrinsic domain composed of several disconnected components, which we call the Yinyang data (Section 4.1). Then, we add noisy observations to the Yinyang data (Section 4.2) to show the effectiveness of our method in handling noise. Moreover, we present an example where the domain is a continuous manifold with a Swiss roll shape (Section 4.3). In all the simulations in this section, there are random perturbations in the intrinsic dimensions, and we add random Gaussian variables as covariates to increase the ambient dimension.

### Yinyang Data

The covariate space of the Yinyang data is intrinsically composed of 5 disjoint structures of different geometric shapes and different sizes: a large ring of 2000 points, two clumps each with 400 points (generated with the shapes.two.moon function with default parameters in the clusterSim library in R (Walesiak and Dudek, 2020)), and two 2-dimensional Gaussian clusters each with 200 points (Figure 6 left). In total, there are 3200 observations. Note that the intrinsic structures of the components are curves and points, and, with perturbations, the generated covariates do not lie exactly on the corresponding manifold structures.
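As a rough illustration of this style of simulated design (and not the exact data-generating code used in the experiments), the sketch below draws a perturbed ring and two Gaussian clusters in two dimensions and pads the covariates with independent Gaussian noise dimensions; the ring radius, cluster centers, and noise levels are illustrative assumptions, and the two clumps generated with clusterSim in R are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def yinyang_like_covariates(n_ring=2000, n_cluster=200, noise_dim=998, sd=0.1):
    """Draw a perturbed ring plus two Gaussian clusters, then append noise dimensions."""
    theta = rng.uniform(0.0, 2.0 * np.pi, n_ring)                    # angle along the ring
    ring = 3.0 * np.column_stack([np.cos(theta), np.sin(theta)])
    ring += rng.normal(scale=sd, size=ring.shape)                     # perturb off the curve
    c1 = rng.normal(loc=(1.0, -1.0), scale=sd, size=(n_cluster, 2))   # bottom-right cluster
    c2 = rng.normal(loc=(-1.0, 1.0), scale=sd, size=(n_cluster, 2))   # upper-left cluster
    X2d = np.vstack([ring, c1, c2])
    noise = rng.normal(scale=sd, size=(len(X2d), noise_dim))          # irrelevant dimensions
    return np.hstack([X2d, noise]), theta

X, ring_angle = yinyang_like_covariates()   # X has 2 + 998 = 1000 columns
```

The returned ring angles can then be used to assign the trigonometric responses described next.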
The responses are generated from a trigonometric function on the ring and constant functions on the other structures, with random Gaussian error (Figure 6 right). That is, let \(\epsilon\sim N(0,0.01)\) and let \(\theta\) be the angle of the covariates; then \[Y=\epsilon+\begin{cases}\sin(4\theta)+1.5&\text{for points on the outer ring}\\ 0&\text{for points on the bottom-right Gaussian cluster}\\ 1&\text{for points on the right clump}\\ 2&\text{for points on the left clump}\\ 3&\text{for points on the upper-left Gaussian cluster}\end{cases} \tag{25}\] To make the task more challenging with the presence of noisy variables, we add independent and identically distributed random \(N(0,0.01)\) variables to the generated covariates. In this section, we increase the dimension of the covariates to a total of 1000 with those added Gaussian variables.

We randomly generate the dataset 100 times, and on each dataset we use 5-fold cross-validation to calculate the sum of squared errors (SSE) as the performance assessment. For each fold, there are 2560 training samples. We use the skeleton construction method described in Section 2.1 to construct skeletons with varying numbers of knots on each training set. The construction procedure also cuts each skeleton into 5 disjoint components according to the Voronoi Density weights (Section 2.1). We also empirically tested using different cuts to get skeleton structures with different numbers of disjoint components under the same number of knots and noticed little change in the squared error performance (see Appendix D).

We evaluate the skeleton-based nonparametric regressors introduced in Section 3: the skeleton kernel regression (S-Kernel), the \(k\)NN regressor using skeleton-based distances (S-kNN), and the skeleton spline model (S-Lspline). For comparison, we apply the classical k-Nearest-Neighbors regression based on Euclidean distances (kNN). As penalized regression methods, we test Lasso and Ridge regression. Among the recent manifold and local regression methods, we include the Spectral Series approach with the radial kernel (SpecSeries) for its superior performance and readily available R implementation 3. We take the median, 5th percentile, and 95th percentile of the 5-fold cross-validation Sum of Squared Errors (SSEs) for each parameter setting of each method on the 100 datasets. We present the smallest median SSE for each method in Table 1 along with the corresponding best parameter setting.

Footnote 3: [https://projecteuclid.org/journals/supplementalcontent/10.1214/16-EJS1112/supzip_1.zip](https://projecteuclid.org/journals/supplementalcontent/10.1214/16-EJS1112/supzip_1.zip)

We observed that all the skeleton-based methods (S-Kernel, S-kNN, and S-Lspline) performed better than the standard kNN in this setting. The SpecSeries approach performed worse than the classical kNN, and only slightly better than the Lasso regression. Ridge and Lasso regression, despite the regularization effect, resulted in relatively high SSEs. Therefore, the skeleton regression framework is beneficial in dealing with covariates that lie around manifold structures. In Figure 7, we present the median SSE of the S-Lspline, S-Kernel, and S-kNN methods on skeletons with various numbers of knots. The vertical dashed line indicates \([\sqrt{n}]=51\) knots as suggested by the empirical rule, where \(n\) is the training sample size.
The empirical rule seems to produce satisfactory results in this simulation study, roughly identifying the "elbow" position, but it is advised to use cross-validation for fine-tuning in practice.

### Noisy Yinyang Data

To show the robustness of the proposed skeleton-based regression methods, we add 800 noisy observations to the Yinyang data in Section 4.1 (20% of a total of 4000 observations). The first two dimensions of the noisy covariates are uniformly sampled from the 2-dimensional square \([-3.5,3.5]\times[-3.5,3.5]\), and independent random normal \(N(0,0.01)\) variables are added to make the covariates 1000-dimensional in total. The responses of the noisy points are set as \(1.5+\epsilon\) with \(\epsilon\sim N(0,0.01)\), while the responses on the Yinyang covariates are generated as in Equation 25. The first two dimensions of the Noisy Yinyang covariates are plotted in Figure 9 left, and the \(Y\) values against the first two dimensions of the covariates are illustrated in Figure 9 right.

To evaluate the robustness of the proposed skeleton-based regression methods, we randomly generated the Noisy Yinyang data 100 times and followed the same analysis procedure as in Section 4.1, except that we left the skeleton on which we fit our regression estimators as a fully connected graph. We also took the median, 5th percentile, and 95th percentile of the 5-fold cross-validation SSEs for each parameter setting of each method on the 100 datasets. The smallest median SSE for each method is reported in Table 2 along with the corresponding best parameter setting.

\begin{table} \begin{tabular}{||c c c c||} \hline Method & Median SSE (5\%, 95\%) & nknots & Parameter \\ \hline \hline kNN & 204.5 (192.3, 221.9) & - & neighbor=18 \\ Ridge & 2127.0 (2100.2, 2155.2) & - & \(\lambda=7.94\) \\ Lasso & 1556.8 (1515.4, 1607.9) & - & \(\lambda=0.0126\) \\ SpecSeries & 1506.4 (1469.1, 1555.6) & - & bandwidth = 2 \\ S-Kernel & 112.8 (102.0, 121.7) & 38 & bandwidth = 6 \(r_{hns}\) \\ S-kNN & 139.6 (129.6, 148.7) & 38 & neighbor = 36 \\ S-Lspline & 95.8 (88.6, 102.6) & 38 & - \\ \hline \end{tabular} \end{table} Table 1: Regression results on Yinyang \(d=1000\) data. The smallest median 5-fold cross-validation SSE from each method is listed with the corresponding parameters used. The 5th percentile and 95th percentile of the SSEs from the given parameter settings are reported in brackets.

Figure 6: Yinyang Regression Data

Figure 7: Yinyang \(d=1000\) data regression results with varying number of knots. The median SSE across the 100 simulated datasets with each given parameter setting is plotted.

\begin{table} \begin{tabular}{||c c c c||} \hline Method & Median SSE (5\%, 95\%) & Number of knots & Parameter \\ \hline \hline kNN & 440.8 (420.4, 463.0) & - & neighbor=18 \\ Ridge & 2139.1 (2102.6, 2171.1) & - & \(\lambda=6.31\) \\ Lasso & 2029.2 (1988.7, 2071.0) & - & \(\lambda=0.02\) \\ SpecSeries & 1532.0 (1490.7, 1563.2) & - & bandwidth = 2 \\ S-Kernel & 385.7 (365.2, 406.0) & 57 & bandwidth = 6 \(r_{hns}\) \\ S-kNN & 417.6 (396.1, 440.6) & 71 & neighbor = 36 \\ S-Lspline & 377.7 (358.1, 398.9) & 71 & - \\ \hline \end{tabular} \end{table} Table 2: Regression results on Noisy Yinyang \(d=1000\) data. The smallest median 5-fold cross-validation SSE from each method is listed with the corresponding parameters used. The 5th percentile and 95th percentile of the SSEs from the given parameter settings are reported in brackets.

It can be seen that all the skeleton-based
regression methods outperform the standard kNN and the SpecSeries approach. The Ridge and Lasso regressions again fail to provide good performance on this simulated dataset. In Figure 10, we plot the median SSE of the skeleton-based methods on skeletons with different numbers of knots. Using the empirical rule to construct a skeleton with \([\sqrt{3200}]=57\) knots results in good regression performance and approximately identifies the "elbow" position in Figure 10. However, for some skeleton-based methods, using a number of knots larger than that given by the empirical rule leads to better regression performance. This improvement is related to the phenomenon observed in Wei and Chen (2023) that, when dealing with noisy observations, it is better to have a skeleton with more knots and cut the skeleton into more disjoint components in order to have a cleaner representation of the key manifold structures. Therefore, when facing data with noisy feature vectors, it is advised to empirically tune the number of knots, favoring larger values.

Figure 9: Noisy Yinyang Regression Data

Figure 10: Noisy Yinyang \(d=1000\) data regression results with varying number of knots. The median SSE across the 100 simulated datasets with each given parameter setting is plotted.

### Swiss Roll Data

The intrinsic components of the covariates in the Yinyang data are all well-separated, which, admittedly, can give an advantage to skeleton-based methods. Moreover, the intrinsic dimensions of the structural components of the Yinyang covariates are all lower than or equal to 1 and can be straightforwardly represented by knots and line segments, potentially giving another advantage to skeleton-based methods. To address such concerns, we present another simulated dataset with covariates lying around a Swiss Roll shape (Figure 12 left), an intrinsically 2-dimensional manifold in 3-dimensional Euclidean space. To make the density on the Swiss Roll manifold balanced, we sample points inversely proportional to the radius of the roll in the \(X_{1}X_{3}\) plane. Specifically, let \(u_{1},u_{2}\) be independent random variables from \(\text{Uniform}(0,1)\) and let the angle in the \(X_{1}X_{3}\) plane be generated as \(\theta_{13}=\pi 3^{u_{1}}\). Then for the first 3 dimensions of the covariates we have \[X_{1}=\theta_{13}\cos(\theta_{13}),\ \ X_{2}=4u_{2},\ \ X_{3}=\theta_{13}\sin( \theta_{13}) \tag{26}\] The true response has a polynomial relationship with the angle on the manifold if the \(X_{2}\) value of the point is within some range. Let \(\tilde{\theta}_{13}=\theta_{13}-2\pi\), and let \(\epsilon\sim N(0,0.3)\). Then we set \[Y=0.1\times\tilde{\theta}_{13}^{3}\times[I(X_{2}<\pi)+I(2\pi<X_{2}<3\pi)]+\epsilon \tag{27}\] The response versus the angle \(\theta_{13}\) and \(X_{2}\) is shown in Figure 12 right. Independent random Gaussian variables from \(N(0,0.1)\) are added to make the covariates 1000-dimensional in total, and 2000 observations are sampled to make the Swiss Roll dataset.

We randomly generated the data 100 times and used the same analysis procedures as in Section 4.1. We took the median, 5th percentile, and 95th percentile of the 5-fold cross-validation SSEs across each parameter setting for each method on the 100 datasets, and reported the smallest median SSE for each method along with the corresponding best parameter setting in Table 3. All the proposed skeleton-based methods performed better than the standard kNN regressor, with the S-Lspline method achieving the best performance in terms of SSE.
The SpecSeries approach in this setting performed similarly to the Lasso regression and did not improve much on the regression results despite utilizing information about the underlying manifold structure, possibly due to the large number of noisy dimensions. Therefore, the proposed skeleton regression framework can also be powerful for data on connected, multi-dimensional manifolds.

Figure 12: Swiss Roll Regression Data

\begin{table} \begin{tabular}{||c c c c||} \hline \hline Method & Median SSE (5\%, 95\%) & nknots & Parameter \\ \hline \hline kNN & 648.5 (607.1, 696.0) & - & neighbor=12 \\ Ridge & 1513.7 (1394.4, 1616.2) & - & \(\lambda=2.0\) \\ Lasso & 1191.4 (1106.7, 1260.7) & - & \(\lambda=0.032\) \\ SpecSeries & 1166.5 (1081.4, 1238.8) & - & bandwidth = 2.0 \\ S-Kernel & 588.7 (527.0, 653.7) & 70 & bandwidth = 4 \(r_{hns}\) \\ S-kNN & 614.7 (561.2, 692.6) & 70 & neighbor = 27 \\ S-Lspline & 578.6 (508.0, 629.6) & 60 & - \\ \hline \end{tabular} \end{table} Table 3: Regression results on Swiss Roll \(d=1000\) data. The smallest median 5-fold cross-validation SSE from each method is listed with the corresponding parameters used. The 5th percentile and 95th percentile of the SSEs from the given parameter settings are reported in brackets.

Figure 13: Swiss Roll \(d=1000\) data regression results with varying number of knots. The median SSE across the 100 simulated datasets with each given parameter setting is plotted.

By plotting the median SSE under skeletons with a varying number of knots in Figure 13, we observed that the best performance for all the skeleton-based methods is achieved with a number of knots larger than \([\sqrt{1600}]=40\). Given that the intrinsic structure of the Swiss Roll input space is a 2D plane, having more knots on the plane can give a better representation of the data structure and, therefore, lead to better prediction accuracy. We conjecture that the optimal number of knots should depend on the intrinsic dimension of the covariates, and we plan to discuss this further in future work. However, it is recommended to use cross-validation to choose the number of knots in practice.

## 5 Real Data

In this section, we present analysis results on two real datasets. We first predict the rotation angles of an object in a sequence of images taken from different angles (Section 5.1). For the second example, we study a galaxy sample from the Sloan Digital Sky Survey (SDSS) to predict the spectroscopic redshift (Section 5.2), a measure of the distance from a galaxy to Earth.

### Lucky Cat Data

This dataset consists of 72 gray-scale images of size \(128\times 128\) pixels taken from the COIL-20 processed dataset (Nene et al., 1996). They are 2D projections of a 3D lucky cat obtained by rotating the object by 72 equispaced angles on a single axis. Several examples of the images are given in Figure 15. The response in this dataset is the angle of rotation. However, this response has a circular nature where degree 0 is the same as degree 360. To avoid this issue, we removed the last 8 images from the sequence, only using the first 64 images. As a result, our dataset consists of 64 samples from a 1-dimensional manifold embedded in \(\mathbb{R}^{16384}\), along with scalar values representing the angle of rotation. To assess the performance of each method, we use leave-one-out cross-validation. Similar to the simulation studies, we use the skeleton construction method with Voronoi weights in Wei and Chen (2023) to construct the skeleton on the training set.
In practice, we found that a small number of knots can still lead to loops in the constructed skeleton structure, and, after some tuning, we fit \(2[\sqrt{n}]=16\) knots to each training set. Additionally, since the underlying manifold should be one connected structure, we do not cut the constructed skeleton structure in this experiment. Due to the high-dimensional nature of the data, Ridge regression, Lasso regression, and the Spectral Series approach failed to run with the implementations in R. The best SSE from each method is listed in Table 4 along with the corresponding parameters. We observed that the S-Lspline method gives outstanding performance on this real-world dataset, significantly outperforming the kNN regressor.

\begin{table} \begin{tabular}{||c c c||} \hline Method & SSE & Parameter \\ \hline \hline kNN & 888.9 & neighbor=9 \\ Ridge & - & - \\ Lasso & - & - \\ SpecSeries & - & - \\ S-Kernel & 1205.9 & bandwidth = \(4r_{hns}\) \\ S-kNN & 2604.2 & neighbor = 6 \\ S-Lspline & 338.1 & - \\ \hline \end{tabular} \end{table} Table 4: Regression results on LuckyCat data from COIL-20. The best SSE from each method is listed with the corresponding parameters used.

Figure 15: A part of the lucky cat images from the COIL-20 processed dataset. Each image is of size \(128\times 128\) pixels.

### SDSS Data

In this section, we applied the skeleton regression to a galaxy sample of size 5000, taken from a random subsample of the Sloan Digital Sky Survey (SDSS), data release 12 (York et al., 2000; Alam et al., 2015). The dataset consists of 5 covariates measuring apparent magnitudes of galaxies from images taken using 5 photometric filters. These covariates can be understood as the color of a galaxy and are inexpensive to obtain. The response variable is the spectroscopic redshift, which is a very costly but accurate measurement of the distance to Earth. It is known that the 5 photometric color measurements are correlated with the spectroscopic redshift. So the goal is to use the photometric information to predict the redshift; this is known as the clustering redshift problem in the astronomy literature (Morrison et al., 2017; Rahman et al., 2015).

We construct the skeleton with the method in Wei and Chen (2023) and fit the S-Lspline model. We color the knots by their predicted redshift values and color the edges by the average predicted values of the two connected knots. The resulting skeleton graph is shown in the left panel of Figure 17. For comparison, we color the knots and edges using the true values in the right panel of Figure 17. The predictions given by S-Lspline are very close to the true values. For completeness, we also include results from other approaches in Table 5. While the skeleton approaches do not provide the best prediction accuracy, the skeleton structure obtained in Figure 17 shows a clear one-dimensional structure in the underlying covariate distribution. This explains why the kNN and SpecSeries methods work well on this data (both methods can adapt to the underlying manifold structure of the data). Thus, even if our method does not provide the best prediction accuracy, the skeleton itself can be used as a tool to investigate the structure of the covariate distribution, which can be valuable for practitioners. We perform the same analysis as in Section 4 by comparing the 5-fold cross-validation SSEs of different regression methods on this dataset. Despite the fact that our skeleton
based methods do not show superior performance on this particular dataset, the skeleton representation does reveal the manifold structure in the covariate space and an approximate monotone trend in the response. The clean manifold structure of the data and the small number of covariates may explain the superior performance of kNN and SpecSeries in this case.

\begin{table} \begin{tabular}{||c c c||} \hline Method & SSE & Parameter \\ \hline \hline kNN & 67.8 & neighbor=12 \\ Ridge & 870.3 & \(\lambda=0.001\) \\ Lasso & 882.7 & \(\lambda=0.001\) \\ SpecSeries & 66.6 & bandwidth = 2 \\ S-Kernel & 90.6 & bandwidth = \(10r_{hns}\) \\ S-kNN & 95.8 & neighbor = 39 \\ S-Lspline & 89.6 & - \\ \hline \end{tabular} \end{table} Table 5: Regression results on SDSS data. The best SSE from each method is listed with the corresponding parameters used.

Figure 17: SDSS skeleton colored by values predicted by S-Lspline (left) and by true values (right).

## 6 Conclusion

In this work, we introduce the skeleton regression framework to handle regression problems with manifold-structured inputs. We generalize nonparametric regression techniques such as kernel smoothing and splines to graphs. Our methods provide accurate and reliable prediction performance and are capable of recovering the underlying manifold structure of the data. Both theoretical and empirical analyses are provided to illustrate the effectiveness of the skeleton regression procedures. In what follows, we describe some possible future directions:

* **Generalizing skeleton graphs to simplicial complexes.** From a geometric perspective, the skeleton graph constructed in this work only focuses on 0-simplices (points) and 1-simplices (line segments). Additional geometric information can be encoded using higher-dimensional simplices. Recent research in deep learning has explored the use of simplicial complexes for tasks such as clustering and segmentation (Bronstein et al., 2017; Bodnar et al., 2021). Higher-dimensional simplices offer a finer approximation to the covariate distribution but have a higher computational cost and a more complex model. Thus, it is unclear if using higher-dimensional simplices will lead to better prediction accuracy. We will explore the possibility of extending skeleton graphs to skeleton complexes in the future.
* **Nonparametric smoothers on graphs.** Kernel regression and spline regression are not the only possibilities for performing nonparametric smoothing on graphs. For example, Wang et al. (2016) generalized the concept of trend filtering (Kim et al., 2009; Tibshirani, 2014) to graphs and compared it to Laplacian smoothing and wavelet smoothing. In contrast to our work, these regression estimators for graphs are applied to data where both the inputs and responses are located on the vertices of a given graph. As a result, these graph smoothers, which include different regularizations, can only fit values on the vertices and do not model the regression function on the edges (Wang et al. (2016) mentioned the possibility of linear interpolation with trend filtering). It is possible to generalize these methods to the skeleton by constructing responses on the knots of the skeleton graph as the mean values of the corresponding Voronoi cells, and then the graph smoothers can be applied. Some interpolation method can again be used to predict the responses on the edges, and this can lead to another skeleton-based regression estimator.
* **Time-varying covariates and responses.** A possible avenue for future research is to extend the skeleton regression framework to handle time-varying covariates and responses. Specifically, covariates collected at different times could be used together to construct knots in a skeleton. The edges in the skeleton can change dynamically according to the covariate distribution at different times, providing insight into how the covariate distributions have evolved. Additionally, representing the regression function on the skeleton would make it simple to visualize how the function changes over time.
* **Streaming data and online skeleton updates.** As streaming data becomes increasingly common, a potential area of future research is to investigate methods for updating the skeleton structure and its regression function in a real-time or online fashion. Reconstructing the entire skeleton can be computationally costly, but local updates to edges and knots can be more efficient. We plan to explore ways to develop a simple yet reliable method for updating the skeleton in the future.

#### Acknowledgments

YC is supported by NSF DMS-195278, 2112907, 2141808 and NIH U24-AG072122. JW is supported by NSF DMS-2112907.
2305.02474
MLHOps: Machine Learning for Healthcare Operations
Machine Learning Health Operations (MLHOps) is the combination of processes for reliable, efficient, usable, and ethical deployment and maintenance of machine learning models in healthcare settings. This paper provides both a survey of work in this area and guidelines for developers and clinicians to deploy and maintain their own models in clinical practice. We cover the foundational concepts of general machine learning operations, describe the initial setup of MLHOps pipelines (including data sources, preparation, engineering, and tools). We then describe long-term monitoring and updating (including data distribution shifts and model updating) and ethical considerations (including bias, fairness, interpretability, and privacy). This work therefore provides guidance across the full pipeline of MLHOps from conception to initial and ongoing deployment.
Faiza Khan Khattak, Vallijah Subasri, Amrit Krishnan, Elham Dolatabadi, Deval Pandya, Laleh Seyyed-Kalantari, Frank Rudzicz
2023-05-04T00:50:27Z
http://arxiv.org/abs/2305.02474v1
# MLHOps: Machine Learning for Healthcare Operations ###### Abstract Machine Learning Health Operations (MLHOps) is the combination of processes for reliable, efficient, usable, and ethical deployment and maintenance of machine learning models in healthcare settings. This paper provides both a survey of work in this area and guidelines for developers and clinicians to deploy and maintain their own models in clinical practice. We cover the foundational concepts of general machine learning operations, describe the initial setup of MLHOps pipelines (including data sources, preparation, engineering, and tools). We then describe long-term monitoring and updating (including data distribution shifts and model updating) and ethical considerations (including bias, fairness, interpretability, and privacy). This work therefore provides guidance across the full pipeline of MLHOps from conception to initial and ongoing deployment. keywords: MLOps, Healthcare, Responsible AI + Footnote †: journal: Journal of Biomedical Informatics ## 1 Introduction Over the last decade, efforts to use health data for solving complex medical problems have increased significantly. Academic hospitals are increasingly dedicating resources to bring machine learning (ML) to the bedside and to addressing issues encountered by clinical staff. These resources are being utilized across a range of applications including clinical decision support, early warning, treatment recommendation, risk prediction, image informatics, telediagnosis, drug discovery, and intelligent health knowledge systems. There are various examples of ML being applied to medical data, including prediction of sepsis [239], in-hospital mortality, prolonged length-of-stay, patient deterioration, and unplanned readmission [218]. In particular, sepsis is one of the leading causes of in-hospital deaths. A large-scale study demonstrated the impact of an early warning system to reduce the lead time for detecting the onset of sepsis, and hence allowing more time for clinicians to prescribe antibiotics [8]. Similarly, deep convolutional neural networks have been shown to achieve superior performance in detecting pneumonia and other pathologies from chest X-rays, compared to practicing radiologists [219]. These results highlight the potential of ML models when they are strongly integrated into clinical workflows. When deployed successfully, data-driven models can free time for clinicians[109], improve clinical outcomes [217], reduce costs [28], and provide improved quality care for patients. However, most studies remain preliminary, limited to small datasets, and/or implemented in select health sub-systems. Integrating with clinical workflows remains crucial [278, 266] but, despite recent computational advances and an explosion of health data, deploying ML in healthcare responsibly and reliably faces several operational and engineering challenges, including: * Standardizing data formats, * Strengthening methodologies for evaluation, monitoring and updating, * Building trust with clinicians and hospital staff, * Adopting interoperability standards, and * Ensuring that deployed models align with ethical considerations, do not exacerbate biases, and adhere to privacy and governance policies In this review, we articulate the challenges involved in implementing successful Machine Learning Health Operations (MLHOps) pipelines, specific to clinical use cases. 
We begin by outlining the foundations of model deployment in general, and provide a comprehensive study of the emerging discipline [251, 167]. We then provide a detailed review of the different components of development pipelines specific to healthcare. We discuss data, pipeline engineering, deployment, monitoring and updating models, and ethical considerations pertaining to healthcare use cases. While MLHOps often requires aspects specific to healthcare, best practices and concepts from other application domains are also relevant. This summarizes the primary outcome of our review, which is to provide a set of recommendations for implementing MLHOps pipelines in practice - i.e., a "how-to" guide for practitioners. ## 2 Foundations of MLOps ### What is MLOps? Machine learning operations (MLOps) is a combination of tools, techniques, standards, and engineering best practices to standardize ML system development and operations [251]. It is used to streamline and automate the deployment, monitoring, and maintenance of machine learning models, in order to ensure they are robust, reliable, and easily updated or upgraded. ### MLOps Pipeline Pipelines are processes of multiple modules that streamline the ML workflow. Once the project is defined, the MLOps pipeline begins with identifying the inputs and outputs relevant to the problem, cleaning, and transforming the data towards useful and efficient representations for machine learning, training, and evaluating model performance, and deploying selected models in production while continuing to monitor their performance. Figure 1 illustrates a general MLOps pipeline. Common types of pipelines include: * _Automated pipelines:_ An end-to-end pipeline that is automated towards a single task, e.g., a model training pipeline. * _Orchestrated pipelines:_ A pipeline that consists of multiple modules, designed for several automated tasks, and managed and coordinated in a dynamic workflow, e.g., the pipeline managing MLOps. Recently, MLOps has become more well-defined and widely implemented due to the reusability and standardization benefits across various applications [229]. As a result, the structure and definitions of different components are becoming quite well-established. ### MLOps Components MLOps pipelines consist of different components and key concepts [108; 134], stated below (and shown in Figure 1): * Stores encapsulate the tools designed to centralize building, managing, and sharing either features or models across different teams and applications in an organization. * **Raw data source:* * A raw data store is a centralized repository that stores data in its raw, unprocessed form. It is a staging area where data is initially collected and stored before processing or transformation. * **Feature store:* * A centralized online repository for storing, managing, and sharing features used in ML models. These features are acquired by processing the raw data and are made available for real-time serving through the feature store. * **ML metadata store:* * A ML metadata store helps record and retrieve metadata associated with an ML pipeline including information about various pipeline components, their executions (e.g. training runs), and resulting artifacts (e.g. trained models). * **Serving:** Serving is the task of hosting ML artifacts (usually models) Figure 1: MLOps pipeline either on the cloud or on-premise so that their functions are accessible to multiple applications through remote function calls (i.e., application programming interfaces (APIs)). 
* In _batch serving_, the artifact is used by scheduled jobs.
* In _online serving_, the artifact processes requests in real-time.

Communication and access point channels, traffic management, pre- and post-processing requests, and performance monitoring should all be considered while serving artifacts.

* **Data query:** This component queries the data, processes it, and stores it in a format that models can easily utilize.
* **Experimentation:** The experimentation component consists of model training, model evaluation, and model validation.
* **Model registry:** The model registry is a centralized repository that stores trained machine learning models, their metadata, and their versions.
* **Drift-detection:** The drift-detection component is responsible for monitoring the AI system for potentially harmful drift and issuing an alert when drift is detected.
* **Workflow orchestration:** The workflow orchestration component is responsible for automating and managing the end-to-end flow of the ML pipeline.
* **Source repository:** The source repository is a centralized code repository that stores the source code (and its history) for ML models and related components.
* **Containerization:** Containerization involves packaging models with the components required to run them; this includes libraries and frameworks so they can run in isolated user spaces with minimal configuration of the underlying operating system [86]. Sometimes, source code is also included in these containers.

### Levels of MLOps maturity

MLOps practices can be divided into different levels based on the maturity of the ML system automation process [118; 251], as described below.

* **Level 0 - Manual ML pipeline:** Every step in the ML pipeline, including data processing, model building, evaluation, and deployment, is a manual process. In Level 0, the experimental and operational pipelines are distinct, and the data scientists provide a trained model as an artifact to the engineering team to deploy on their infrastructure. Here, only the trained model is served for deployment and there are infrequent model updates. Level 0 processes typically lack rigorous and continuous performance monitoring capabilities.
* **Level 1 - Continuous Model Training and Delivery:** Here, the entire ML pipeline is automated to perform continuous training of the model as well as continuous delivery of model prediction services. Software orchestrates the execution and transition between the steps in the pipeline, leading to rapid iteration over experiments and an automatic process for deploying a selected model into production. Contrary to Level 0, the entire training pipeline is automated, and the deployed model can incorporate newer data based on pipeline triggers. Given the automated nature of Level 1, it is necessary to continuously monitor, evaluate, and validate models and data to ensure expected performance during production.
* **Level 2 - Continuous Integration and Continuous Delivery:** This level involves the highest maturity in automation, enforcing the combined practice of continuous integration and continuous delivery, which enables rapid and reliable updates of the pipelines in production. Through automated testing and deployment of new pipeline implementations, rapid changes in data and the business environment can be addressed. In this level, the pipeline and its components are automatically built, tested, and packaged when new code is committed or pushed to the source code repository.
Moreover, the system continuously delivers new pipeline implementations to the target environment that in turn delivers prediction services of the newly trained model. Ultimately implementation of MLOps leads to many benefits, including better system quality, increased scalability, simplified management processes, improved governance and compliance, increased cost savings and improved collaboration. ## 3 MLHOps Setup Operationalizing ML models in healthcare is unique among other application domains. Decisions made in clinical environments have a direct impact on patient outcomes and, hence, the consequences of integrating ML models into health systems need to be carefully controlled. For example, early warning systems might enable clinicians to prescribe treatment plans with increased lead time [109]; however, these systems might also suffer from a high false alarm rate, which could result in alarm fatigue and possibly worse outcomes. The requirements placed on such ML systems are therefore very high and, if they are not adequately satisfied, the result is diminished adoption and trust from clinical staff. Rigorous long-term evaluation is needed to validate the efficacy and to identify and assess risks, and this evaluation needs to be reported comprehensively and transparently [265]. While most MLOps best practices extend to healthcare settings, the data, competencies, tools, and model evaluation differ significantly [179; 172; 255; 17]. For example, typical performance metrics (e.g., positive predictive value and F1-scores) may differ between clinicians and engineers. Therefore, unlike in other industries, it becomes necessary to evaluate physician experience when predictions and model performance are presented to clinical staff [272]. In order to build trust in the clinical setting, the interpretability of ML models is also exceptionally important. As more ML models are integrated into hospitals, new legal frameworks and standards for evaluation need to be adopted, and MLHOps tools need to comply with existing standards. In the following sections, we explore the different components of MLHOps pipelines. ### Data Successfully digitizing health data has resulted in a prodigious increase in the volume and complexity of patient data collected [218]. These datasets are now stored, maintained, and processed by hospital IT infrastructure systems which in turn use specialized software systems. #### 3.1.1 Data sources There could be multiple sources of data, which are categorized as follows: Electronic health records (EHRs) record, analyze, and present information to clinicians, including: 1. **Patient demographic data:** E.g., age and sex. 2. **Administrative data:** E.g., treatment costs and insurance. 3. **Patient observations records:** E.g., chart events such as lab tests and vitals. These include a multitude of physiological signals captured using various methods such as heart rate, blood pressure, skin temperature, and respiratory rate. 4. **Interventions:** These are steps that significantly alter the course of patient care, such as mechanical ventilation, dialysis, or blood transfusions. 5. **Medications information:** E.g., medications administered and their dosage. 6. **Waveform data:** This digitizes physiological signals collected from bedside patient monitors. 7. **Imaging reports and metadata:** E.g., CT scans, MRI, ultrasound, and corresponding radiology reports. 8. **Medical notes:** These are made by clinical staff on patient condition. 
These can also be transcribed text of recorded interactions between the patient and clinician. Other sources of health data include primary care data, wearable data (e.g., smartwatches), genomics data, video data, surveys, medical claims, billing data, registry data, and other patient-generated data [216; 30; 45]. Figure 2 illustrates the heterogeneous nature of health data. The stratification shown can be extended further to contain more specialized data. For example, genomics data can be further stratified into different types of data based on the method of sequencing; observational EHR data can be further stratified to include labs, vital measurements, and other recorded observations. With such large volumes and variability in data, standardization is key to achieving scalability and interoperability. Figure 3 illustrates the different levels of standardization that need to be achieved with respect to health data: the lowest level standardizes variable names (such as lab test names, medications, and diagnosis codes) and the data types used to store these variables (i.e., integer vs. character); the next level defines abstract concepts under which data can be mapped and grouped; and the top level standardizes data exchange formats (e.g., JSON, XML) and protocols for information exchange (such as RESTful API architectures), which addresses interoperability and how data can be exchanged across sites and EHR systems.

#### 3.1.2 Common Data Model (CDM)

Despite the widespread adoption of EHR systems, clinical events are not captured in a standard format across observational databases [195]. For effective research and implementation, data must be drawn from many sources and compared and contrasted to be fully understood. Databases must also support scaling to large numbers of records which can be processed concurrently. Hence, efficient storage systems along with computational techniques are needed to facilitate analyses. One of the first steps towards scalability is to transform the data to a common data standard. Once available in a common format, the process of extracting, transforming, and loading (ETL) becomes simplified. In addition to scale, patient data require a high level of protection with strict data user agreements and access control. A common data model addresses these challenges by allowing for downstream functional access points to be designed independent of the data model. Data that is available in a common format promotes collaboration and mitigates duplicated effort. Specific implementations and formats of data should be hidden from users, and only high-level abstractions need to be visible. The Systematized Nomenclature of Medicine (SNOMED) was among the first efforts to standardize clinical terminology, and a corresponding dictionary with a broad range of clinical terminology is available as part of SNOMED-CT [67]. Several data models use SNOMED-CT as part of their core vocabulary. Converting datasets to a common data model like the Observational Medical Outcomes Partnership (OMOP) model involves mapping from a source database to the target common data model. This process is usually time-consuming and involves a lot of manual effort undertaken by data scientists. Tools to simplify the mapping and conversion process can save time and effort and promote adoption.
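As a toy illustration of such a mapping step (not the OMOP specification; the concept IDs, field names, and lab codes below are simplified and hypothetical), the sketch converts a source lab-result record into an OMOP-style measurement row using a small concept dictionary:

```python
# Minimal sketch of mapping a source lab record to a simplified OMOP-style MEASUREMENT row.
# Concept IDs, lab codes, and field names are illustrative only, not actual vocabulary entries.
from datetime import datetime

# Toy source-to-standard concept dictionary (in practice derived from vocabularies such as SNOMED-CT or LOINC).
CONCEPT_MAP = {
    "GLUC": {"concept_id": 3004501, "concept_name": "Glucose in serum"},
    "HGB":  {"concept_id": 3000963, "concept_name": "Hemoglobin in blood"},
}

def to_measurement_row(source_record: dict) -> dict:
    """Map one source lab record to a simplified OMOP-style measurement row."""
    concept = CONCEPT_MAP.get(source_record["lab_code"])
    if concept is None:
        raise ValueError(f"Unmapped source code: {source_record['lab_code']}")
    return {
        "person_id": source_record["patient_id"],
        "measurement_concept_id": concept["concept_id"],
        "measurement_datetime": datetime.fromisoformat(source_record["collected_at"]),
        "value_as_number": float(source_record["value"]),
        "unit_source_value": source_record["unit"],
        "measurement_source_value": source_record["lab_code"],
    }

row = to_measurement_row({
    "patient_id": 12345, "lab_code": "GLUC",
    "collected_at": "2023-01-07T08:30:00", "value": "5.4", "unit": "mmol/L",
})
print(row["measurement_concept_id"])
```

In practice such mappings are maintained as reviewed, versioned artifacts rather than hard-coded dictionaries, which is where dedicated conversion tooling helps.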
For OMOP, the ATLAS tool [195] developed by Observational Health Data Sciences and Informatics (OHDSI) provides such a feature through their web-based interactive analysis platform.

#### 3.1.3 Interoperability and open standards

As the volume of data grows in healthcare institutions and applications ingest data for different use cases, real-time performance and data management are crucial. To enable real-time operation and easy exchange of health data across systems, an interoperability standard for data exchange, along with protocols for accessing data through easy-to-use programming interfaces, is necessary. Some of the popular healthcare data standards include Health Level 7 (HL7), Fast Healthcare Interoperability Resources (FHIR), Health Level 7 v2 (HL7v2), and Digital Imaging and Communications in Medicine (DICOM). The FHIR standard [31] is a leading open standard for exchanging health data. FHIR is developed by Health Level 7 (HL7), a not-for-profit standards development organization that was established to develop standards for hospital information systems. FHIR defines the key entities involved in healthcare information exchange as resources, where each resource is a distinct identifiable entity. FHIR also defines APIs which conform to the representational state transfer (REST) architectural style for exchanging resources, allowing for stateless Hypertext Transfer Protocol (HTTP) methods and exposing directory-structure-like URIs to resources. RESTful architectures are light-weight interfaces that allow for faster transmission, which is more suitable for mobile devices. RESTful interfaces also facilitate faster development cycles because of their simple structure. DICOM is the standard for the communication and management of medical imaging information and related metadata. The DICOM standard specifies the format and protocol for the exchange of digital information between medical imaging equipment and other systems. Persistent information objects which encode images are exchanged, and an instance of such an information object may be exchanged across many systems and many organizational contexts, and over time. DICOM has enabled deep collaboration and standardization across different disciplines such as radiology, cardiology, pathology, ophthalmology, and related disciplines.

#### 3.1.4 Quality assurance and validation

Data collected in retrospective databases for analysis and ML use cases need to be checked for quality and consistency. Data validation is an important step towards ensuring that ML systems developed using the data are highly performant and do not incorporate biases from the data. Errors in data propagate through the MLOps pipeline and hence specialized data quality assurance tools and checks at various stages of the pipeline are necessary [223]. A standardized data validation framework that includes i) data element pre-processing, ii) checks for completeness, conformance, and plausibility, and iii) a review process by clinicians and other stakeholders should capture generalizable insight across various clinical investigations [238].
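To make the completeness, conformance, and plausibility checks concrete, the following is a minimal sketch of rule-based validation over a tabular extract; the column names, expected ranges, and thresholds are illustrative assumptions only, not clinical recommendations:

```python
# Minimal sketch of completeness, conformance, and plausibility checks on a tabular extract.
# Column names, expected ranges, and thresholds are illustrative assumptions.
import pandas as pd

RULES = {
    "heart_rate":  {"dtype": "float64", "min": 20.0, "max": 300.0, "max_missing": 0.05},
    "systolic_bp": {"dtype": "float64", "min": 40.0, "max": 300.0, "max_missing": 0.10},
}

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable validation failures for clinician and stakeholder review."""
    failures = []
    for col, rule in RULES.items():
        if col not in df.columns:                        # completeness: variable present at all
            failures.append(f"{col}: column missing")
            continue
        missing_rate = df[col].isna().mean()             # completeness: missingness rate
        if missing_rate > rule["max_missing"]:
            failures.append(f"{col}: {missing_rate:.1%} missing exceeds {rule['max_missing']:.0%}")
        if str(df[col].dtype) != rule["dtype"]:           # conformance: expected data type
            failures.append(f"{col}: dtype {df[col].dtype} != {rule['dtype']}")
        out_of_range = ((df[col] < rule["min"]) | (df[col] > rule["max"])).mean()
        if out_of_range > 0:                              # plausibility: physiologic range
            failures.append(f"{col}: {out_of_range:.1%} of values outside [{rule['min']}, {rule['max']}]")
    return failures

report = validate(pd.DataFrame({"heart_rate": [72.0, 410.0, None], "systolic_bp": [120.0, 80.0, 135.0]}))
print("\n".join(report))
```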
### Pipeline Engineering

Data stored in raw formats need to be processed to create feature representations for ML models. Each transformation is a computation, and a chain of these processing elements, arranged so that the output of each element is the input of the next, constitutes a pipeline [134]; using software tools and workflow practices that enable such pipelines is pipeline engineering. There are advantages to using such a pipeline approach, including:

* **Modularization**: By breaking the chain of transformations into small steps, modularization is naturally achieved.
* **Testing**: Each transformation step can be tested independently, which facilitates quality assurance and testing.
* **Debugging**: Version controlling the outputs at each step makes it easier to ensure reproducibility, especially when many steps are involved.
* **Parallelism**: If any step in the pipeline is easily parallelizable across multiple compute nodes, the overall processing time can be reduced.
* **Automation**: By breaking a complex task into a series of smaller tasks, the completion of each task can be used to trigger the start of the next task, and this can be automated using continuous integration tools such as Jenkins, GitHub Actions, and GitLab CI.

In health data processing, the following steps are crucial (a minimal sketch composing these steps is given at the end of this subsection):

1. **Cleaning**: Formatting values, adjusting data types, and checking and fixing issues with raw data.
2. **Encoding**: Computing word embeddings for clinical text, encoding the text and raw values into embeddings [127; 15]. Encoding is a general transformation step that can be used to create vector representations of raw data. For example, transforming images to numeric representations can also be considered encoding.
3. **Aggregation**: Grouping values into buckets, e.g., aggregating measurements into fixed time-intervals, or grouping values by patient ID.
4. **Normalization**: Normalizing values into standard ranges or using statistics of the data.
5. **Imputation**: Handling missing values in the data. For various clinical data, 'missingness' can actually provide valuable contextual information about the patient's health and needs to be handled carefully [47].

Multiple data sources such as EHR data, clinical notes and text, imaging data, and genomics data can be processed independently to create features, and these features can be combined to be used as inputs to ML models. Hence, composing pipelines of these tasks facilitates component reusability [115]. Furthermore, since the ML development life-cycle constitutes a chain of tasks, the pipelining approach becomes even more desirable. Some of the high-level tasks in the MLHOps pipeline include feature creation, feature selection, model training, evaluation, and monitoring. Evaluating models across different slices of data, hyper-parameters, and other confounding variables is necessary for building trust. Table 7 lists popular open-source tools and packages specific to health data and ML processing. These tools are at different stages of development and maturity. Some examples of popular tools include MIMIC-Extract [273], Clairvoyance [115], and CheXstray [245].
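As a minimal sketch of how the cleaning, aggregation, normalization, and imputation steps above can be chained into one reusable pipeline (the column names, time bucket, and physiologic bounds are illustrative assumptions over an EHR-style vitals table):

```python
# Minimal sketch composing cleaning, aggregation, normalization, and imputation into one pipeline
# over an EHR-style vitals table. Column names, bucket size, and bounds are illustrative assumptions.
import pandas as pd

def clean(df: pd.DataFrame) -> pd.DataFrame:
    # Cleaning: fix data types and drop physiologically impossible values.
    df = df.copy()
    df["charttime"] = pd.to_datetime(df["charttime"])
    df["heart_rate"] = pd.to_numeric(df["heart_rate"], errors="coerce")
    return df[df["heart_rate"].between(20, 300) | df["heart_rate"].isna()]

def aggregate(df: pd.DataFrame, hours: int = 6) -> pd.DataFrame:
    # Aggregation: bucket measurements into fixed time-intervals per patient.
    bucket = df["charttime"].dt.floor(f"{hours}h")
    return df.groupby(["patient_id", bucket])["heart_rate"].mean().reset_index()

def normalize(df: pd.DataFrame) -> pd.DataFrame:
    # Normalization: z-score the values using statistics of the data.
    df = df.copy()
    df["heart_rate"] = (df["heart_rate"] - df["heart_rate"].mean()) / df["heart_rate"].std()
    return df

def impute(df: pd.DataFrame) -> pd.DataFrame:
    # Imputation: forward-fill within each patient, keeping an explicit missingness indicator.
    df = df.copy()
    df["heart_rate_missing"] = df["heart_rate"].isna().astype(int)
    df["heart_rate"] = df.groupby("patient_id")["heart_rate"].ffill()
    return df

def pipeline(raw: pd.DataFrame) -> pd.DataFrame:
    # Chain the steps so that the output of each element is the input of the next.
    return impute(normalize(aggregate(clean(raw))))
```

Keeping each step as a separate, independently testable function is what enables the modularization, testing, and automation benefits listed above.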
### Modelling

At this stage, the data has been collected, cleaned, and curated, ready to be fed to the ML model to accomplish the desired task. The modelling phase involves choosing the available models that fit the problem, training and testing the models, and choosing the model with the best performance and reliability guarantees. Given the existence of numerous surveys summarizing machine learning and deep learning algorithms for general healthcare scenarios [74; 1], as well as specific use cases such as brain tumor detection [18], COVID-19 prevention [26], and clinical text representation [127], we omit this discussion and let the reader explore the surveys relevant to their prediction problem.

### Infrastructure and System

Hospitals typically use models developed by their EHR vendor, which are deployed through the native EHR vendor configuration. Often, inference is run locally or in a cloud instance, and the model outputs are communicated within the EHR [124]. Predominantly, these models are pre-trained and sometimes fine-tuned on the specific site's data. A feature store is an ML-specific data system used to centralize storage, processing, and access to frequently used features, making them available for reuse in the development of future machine learning models. Feature stores operationalize and streamline the input, tracking, and governance of the data as part of feature engineering for machine learning [134]. To ensure reliability, the development, staging, and production environments are separated and have different requirements. The staging and production environments typically consist of independent virtual machines with adequate compute and storage, along with reliable and secure connections to the databases. The infrastructure and software systems also have to follow and comply with cybersecurity, medical software design, and software testing standards [65].

#### 3.4.1 Roles and Responsibilities

Efficient and successful MLHOps requires a collaborative, interdisciplinary team across a range of expertise and competencies commonly found in data science, ML, software, operations, production engineering, medicine, and privacy capabilities [134]. Similar to general MLOps practices, data and ML scientists; data, DevOps, and ML engineers; solution and data architects; ML and software full-stack developers; and project managers are needed. In addition, the following roles are required, which are distinct to healthcare (for more general MLOps roles see Table 5):

* **Health AI Project Managers:** Responsibilities include planning projects, establishing guidelines, milestone tracking, managing risk, supporting the teams, and governing partnerships with collaborators from other health organizations.
* **Health AI Implementation Coordinator:** Liaison that engages with key stakeholders to facilitate the implementation of clinical AI systems.
* **Healthcare Operations Manager:** Oversees and coordinates quality management, resource management, process improvement, and patient safety in clinical settings like hospitals.
* **Clinical Researchers & Scientists:** Domain experts that provide critical domain-specific knowledge relevant to model development and implementation.
* **Patient-Facing Practitioners:** Responsibilities include providing system requirements, pipeline usage feedback, and perspective about the patient experience (e.g., clinicians, nurses).
* **Ethicists:** Provide support regarding ethical implications of clinical AI systems.
* **Privacy Analysts:** Provide assessments regarding privacy concerns pertaining to the usage of patient data.
* **Legal Analysts:** Work closely with privacy analysts and ethicists to evaluate the legal vulnerabilities of clinical AI systems.
### Reporting Guidelines Many clinical AI systems do not meet reporting standards because of a failure to assess for poor quality or unavailable input data, insufficient analysis of performance errors, or a lack of information regarding code or algorithm availability [208]. Systematic reviews of clinical AI systems suggest there is a substantial reporting burden, and additions regarding reliability and fairness can improve reporting [164]. As a result, guidelines informed by challenges in existing AI deployments in health settings have become imperative [57]. Reporting guidelines including CONSORT-AI [158], DECIDE-AI [265], and SPIRIT-AI [225] were developed by a multidisciplinary group of international experts using the Delphi process to ensure complete and transparent reporting of randomized clinical trials (RCT) that evaluate interventions with an AI model. Broadly these guidelines suggest inclusion of the following criteria [65]: * **Intended use**: Inclusion of the medical problem and context, current standard practice, intended patient population(s), how the AI system will be integrated into the care pathway, and the intended patient outcomes. * **Patient and user recruitment**: Well-defined inclusion and exclusion criteria. * **Data and outcomes**: The use of a representative patient population, data coding and processing, missing- and low-quality data handling, and sample size considerations. * **Model**: Inclusion of inputs, outputs, training, model selection, parameter tuning, and performance. * **Implementation**: Inclusion of user experience with the AI system, user adherence to intended implementation, and changes to clinical workflow. * **Modifications**: A description protocol for changes made, timing and rationale for modifications, and outcome changes after each modification. * **Safety and errors**: Identification of system errors and malfunctions, anticipated risks and mitigation strategies, undesirable outcomes, and worst-case scenarios. * **Ethics and fairness**: Inclusion of subgroup analyses, and fairness metrics. * **Human-computer agreement**: Report of user agreement with the AI system, reasons for disagreement, and cases of users changing their mind based on the AI system. * **Transparency**: Inclusion of data and code availability. * **Reliability**: Inclusion of uncertainty measures, and performance against realistic baselines. * **Generalizability**: Inclusion of measures taken to reduce overfitting, and external performance evaluations. #### 3.5.1 Tools and Frameworks Understanding the MLOps pipeline and required expertise is just the first step to addressing the problem. Once this has been accomplished, it is necessary to create and/or adopt appropriate tooling for transforming these principles into practice. There are seven broad categories of MLOps tools as shown in Table 1 whereby different tools to automate different phase of the workflows involved in MLOps processes exist. 
A compiled list of tools within each category is shown in Table 1.

Table 1: MLOps tool categories

| Category | Relevant section |
|---|---|
| Model metadata storage and management | Section 3.1 |
| Data and pipeline versioning | Section 3.2 |
| Model deployment and serving | Section 3.3 |
| Production model monitoring | Section 4 |
| Run orchestration and workflow pipelines (orchestrating the execution of preprocessing, training, and evaluation pipelines) | Section 3.4 |
| Collaboration tools (setting up an MLOps pipeline requires collaboration between different people) | Section 3.4.1 |

Representative tooling across these categories includes MLflow, Comet, Neptune, DVC, and ChatOps tools.

## 4 MLHOps Monitoring and Updating

Once an MLHOps pipeline and required resources are set up and deployed, robust monitoring protocols are crucial to the safety and longevity of clinical AI systems. For example, inevitable updates to a model can introduce various operational issues (and vice versa), including bias (e.g., a new hospital policy that shifts the nature of new data) and new classes (e.g., new subtypes in a disease classifier) [287]. Incorporating expert labels can improve model performance; however, the time, cost, and expertise required to acquire accurate labels for very large imaging datasets like those used in radiology- or histology-based classifiers makes this difficult [138]. As a result, there exist monitoring frameworks with policies to determine when to query experts for labels [300]. These include:

* **Periodic Querying**, a non-adaptive policy whereby labels are periodically queried in batches according to a predetermined schedule;
* **Request-and-Reverify**, which sets a predetermined threshold for drift and queries a batch of labels whenever the drift threshold is exceeded [288];
* **MLDemon**, which follows a periodic query cycle and uses a linear estimate of the accuracy based on changes in the data [90].

### Time-scale windows

Monitoring clinical AI systems requires evaluating robustness to temporal shifts. Since the time-scale used can change the types of shifts detected (i.e., gradual versus sudden shifts), multiple time windows should be considered (e.g., week, month). Moreover, it is important to use both 1) **cumulative statistics**, which use a single time window and update at the beginning of each window, and 2) **sliding statistics**, which retain previous data and update with new data.

### Appropriate metrics

It is critical to choose evaluation and monitoring metrics optimal for each clinical context. The quality of labels is highly dependent on the data from which they are derived and, as such, can possess inherent biases. For instance, sepsis labels derived from incorrect billing codes will inherently have a low positive predictive value (PPV). Moreover, clinical datasets are often imbalanced, consisting of far fewer positive instances of a label than negative ones. As a result, measures like accuracy that weigh positive and negative labels equally can be detrimental to monitoring. For instance, in the context of disease classification, it may be particularly important to monitor sensitivity, in contrast to more time-sensitive clinical scenarios like the intensive care unit (ICU) where false positives (FP) can have critical outcomes [20].
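Combining the two preceding subsections, the following is a minimal sketch of tracking a clinically chosen monitoring metric (here sensitivity) with both cumulative and sliding window statistics; the window size and reporting period are illustrative assumptions:

```python
# Minimal sketch of cumulative vs. sliding monitoring statistics for a single metric (sensitivity).
# Window size and reporting-period handling are illustrative assumptions.
from collections import deque

class CumulativeSensitivity:
    """Accumulates over a single window; reset at the start of each reporting period (e.g., weekly)."""
    def __init__(self):
        self.tp, self.fn = 0, 0
    def update(self, y_true: int, y_pred: int) -> float:
        if y_true == 1:
            self.tp += int(y_pred == 1)
            self.fn += int(y_pred == 0)
        return self.tp / (self.tp + self.fn) if (self.tp + self.fn) else float("nan")
    def reset(self):
        self.tp, self.fn = 0, 0

class SlidingSensitivity:
    """Computes the same metric over only the most recent positive cases."""
    def __init__(self, size: int = 200):
        self.recent = deque(maxlen=size)   # 1 for detected positives, 0 for missed positives
    def update(self, y_true: int, y_pred: int) -> float:
        if y_true == 1:
            self.recent.append(int(y_pred == 1))
        return sum(self.recent) / len(self.recent) if self.recent else float("nan")

cumulative, sliding = CumulativeSensitivity(), SlidingSensitivity(size=200)
# Both monitors are updated as labelled cases arrive; sudden drops show first in the sliding view,
# while slow drifts accumulate in the cumulative view.
```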
### Detecting data distribution shift

Data distribution shift occurs when the underlying distribution of the training data used to build an ML model differs from the distribution of data applied to the model during deployment [214]. When the difference between the probability distributions of these data sets is sufficient to deteriorate the model's performance, the shift is considered malignant. In healthcare, there are multiple sources of data distribution shifts, many of which can occur concurrently [78; 248]. Common occurrences of malignant shifts include differences attributed to:

* **Institutional differences:** These differences can arise when comparing teaching to non-teaching hospitals, government-owned to private hospitals, or general to specialized hospitals (e.g., paediatric, rehabilitation, trauma). These institutions can have differing local clinical practices, resource allocation schemes, medical instruments, and data-collection and processing workflows that can lead to downstream variation. This has previously been reported in pneumothorax classifiers when evaluated on external institutions [130].
* **Behavioural and temporal changes:** Temporal changes in behaviour at the systemic, physician, and patient levels are unavoidable sources of data drift. These changes include new healthcare reimbursement incentives, changes in the standard-of-care in medical practice, novel therapies, and updates to hospital operational processes. An example of this is the COVID-19 pandemic, which required changes in resource allocation to cope with hospital bed shortages [132; 201].
* **Demographic differences:** Differences in factors like age, race, gender, religion, and socioeconomic background can arise for various reasons including epidemiological transitions, gentrification of neighbourhoods around a health system, and new public health and immigration policies. Distribution shifts due to demographic differences can disproportionately deteriorate model performance in specific patient populations. For instance, although Black women are more likely to develop breast tumours with poor prognosis, many breast mammography ML classifiers experience deterioration in performance on this patient population [284]. Similarly, skin-lesion classifiers trained primarily on images of lighter skin tones may show decreased performance when evaluated on images of darker skin tones [9; 69].
* **Technological differences:** Data shifts can be attributed to changes in technology between institutions or over time. This includes chest X-ray classifiers trained on portable radiographs that are evaluated on stationary radiographs, or deterioration of clinical AI systems across EHR systems (e.g., Philips Carevue vs. Metavision) [188].

Although evaluated differently, data shifts are present across various modalities of clinical data such as medical images [98] and EHR data [70; 201]. In order to effectively prevent these malignant shifts from occurring, it is necessary to perform prospective evaluation of clinical AI systems [303] in order to understand the circumstances under which they arise, and to design strategies that mitigate model biases and improve models for future iterations [290].
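Before turning to the specific categories of shift below, the feature-level two-sample testing described there can be sketched as follows, using SciPy's Kolmogorov-Smirnov test between a reference window (e.g., training data) and a deployment window; the feature names, toy data, and significance threshold are illustrative assumptions:

```python
# Minimal sketch of univariate drift detection between a reference window and a deployment window,
# using a two-sample Kolmogorov-Smirnov test per numeric feature. Names and threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

def feature_drift_report(reference: dict, current: dict, alpha: float = 0.01) -> dict:
    """Return per-feature KS statistics, p-values, and a drift flag at significance level alpha."""
    report = {}
    for name, ref_values in reference.items():
        stat, p_value = ks_2samp(ref_values, current[name])
        report[name] = {"ks_stat": float(stat), "p_value": float(p_value), "drifted": p_value < alpha}
    return report

rng = np.random.default_rng(0)
reference = {"age": rng.normal(62, 15, 5000), "lactate": rng.normal(1.8, 0.9, 5000)}
current   = {"age": rng.normal(62, 15, 2000), "lactate": rng.normal(2.6, 1.1, 2000)}  # shifted lactate
for feature, result in feature_drift_report(reference, current).items():
    print(feature, result)
```

In deployment, such per-feature alerts are most useful when correlated with ground-truth performance differences, as emphasized in the covariate-shift discussion below.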
Broadly, these data shifts can be categorized into three groups which can co-occur or lead to one another: #### Covariate Shift Covariate shift is a difference in the distribution of input variables between source and target data. It can occur due to a lack of randomness, inadequate sampling, biased sampling, or a non-stationary environment. This can be at the level of a single input variable (i.e. feature shift) or a group of input features (i.e. dataset shift). Table 4.3.1 contains a list of commonly used methods used for covariate shift detection. **Feature Shift Detection:** Feature shift refers to the change in distribution between the source and target data for a single input feature. Feature shift detection can be performed using two-sample univariate tests such as the Kolmogorov-Smirnov (KS) test [215]. Publicly available tools like TensorFlow Extended (TFX) apply univariate tests (i.e., \(L\)-infinity distance for categorical variables, Jensen-Shannon divergence for continuous variables) to perform feature shift detection between training and deployment data and provide users with summary statistics (Table 4.4). It is also possible to detect feature shift while conditioning on the other features in a model using conditional distribution tests [135]. **Dataset Shift Detection:** Dataset shift refers to the change in the joint distribution between the source and target data for a group of input features. Multivariate testing is crucial because input to ML models typically consist of more than one variable and multiple modalities. In order to test whether the distribution of the target data has drifted from the source data two main approaches exist: 1) **two-sample testing** and 2) **classifiers**. These approaches often work better on low-dimensional data compared to high-dimensional data, therefore dimensionality reduction is typically applied first [215]. For instance, variational autoencoders (VAE) have been used to reduce chest X-ray images to a low-dimensional space prior to two-sample testing [245]. In the context of medical images, including chest X-rays [211][289], diabetic retinopathies [41], and histology slides [246], classifier methods have proven effective. For EHR data, dimensionality reduction using clinically meaningful patient representations has improved model performance [188]. For clinically relevant drift detection, it is important to ensure that drift metrics correlate well with ground truth performance differences. #### 4.3.2 Concept Shift Concept shift is a difference in the relationship (i.e., joint distribution) of the variables and the outcome between the source and target data. In healthcare, concept shift can arise due to changes in symptoms for a disease or antigenic drift. This has been explored in the context of surgery prediction [32] and medical triage for emergency and urgent care [112]. **Concept Shift Detection:** There are three broad categories of concept shift detection based on their approach. 1. **Distribution techniques** which use a sliding window to divide the incoming data streams into windows based on data size or time interval and that compare the performance of the most recent observations with a reference window [84]. ADaptive WINdowing (ADWIN), and its extension ADWIN2, are windowing techniques which use the Hoeffding bound to examine the change between the means of two sufficiently large subwindows [106]. 2. 
**Sequential Analysis** strategies use the Sequential Probability Ratio Test (SPRT) as the basis for their change detection algorithms. A well-known algorithm is CUMSUM which outputs an alarm when the mean of the incoming data significantly deviates from zero [29]. 3. **Statistical Process Control** (SPC) methods track changes in the online error rate of classifiers and trigger an update process when there is a statistically significant change in error rate [163]. Some common SPC methods include: Drift Detection Method (DDM), Early Drift Detection Method (EDDM), and Local Drift Detection (LLDD) [23]. \begin{table} \begin{tabular}{|l||l||l|} \hline Method & Shift & Test Type \\ \hline L-infinity distance & Feature (c) & 2-ST \\ Cramer-von Mises & Feature (c) & 2-ST \\ Fisher’s Exact Test & Feature (c) & 2-ST \\ Chi-Squared Test & Feature (c) & 2-ST \\ Jensen-Shannon divergence & Feature (n) & 2-ST \\ Kolmogorov-Smirnov [174] & Feature (n) & 2-ST \\ Feature Shift Detector [135] & Feature & Model \\ Maximum Mean Discrepancy (MMD) [93] & Dataset & 2-ST \\ Least Squares Density Difference [37] & Dataset & 2-ST \\ Learned Kernel MMD [155] & Dataset & 2-ST \\ Context Aware MMD [56] & Dataset & 2-ST \\ MMD Aggregated [236] & Dataset & 2-ST \\ Classifier [161] & Dataset & Model \\ Spot-the-diff [117] & Dataset & Model \\ Model Uncertainty [240] & Dataset & Model \\ Mahalanobis distance [222] & Dataset & Model \\ Gram matrices [202][234] & Dataset & Model \\ Energy Based Test [157] & Dataset & Model \\ H-Divergence [299] & Dataset & Model \\ \hline \end{tabular} \end{table} Table 2: **Covariate Shift Detection Methods c: categorical; n: numeric; 2-ST: Two-Sample Test** #### 4.3.3 Label Shift Label shift is a difference in the distribution of class variables in the outcome between the source and target data. Label shift may appear when some concepts are under-sampled or over-sampled in the target domain compared to the source domain. Label shift arises when class proportions differ between the source and target, but the feature distributions of each class do not. For instance, in the context of disease diagnosis, a classifier trained to predict disease occurrence is subject to drift due to changes in the baseline prevalence of the disease across various populations. **Label Shift Detection:** Label shift can be detected using moment matching-based estimator methods that leverage model predictions like Black Box Shift Estimation (BBSE) [151] and Regularized Learning under Label Shift (RLLS) [22]. Assuming access to a classifier that outputs the true source distribution conditional probabilities \(p_{s}(y|x)\) Expectation Maximization (EM) algorithms like Maximum Likelihood Label Shift (MLLS) can also be used to detect label shift [87]. Furthermore, methods using bias-corrected calibration show promise in correcting label shift [14]. ### Model Updating and Retraining As the implementation of ML-enabled tools is realized in the clinic, there is a growing need for continuous monitoring and updating in order to improve models over time and adapt to malignant distribution shifts. Retraining of ML models has demonstrated improved model performance in clinical contexts like pneumothorax diagnosis [130]. However, proposed modifications can also degrade performance and introduce bias [149]; as a result it may be preferable to avoid making a prediction and defer the decision to a downstream expert [186]. 
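As a minimal sketch of the deferral behaviour mentioned above, i.e., abstaining from an automatic prediction and routing a case to a downstream expert when the model is not confident enough (the threshold and class names are illustrative assumptions and would require clinical calibration):

```python
# Minimal sketch of a selective-prediction wrapper that defers low-confidence cases to an expert.
# The 0.85 confidence threshold and class names are illustrative assumptions, not recommendations.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    label: Optional[str]     # None when the case is deferred to a downstream expert
    confidence: float
    deferred: bool

def predict_or_defer(probabilities: dict, threshold: float = 0.85) -> Decision:
    """Emit the top class only if its probability clears the threshold; otherwise defer."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return Decision(label=None, confidence=confidence, deferred=True)
    return Decision(label=label, confidence=confidence, deferred=False)

print(predict_or_defer({"pneumothorax": 0.55, "no finding": 0.45}))   # deferred to expert review
print(predict_or_defer({"pneumothorax": 0.93, "no finding": 0.07}))   # automatic prediction
```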
When defining a model updating or retraining strategy for clinical AI models, there are several factors to consider [279]; we outline the key criteria in this section.

#### 4.4.1 Quality and Selection of Model Update Data

When updating a model, it is important to consider the relevance and size of the data to be used. This is typically done by defining a window of data to update the model: i) a **fixed window** uses a window that remains constant across time; ii) a **dynamic window** uses a window that changes in size in response to an adaptive data shift; iii) a **representative subsample** uses a subsample from a window that is representative of the entire window distribution.

#### 4.4.2 Updating Strategies

There are several ways to update a model, including: i) **Model recalibration** is the simplest type of model update, where continuous scores (e.g., predicted risks) produced by the original model are mapped to new values [52]. Some common methods to achieve this include Platt scaling [209], temperature scaling, and isotonic regression [191]. ii) **Model updating** includes changes to an existing model, for instance, fine-tuning with regularization [139], or model editing, where pre-collected errors are used to train hypernetworks that can be used to edit a model's behaviour by predicting new weights or building a new classifier [182]. iii) **Model retraining** involves retraining a model from scratch or fitting an entirely different model.

Table 3: List of open-source tools available on GitHub that can be used for ML monitoring and updating

| Name of tool | Capabilities |
|---|---|
| Evidently [20] | Interactive reports to analyze ML models during validation or production monitoring. |
| NannyML [21] | Performance estimation and monitoring, data drift detection, and intelligent alerting for deployment. |
| River [185] | Online metrics, drift detection, and outlier detection for streaming data. |
| SeldonCore [262] | Serving, monitoring, explaining, and management of models using advanced metrics, explainers, and outlier detection. |
| TFX [22] | Explore and validate data used for machine learning models. |
| TorchDrift [23] | Covariate and concept drift detection. |
| deepchecks [54] | Testing for continuous validation of ML models and data. |
| EHR OOD Detection [258] | Uncertainty estimation, OOD detection, and (deep) generative modelling for EHRs. |
| Avalanche [160] | Prototyping, training, and reproducible evaluation of continual learning algorithms. |
| Giskard [24] | Evaluation, monitoring, and drift testing. |

#### 4.4.3 Frequency of Model Updates

In practice, retraining procedures for clinical AI models have generally been locked after FDA approval [140] or confined to ad-hoc one-time updates [261][104]. The timing of when it is necessary to update or retrain a model varies across use cases. As a result, it is imperative to evaluate the appropriate frequency at which to update a model. Strategies employed include: i) **periodic training** on a regular schedule (e.g., weekly, monthly); ii) a **performance-based trigger** in response to a statistically significant change in performance; iii) a **data-based trigger** in response to a statistically significant data distribution shift; iv) **retraining on demand**, which is not based on a trigger or regular schedule and is instead initiated based on user prompts.
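A minimal sketch of how the periodic, performance-based, and data-based triggers above can be combined into a single retraining decision follows; the thresholds, schedule, and choice of AUROC as the monitored metric are illustrative assumptions that would need tuning per use case:

```python
# Minimal sketch combining periodic, performance-based, and data-based retraining triggers.
# All thresholds, the 30-day schedule, and the monitored metric are illustrative assumptions.
from datetime import datetime, timedelta

def should_retrain(last_trained: datetime,
                   now: datetime,
                   baseline_auroc: float,
                   current_auroc: float,
                   drift_p_value: float,
                   schedule: timedelta = timedelta(days=30),
                   max_auroc_drop: float = 0.03,
                   drift_alpha: float = 0.01):
    """Return (trigger fired, reason). In practice each trigger would also require human review."""
    if now - last_trained >= schedule:
        return True, "periodic: scheduled retraining window reached"
    if baseline_auroc - current_auroc > max_auroc_drop:
        return True, "performance-based: significant drop in monitored AUROC"
    if drift_p_value < drift_alpha:
        return True, "data-based: significant distribution shift detected"
    return False, "no trigger fired"

fired, reason = should_retrain(
    last_trained=datetime(2023, 1, 1), now=datetime(2023, 1, 20),
    baseline_auroc=0.86, current_auroc=0.81, drift_p_value=0.2,
)
print(fired, "-", reason)
```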
#### 4.4.4 Continual Learning Continual learning is a strategy used to update models when there is a continuous stream of input data that may be subject to changes over time. Prior to deployment, it is crucial to simulate the online learning procedure on retrospective data to assess robustness to data shifts [51][198]. When models are retrained on only the most recent data, this can result in "catastrophic forgetting" [267][140], in which the integration of new data into the model can overwrite knowledge learned in the past and interfere with what the model has already learned [138]. Contrastingly, procedures that retrain models on all previously collected data can fail to adapt to important temporal shifts and are computationally expensive. More recently, strategies leveraging **multi-armed bandits** have been utilized to select important samples or batches of data for retraining [92][301]. This is an important consideration in healthcare contexts like radiology, where the labelling of new data can be a time-consuming bottleneck [100][206]. To ensure continual learning satisfies performance guarantees, hypothesis testing can be used for approving proposed modifications [62]. An effective approach for parametric models include continual updating procedures like online recalibration/revision [76]. Strategies for continual learning can broadly be categorized into: 1) **Parameter isolation** where changes to parameters that are important for the previous tasks are forbidden e.g. Local Winner Takes All (LWTA), Incremental Moment Matching (IMM) [260]; 2) **Regularization methods** which builds on the observation forgetting can be reduced by protecting parameters that are important for the previous tasks e.g. elastic weight consolidation (EWC), Learning Without Forgetting (LWF); and 3) **Replay-based approaches** that retain some samples from the previous tasks and use them for training or as constraints to reduce forgetting e.g. episodic representation replay (ERR) [66]. Evaluation of several continual learning methods on ICU data across a large sequence of tasks indicate replay-based methods achieves more stable long-term performance, compared to regularization and rehearsal based methods [19]. In the context of chest X-ray classification, Joint Training (JT) has demonstrated superior model performance, with LWF as a promising alternative in the event that training data is unavailable at deployment [141]. For sepsis prediction using EHR data, a joint framework leveraging EWC and ERR has been proposed [16]. More recently, continual model editing strategies have shown promise in overcoming the limitations of continual fine-tuning methods by updating model behavior with minimal influence on unrelated inputs and maintaining upstream test performance [105]. #### 4.4.5 Domain Generalization and Adaptation Broadly, domain generalization and adaptation methods are used to improve clinical AI model stability and robustness to data shifts by reducing distribution differences between training and test data [293][95]. However, it is critical to evaluate several methods over a range of metrics, as the effectiveness of each method varies based on several factors including the type of shift and data modality [285]. * **Data-based methods** perform manipulations based on the patient data to minimize distribution shifts. 
This can be done by re-weighting observations during training based on the target domain [133], upsampling informative training examples [153], or leveraging a combination of labeled and pseudo-labeled data [147].
* **Representation-based methods** focus on achieving a feature representation such that the source classifier performs well on the target domain. In clinical data this has been explored using strategies including invariant risk minimization (IRM), distribution matching (e.g., CORAL), and domain-adversarial adaptation networks (DANN). DANN methods have demonstrated a reduction in the impact of data shift on cross-institutional transfer performance for diagnostic prediction [296]. However, it has been shown that for clinical AI models subject to real-life data shifts, in contrast to synthetic perturbations, empirical risk minimization outperforms domain generalization and unsupervised domain adaptation methods [97][294].
* **Inference-based methods** introduce constraints on the optimization procedure to reduce domain shift [133]. This can be done by estimating a model's performance on the "worst-case" distribution [249] or constraining the learning objective to enforce closeness between protected groups [237]. Batch normalization statistics can also be leveraged to build models that are more robust to covariate shifts [235].

#### 4.4.6 Data Deletion and Unlearning

In healthcare there are two primary reasons for wanting to remove data from models. Firstly, with the growing concerns around privacy and ML in healthcare, it may become necessary to remove patient data for privacy reasons. Secondly, it may also be beneficial to a model's performance to delete noisy or corrupted training data [35]. The naive approach to data deletion is to exclude unwanted samples and retrain the model from scratch on the remaining data; however, this approach can quickly become time-consuming and resource-intensive [114]. As a result, more sophisticated approaches have been proposed for unlearning in linear and logistic models [114], random forest models [36], and other non-linear models [96].

#### 4.4.7 Feedback Loops

Feedback loops that incorporate patient outcomes and clinician decisions are critical to improving outcomes in future model iterations. However, retraining feedback loops can also lead to error amplification and subsequent downstream increases in false positives [6]. As a result, it is important to consider model complexity and choose an appropriate classification threshold to ensure minimization of error amplification [7].

## 5 Responsible MLHOps

The use of AI in healthcare has surged, partly out of necessity [290, 199], but many issues still exist. For instance, many sources of bias exist in clinical data, large models are opaque, and there are malicious actors who may damage or pollute AI/ML systems. In response, responsible AI and trustworthiness have together become a growing area of study [176, 264]. Responsible AI, or trustworthy MLOps, is defined as an ML pipeline that is fair and unbiased, explainable and interpretable, secure, private, reliable, robust, and resilient to attacks. In healthcare, trust is critical to ensuring a meaningful relationship between the healthcare provider and patient [63]. In this section, we discuss components of responsible and trustworthy AI [142], which can be applied to the MLHOps pipeline.
In Section 5.1, we review the main concepts of responsible AI, and in Section 5.2 we explore how these concepts can be embedded in the MLHOps pipeline to enable safe deployment of clinical AI systems.

### Responsible AI in healthcare

**Ethics in healthcare:** Ethics in healthcare primarily consists of the following criteria [263]:

1. **Nonmaleficence:** Do not harm the patient.
2. **Beneficence:** Act to the benefit of the patient.
3. **Autonomy:** The patient (when able) should have the freedom to make decisions about his/her body. More specifically, the following aspects should be taken care of:
   * **Informed Consent:** The patient (when able) should give informed consent for any medical or surgical procedure, or for research.
   * **Truth-telling:** The patient (when able) should receive full disclosure of his/her diagnosis and prognosis.
   * **Confidentiality:** The patient's medical information should not be disclosed to any third party without the patient's consent.
4. **Justice:** Ensure fairness to the patient.

To supplement these criteria, guiding principles drawn from surgical settings [152, 228] include:

5. **Rescue:** A patient surrenders to the healthcare provider's expertise to be rescued.
6. **Proximity:** The emotional proximity to the patient should be limited to maintain self-preservation and stability in case of any failure.
7. **Ordeal:** A patient may have to face an ordeal (i.e., go through painful procedures) in order to be rescued.
8. **Aftermath:** The physical and psychological aftermath that may occur to the patient due to any treatment must be acknowledged.
9. **Presence:** An empathetic presence must be provided to the patient.

While some of these criteria relate to the humanity of the healthcare provider, others relate to the following topics in ML models:

* **Fairness** involves the justice component in the healthcare domain [50].
* **Interpretability & explainability** relate to explanations and better understanding of the ML models' decisions, which can help in achieving the nonmaleficence, beneficence, informed consent, and truth-telling principles in healthcare. Interpretability can help identify the reasons for a given model outcome, which can help inform healthcare providers and patients on how to respond accordingly [179].
* **Privacy and security** relate to confidentiality [126].
* **Reliability, robustness, and resilience** address rescue [227].

We discuss these concepts further in Sections 5.1.1, 5.1.2, 5.1.3 and 5.1.4.

#### 5.1.1 Bias & Fairness

The fairness of AI-based decision support systems has been studied generally in a variety of applications including occupation classifiers [64], criminal risk assessment algorithms [55], recommendation systems [71], facial recognition algorithms [38], search engines [85], and risk score assessment tools in hospitals [193]. In recent years, the topic of fairness in AI models in healthcare has received a lot of attention [193, 241, 137, 49, 278, 242]. Unfairness in healthcare manifests as differences in model performance against or in favour of a sub-population, for a given predictive task - for instance, disproportionate performance differences for disease diagnosis in Black versus White patients [241].

#### 5.1.1.1 Causes

A lack of fairness in clinical AI systems may be a result of various contributing causes:

* **Unfair objective functions:** The initial objective used in developing an ML approach may not consider fairness.
This does not mean that the developer explicitly (or implicitly) used an unfair objective function to train the model, but the oversimplification of that objective can lead to downstream issues. For example, a model designed to maximize accuracy across all populations may not inherently provide fairness across different sub-populations, even if it reaches state-of-the-art performance on average across the whole population [241; 242].
* **Incorrect presumptions:** In some instances, the objective function includes incorrect interpretations of features, which can lead to bias. For instance, a commercial algorithm used in the USA used health costs as a proxy for health needs [193]; however, due to financial limitations, Black patients with the same need for care as White patients often spend less on healthcare and therefore have a lower health cost. As a result, the model falsely inferred that Black patients require less care compared to White patients because they spend less [193]. Additionally, patients may be charged differently for the same service based on their insurance, suggesting cost may not be representative of healthcare needs.
* **Inclusion and exclusion:** It is important to clearly outline the conditions and procedures utilized for patient data collection, in order to understand patient inclusion criteria and any potential selection biases that could occur. For instance, the Chest X-ray dataset [275] was gathered in a research hospital that does not routinely conduct diagnostic and treatment procedures. This dataset therefore includes mostly critical cases, and few patients at the early stages of diagnosis. Moreover, as a specialized hospital, patient admission is selective and chosen solely by institute physicians based on whether patients have an illness being studied by the given institute (see https://clinicalcenter.nih.gov/about/welcome/faq.html). Such a dataset will not contain the diversity of disease cases that might be seen in hospitals specialized across different diseases, or account for patients visiting for routine treatment services at general hospitals.
* **Insufficient sample size:** Insufficient sample sizes of under-represented groups can also result in unfairness [89]. For instance, patients of low socioeconomic status may use healthcare services less, which reduces their sample size in the overall dataset, resulting in an unfair model [294, 38, 49]. In another instance, an algorithm that can classify skin cancer [73] with high accuracy will not be able to generalize to different skin colours if similar samples have not been represented sufficiently in the training data [38].
* **Missing essential representative features:** Sometimes, essential representative features are missed or not collected during the dataset curation process, which prohibits downstream fairness analyses. For instance, if the patient's race has not been recorded, it is not possible to analyze whether a model trained on that data is fair with respect to that race [242]. Failure to include sensitive features can generate discrimination and reduce transparency [48].
* **Social bias reflection on labels:** Biases in healthcare systems widely reflect existing biases in society [168, 250, 269].
For instance, race and sex biases exist in COPD underdiagnosis [168], in medical risk score analysis (whereby there exists a higher threshold for Black patients to gain access to clinical resources) [269], and in the time of diagnosis for cardiovascular disease (whereby female patients are diagnosed much later compared to the male patients with similar conditions) [250]. These biases are reflected in the labels used to train clinical AI systems and, as a result, the model will learn to replicate this bias. **Bias of automatic labeling:** Due to the high cost and labour-intensive process of acquiring labels for healthcare data, there has been a shift away from hand-labelled data, towards automatic labelling [39, 113, 120]. For instance, instead of expert-labeled radiology images, natural language processing (NLP) techniques are applied to radiology reports in order to extract labels. This presents concerns as these techniques have shown racial biases, even after they have been trained on clinical notes [295]. Therefore, using NLP techniques for automatic labeling may sometimes amplify the overall bias of the labels [242]. * **Limited computational resources:* * Not all centers have enough labeled data or computational resources to train ML models 'from scratch' and must use pretrained models for inference or transfer learning. If the original model has been trained on biased (or differently distributed) data, it will unfairly influence the outcome, regardless of the quality of the data at the host center. #### 5.1.1.2 Evaluation To evaluate the fairness of a model, we need to decide which fairness metric to use and what sensitive attributes to consider in our analysis. * **Fairness metric(s):** There are many ways to define fairness metrics. For instance, [55] and [103] discussed several fairness criteria and suggested balancing the error rate between different subgroups [58, 292]. However, it is not always possible to satisfy multiple fairness constraints concurrently [242]. Jon Kleinberg et al., [131] showed that three fairness conditions evaluated could not be simultaneously satisfied. As a result, a trade-off between the different notions of fairness is required, or a single fairness metric can be chosen based on domain knowledge and the given clinical application. * **Sensitive attributes:** Sensitive attributes are protected groups that we want to consider when evaluating the fairness of an AI model. Sex and race are two commonly used sensitive attributes [292, 241, 242, 295]. However, a lack of fairness in an AI system with respect to other sensitive attributes such as age [241, 242], socioeconomic status, [241, 242, 295], and spoken language [295] are also important to consider. Defining AI fairness is context- and problem-dependent. For instance, if we build an AI model to support decision making for disease diagnosis with the goal of using it in the clinic, then it is critical to ensure equal opportunity in the model is provided; i.e., patients from different races should have equal opportunity to be accurately diagnosed [241]. However, if an AI model is to be used to triage patients, then ensuring the system does not underdiagnose unhealthy patients of a certain group may be of greater concern compared to the specific disease itself because the patient will lose access to timely care [242]. #### 5.1.2 Interpretability & Explainability In recent years, interpretability has received a lot of interest from the ML community [184, 253, 172]. 
In machine learning, interpretability is defined as the ability to explain the rationale for an ML model's predictions in terms that a human can understand [68], while explainability refers to a detailed understanding of the model's internal representations, _a priori_ of any decision. Following other research in this area [170], we use 'interpretability' and 'explainability' interchangeably. Interpretability is not a pre-requisite for all AI systems [68, 184], including in low-risk environments (in which miscalculations have very limited consequences) and in well-studied problems (which have been tested and validated extensively according to robust MLOps methods). However, interpretability can be crucial in many cases, especially for systems deployed in the healthcare domain [88]. The need for interpretability arises from the _incompleteness_ of the problem formulation, where system results require an accompanying rationale.

#### 5.1.2.1 Importance of interpretability

Interpretability applied to an ML model can be useful for the following reasons:

* **Trust:** Interpretability enhances trust when all components are well-explained. This builds an understanding of the decisions made by a model and may help integrate it into the overall workflow.
* **Reliability & robustness:** Interpretability can help in auditing ML models, further increasing model reliability.
* **Privacy & security:** Interpretability can be used to assess whether any private information is leaked from the results. While some researchers claim that interpretability may hinder privacy [244, 102], as the interpretable features may leak sensitive information, others have shown that it can help make the system robust against adversarial attacks [145, 297].
* **Fairness:** Interpretability can help in identifying and reducing the biases discussed in Sec. 5.1.1. However, the quality of these explanations can differ significantly between subgroups and, as such, it is important to test various explanation models in order to carefully select an equitable model with high overall fidelity [24].
* **Better understanding and knowledge:** A good interpretation of the model can lead to the identification of the factors that most impact the model. This can also result in a better understanding of the use case itself and enhance knowledge in that particular area.
* **Causality:** Interpretability gives a better understanding of the model's decisions and features, and hence can help to identify causal relationships among the features [43].

#### 5.1.2.2 Types of approaches for interpretability in ML

Many methods have been developed for better interpretability in ML, such as explainable AI for trees [165], TensorFlow Lattice27, DeepLIFT [143], InterpretML [192], LIME [224], and SHAP [166]. Some of these have been applied to healthcare [2, 247]. The methods for interpretability are usually categorized as:

Footnote 27: [https://www.tensorflow.org/lattice](https://www.tensorflow.org/lattice)

* **Model-specific:** Interpretability that can only be used for a particular model. Usually, this type of interpretability uses the model's internal structure to analyze, for example, the impact of features.
* **Model-agnostic:** Interpretability that is not restricted to a specific machine learning model and can be used more generally with several.
* **Intrinsic:** Relatively simple methods, such as decision trees of limited depth, that are interpretable by design and easier for humans to understand.
* **Post-hoc:** For more complex methods, interpretation proceeds after the model has produced its output.
* **Locally interpretable:** Interprets individual or per-instance predictions of the model.
* **Globally interpretable:** Interprets the model's overall prediction set and provides insight into how the model works in general.
* **Methodology-based approaches:**
  * _Feature-based:_ Methods that interpret the model based on the impact of its features, e.g., weight plots and feature selection.
  * _Perturbation-based:_ Methods that interpret the model by perturbing its settings or features, e.g., LIME [224], SHAP [166], and anchors.
  * _Rule-based:_ Methods that apply rules to features to identify their impact on the model, e.g., BETA, MUSE, and decision trees.
  * _Image-based:_ Methods where important inputs are shown using images superimposed over the input, e.g., saliency maps [10].

#### 5.1.2.3 Interpretability in healthcare

In recent years, interpretability has become common in healthcare [2, 179, 220]. In particular, Abdullah _et al._ [2] reported that interpretability methods (e.g., decision trees, LIME, SHAP) have been applied to extract insights into different medical conditions including cardiovascular diseases, eye diseases, cancer, influenza, infection, COVID-19, depression, and autism. Similarly, Meng _et al._ [179] performed interpretability and fairness analyses of deep learning mortality prediction models on the MIMIC-III dataset [119], showing connections between interpretability methods and fairness metrics.

#### 5.1.3 Privacy & Security

While digitizing healthcare has led to centralized data and improved access for healthcare professionals, it has also increased risks to data security and privacy [189]. Following previous work [3], _privacy_ is the individual's ability to control, interact with, and regulate their personal information, and _security_ is the systemic protection of data from leaks or cyber-attacks.

#### 5.1.3.1 Security & privacy requirements

In order to ensure privacy and security, the following requirements should be met [189]:

* **Authentication:** Strong authentication mechanisms for accessing the system.
* **Confidentiality:** Access to data and devices should be restricted to authorized users.
* **Integrity:** Integrity-checking mechanisms should be applied to restrict any modifications to the data or to the system.
* **Non-repudiation:** Logs should be maintained to monitor the system. Access to those logs should be restricted to prevent any tampering.
* **Availability:** Quick, easy, and fault-tolerant availability should be ensured at all times.
* **Anonymity:** Anonymity of the device, data, and communication should be guaranteed.
* **Device unlinkability:** An unauthorized person should not be able to establish a connection between the data and the sender.
* **Auditability and accountability:** It should be possible to trace back the recording time, recording person, and origins of the data to validate its authenticity.

#### 5.1.3.2 Types of threats

Violation of privacy & security can occur either due to human error (unintentional or non-malicious) or an adversarial attack (intentional or malicious).

1. **Human error:** Human error can cause data leakage through the carelessness or incompetence of authorized individuals. Most of the literature in this context [148, 75] divides human error into two types:
   1. **Slip:** the wrong execution of correct, intended actions; e.g., incorrect data entry, forgetting to secure the data, or granting access to information to unauthorized persons by using the wrong email address.
   2. **Mistake:** the correct execution of incorrect, unintended actions; e.g., collecting data that is not required, using the same password for different systems to avoid password recovery, or granting access to information to unauthorized persons on the assumption that they are allowed to have it.

While people dealing with data should be trained to avoid such negligence, some researchers have suggested policies, frameworks, and strategies such as _error avoidance_, _error interception_, or _error correction_ to prevent or mitigate these issues [148, 75].

2. **Adversarial attacks:** A primary risk for any digital data or system is from adversarial attackers [99] who can damage, pollute, or leak information from the system. An adversarial attacker can attack in many ways; e.g., they can be remote or physically present, they can access the system through a third-party device, or they can impersonate a patient [189]. The most common types of attacks are listed below.

* **Hardware or software attack:** Modifying the hardware or software to use it for malicious purposes.
* **System unavailability:** Making the device or data unavailable.
* **Communication attack:** Interrupting the communication or forcing a device to communicate with unauthorized external devices.
* **Data sniffing:** Illegally capturing the communication to get sensitive information.
* **Data modification:** Maliciously modifying data.
* **Information leakage:** Retrieving sensitive information from the system.

#### 5.1.3.3 Healthcare components and security & privacy

Extra care needs to be taken to protect healthcare data [5]. Components [194] include:

* **Electronic health data:** This data can be leaked due to human mistakes or malicious attacks, which can result in tampering or misuse of data. In order to overcome such risks, measures such as access control, cryptography, anonymization, blockchain, steganography, or watermarking can be used.
* **Medical devices:** Medical devices such as smartwatches and sensors are another source of information that can be attacked. Secure hardware and software, authentication, and cryptography can be used to avoid such problems.
* **Medical network:** Data shared across medical professionals and organizations through a network may be susceptible to eavesdropping, spoofing, impersonation, and unavailability attacks. These threats can be reduced by applying encryption, authentication, access control, and compressed sensing.
* **Cloud storage:** Cloud computing is becoming widely adopted in healthcare. However, like any system, it is also prone to unavailability, data breaches, network attacks, and malicious access. Similar to those above, threats to cloud services can be avoided through authentication, cryptography, and decoying (i.e., a method to make an attacker erroneously believe that they have acquired useful information).

#### 5.1.3.4 Healthcare privacy & security laws

Due to the sensitivity of healthcare data and communication, many countries have introduced laws and regulations such as the Personal Information Protection and Electronic Documents Act (PIPEDA) in Canada, the Health Insurance Portability and Accountability Act (HIPAA) in the USA, and the Data Protection Directive in the EU [280]. These acts mainly aim at protecting patient data from being shared or used without consent, while still allowing patients access to their own data.
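To make the kind of protection these components and regulations call for more concrete, the sketch below shows a minimal de-identification step that might be applied before patient records enter an ML pipeline: direct identifiers are dropped and the patient ID is replaced by a keyed (salted) hash. The field names and the choice of identifiers to strip are hypothetical, and a step like this covers only a small fraction of what HIPAA- or PIPEDA-compliant handling actually requires.

```python
import hmac
import hashlib

# Hypothetical direct identifiers to strip; a real deployment would follow the
# de-identification rules of the applicable regulation, not this short list.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email"}

def deidentify(record: dict, secret_key: bytes) -> dict:
    """Return a copy of `record` without direct identifiers and with the
    patient ID replaced by a keyed-hash pseudonym."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    pseudonym = hmac.new(secret_key, str(record["patient_id"]).encode(),
                         hashlib.sha256).hexdigest()[:16]
    clean["patient_id"] = pseudonym
    return clean

# Toy usage with a made-up record.
record = {"patient_id": 12345, "name": "Jane Doe", "address": "1 Main St",
          "phone": "555-0100", "email": "jane@example.com",
          "age": 54, "diagnosis_code": "I10"}
print(deidentify(record, secret_key=b"keep-this-key-in-a-secure-vault"))
```

Keyed hashing (rather than a plain hash) prevents re-identification by anyone who does not hold the key, but quasi-identifiers such as age or rare diagnosis codes can still leak identity, which is why the anonymization models discussed later (Sec. 5.2.1.2), such as k-anonymity and differential privacy, go further.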
#### 5.1.3.5 Attacks on ML pipeline

Any ML model that learns from data can also leak information about that data, even if it generalizes well; e.g., through membership inference (i.e., determining if a particular instance was used to train the model) [111, 178] or property inference (i.e., inferring properties of the training dataset from a given model) [178, 200]. Adversarial attacks in the context of the MLOps pipeline can occur in the following phases [99]:

* **Data collection phase:** At this phase, a _poisoning attack_ results in modified or polluted data, impacting the training of the model and lowering performance on unmodified data.
* **Modelling phase:** Here, a _Trojan AI attack_ can modify a model to provide an incorrect response for specific trigger instances [271] by changing the model architecture and parameters. Since it is now common to use pre-trained models, these models can be modified or replaced by attackers.
* **Production and deployment phases:** At these phases, both _Trojan AI attacks_ and _evasion attacks_ can occur. Evasion attacks consist, e.g., of modifying test data so that it is misclassified [207].

#### 5.1.4 Reliability, robustness and resilience

A trustworthy MLOps system should be reliable, robust, and resilient. These terms are defined as follows [302]:

* **Reliability:** The system performs in a satisfactory manner under specific, unaltered operating conditions.
* **Robustness:** The system performs in a satisfactory manner despite changes in operating conditions, e.g., data shift.
* **Resilience:** The system performs in a satisfactory manner despite a major disruption in operating conditions, e.g., adversarial attacks.

These aspects have been studied in the healthcare domain [181, 213], and different approaches such as interpretability, security, privacy, and methods to deal with data shift (discussed in Sections 5.1.2 and 5.1.3) have been suggested.

**Trade-off between accuracy and trustworthiness:** In Section 5.1, we discussed different important components of trustworthy AI that should be considered while designing an ML system; however, the literature shows that there can be a trade-off between accuracy, interpretability, and robustness [220, 256]. A main reason for this trade-off is that robust models learn a different feature representation, which may decrease accuracy even though it is better aligned with human perception [256].

### Incorporating Responsibility and Trust into MLHOps

In recent years, responsible and trustworthy AI has gained a lot of attention, both in general and for healthcare in particular, due to its implications for society [220]. There are several definitions of trustworthiness [220], and they are related to making the system robust, unbiased, generalizable, reproducible, transparent, explainable, and secure. However, the lack of standardized practices for applying, explaining, and evaluating trustworthiness in AI for healthcare makes this very challenging [220]. In this section, we discuss how we can incorporate all these qualities at each step of the pipeline.

#### 5.2.1 Data

The process of a responsible and trustworthy MLOps pipeline starts with data collection and preparation. The impact of biased or polluted data propagates through all the subsequent steps of the pipeline [82]. This can be even more important and challenging in the healthcare domain due to the privacy and sensitivity of the data [21].
If compromised, this information can be tampered with or misused in various ways (e.g., identity theft, information sold to a third party) and can introduce bias into the healthcare system. Such challenges can also cause economic harm (such as job loss), psychological harm (e.g., causing embarrassment due to a medical issue), and social isolation (e.g., due to a serious illness such as HIV) [187, 4]. It can also impact ML model performance and trustworthiness [50].

#### 5.2.1.1 Data collection

In healthcare, data can be acquired through multiple sources [257], which increases the chance of the data being polluted by bias. Bias can concern, for example, race [284], gender, sexual orientation, gender identity, and disability. Bias in healthcare data can be mitigated by increasing diversity in the data, e.g., by including underrepresented minorities (URMs), which can lead to better outcomes [169]. Debiasing during data collection can include:

1. **Identifying & acknowledging potential real-world biases:** Bias in healthcare is introduced long before the data collection stage. Although increasingly less common in many countries28, bias can still occur in medical school admissions, job interviews, patient care, disease identification, research samples, and case studies. Such biases lead to the dominance of people from certain communities [169] or to in-group vs. out-group bias [91], which can result in stereotyped and biased data generation and hence biased data collection. Bias can be unconscious or conscious [169, 79]. Unconscious bias stems from implicit or unintentional associations outside conscious awareness, resulting from stereotypical perceptions and experiences. On the other hand, conscious bias is explicit and intentional and has resulted in abuse and criminal acts in healthcare; e.g., the Tuskegee study of untreated syphilis in Black men demonstrated intentional racism [80]. Both conscious and unconscious biases damage the validity of the data. Since conscious bias is relatively more visible, it is openly discouraged not only in healthcare but in all areas of society. Unconscious bias, however, is more subtle and not as easy to identify; in most cases, it is not even known to the person holding it. Different surveys, tests, and studies have found the following types of biases (conscious or unconscious) to be common in healthcare [169]:
   * **Racial bias:** e.g., Black, Hispanic, and Native American physicians are underrepresented [197]. According to one study, white males from the upper classes are preferred by admission committees [42] (although some other sources suggest the opposite28). Footnote 28: [https://applymd.utoronto.ca/admission-stats](https://applymd.utoronto.ca/admission-stats)
   * **Gender bias:** e.g., professional women in healthcare are less likely to be invited to give talks [177] or to be introduced using professional titles [77], and more likely to experience harassment or exclusion, to receive insufficient support at work, to face negative comparisons with male colleagues, and to be perceived as weak and less competitive [150, 252].
   * **Gender minority bias:** e.g., LGBTQ people receive lower quality healthcare [226] and face challenges in getting jobs in healthcare [232].
   * **Disability bias:** e.g., people with disabilities receive limited accessibility support across facilities and have to work harder to feel validated or recognized [175].

Various tests identify the existence of unconscious bias, such as the Implicit Association Test (IAT), and have been reported to be useful.
For example, Race IAT results detected unintentional bias in 75% of the population taking the test [25]. While debate continues regarding the degree of usefulness of these tests [34], they may still capture some subtle human behaviours. Some other assessment tools (e.g., the Diversity Engagement Survey (DES) [203]) have also been built to successfully measure inclusion and diversity in medical institutes. According to Marcelin et al. [169], the following measures can help in reducing unintentional bias:

* Using the IAT to identify potential biases in admissions or hiring committee members in advance.
* Promoting equity, diversity, inclusion, and accessibility (EDIA) in teams, and including more people from underrepresented minorities (URMs) in the healthcare profession, especially on admissions and hiring committees.
* Conducting and analyzing surveys to keep track of the challenges faced by URM individuals due to biased perceptions of them.
* Training to highlight the existence of bias and the need for its mitigation.
* Self-monitoring of bias, as another way to incorporate inclusion and diversity.

2. **Debiasing during data collection and annotation:** In addition to human factors, we can take steps to improve the data collection process itself. In this regard, the following measures can be taken [156]:
   1. **Investigating exclusion criteria:** In dataset creation, an important step is to carefully investigate which patients are included in the dataset. An exclusion criterion in dataset creation may be conscious and clinically motivated, but there are many unintentional exclusion criteria that are not very visible and that enforce biases. For instance, a dataset gathered in a research hospital that does not routinely provide standard diagnostic and treatment services, and that selects patients only because they have an illness being studied by its institutes, will contain a different type of patient than clinical hospitals that do not have these limitations [242]. Alternatively, whether the service delivered to the patient is free or covered by insurance will change the distribution of the patients and inject bias into the resulting AI model [241].
   2. **Annotation with explanation:** Having human annotators add a justification for the chosen label not only helps them identify their own unconscious biases but can also help in setting standards for unbiased annotations and in avoiding automatic associations and stereotyping (e.g., the high prevalence of HIV in gay men led to underdiagnosis of this disease in women and children [169]). Moreover, these explanations can be a good resource for training explainable AI models [277].
   3. **Data provenance:** This involves tracking data lineage through the data source, dependencies, and data collection process. Healthcare data can come from multiple sources, which increases the chances of it being biased [45]. Data provenance improves data quality, integrity, auditability, and transparency [283]. Different tools for data provenance are available, including _Fast Healthcare Interoperability Resources (FHIR)_ [233] and _Atmolytics_ [283], among others [171].
   4. **Data security & privacy during data collection:** Smart healthcare technologies have become common practice [45].
A wide variety of smart devices is available, including wearable de vices (e.g., smartwatches, skin-based sensors), body area networks (e.g., EEG sensors, blood pressure sensors), tele-healthcare (e.g., tele-monitoring, tele-treatment), digital healthcare systems (e.g., electronic health records (EHR), electronic medical records (EMR)), and health analytics (e.g., medical big-data). While the digitization of healthcare has improved access to medical facilities, it has increased the risk of data leakage and malicious attacks. Extra care should be taken while designing an MLOps pipeline to avoid privacy and security risks, as it can lead to serious life-threatening consequences. Other issues include the number of people involved in using the data and proper storage for high volumes of data. Chaudhry et al. [45] proposed an AI-based framework using 6G-networks for secure data exchange in digital healthcare devices. In the past decade, the blockchain has also emerged as a way of ensuring data privacy and security. Blockchain is a distributed database with unique characteristics such as immutability, decentralization, and transparency. This is especially relevant in healthcare because of security and privacy issues [101; 286; 190]. Using blockchain can help in more efficient and secure management of patient's health records, transparency, identification of false content, patient monitoring, and maintaining financial statements [101]. 5. **Data-sheet:** Often, creating a dataset that represents the full diversity of a population is not feasible, especially for very multicultural societies. Additionally, the prevalence of diseases among different sub-populations may be different [242]. If it is not possible to build an ideal dataset with the above specifications, the data needs to be delivered by a data-sheet. The data-sheet is meta-data that helps to analyze and specify the characteristics of the data, clearly explain exclusion and inclusion criteria, detail demographic features of the patients, and statistics of the data distribution over sub-populations, labels and features. #### 5.2.1.2 Data pre-processing 1. **Data quality assurance:** Sendak et al. [238] argued that clinical researchers choose data for research very carefully but the machine learning community in healthcare does not follow this practice. To overcome this gap, they suggest that data points are identified by the clinicians and extracted into a project-specific data store. After this, a three-step framework is applied: (1) use different measures for data pre-processing to ensure the correctness of all data elements (e.g, converting each lab measurement to the same unit), (2) ensure completeness, conformance, plausibility, and possible data shifts, and (3) adjudicate the data with the clinicians. 2. **Data anonymization:** Due to the sensitivity of healthcare data preparation, data anonymization should minimize the chances of it being de-anonymized. Olatunji et al. [196] provide a detailed overview of data anonymization models and techniques in healthcare such as k-anonymity, k-map, l-diversity, t-closeness, \(\delta\)-disclosure privacy, \(\beta\)-likeness, \(\delta\)-presence, and (\(\epsilon\), \(\delta\))-differential privacy. 
To avoid data leakage, many tools for data anonymization and its evaluation tools [268] such as Section-Graph [116], ARX- tool for anonymizing biomedical data [212], Amnesia29[254], PySyft [230], Synthea [270] and Anonimatron30 (open-source data anonymization tool written in Java) can be incorporated in the MLHOps pipeline. Footnote 29: [https://www.openaire.eu/item/amnesia-data-anonymization-made-easy](https://www.openaire.eu/item/amnesia-data-anonymization-made-easy) 3. **Removing subgroups indicators**. Changing the race of the patients can have a dramatic impact on the outcome of an algorithm that is designed to fill a prompt [295]. Therefore, the existence of race attributes in the text can decrease the fairness of the model dramatically. In some specific problems, removing subgroup indicators such as the sex of a job candidate from their application has shown to have minimal influence on classifier accuracy while improving the fairness [64]. This method is applicable mostly in text-based data where sensitive attributes are easily removable. As a preprocessing step, one can estimate the effect of keeping or removing such sensitive attributes on the overall accuracy and fairness of a developed model. At the same time, it is not always possible to remove the sensitive attributes from the data. For example, AI models can predict patient race from medical images, but it is not yet clear _how_ they can do it [276]. In one study [276], researchers did not provide the patient race during model training, but they also could not find a particular patch or region in the data for which AI failed to detect race by removing that part. 4. **Differential privacy:** Differential privacy [61] aims to provide information about inherent groups while withholding the information about the individuals. Many algorithms and tools have been developed for this, including CapC[53] and PySyft [230]. #### 5.2.2 Methodology The following sections overview the steps to put these concepts into practice. ##### 5.2.2.1 Algorithmic fairness Algorithmic fairness [183, 282, 83] attempts to ensure the unbiased output across the available classes. Here, we discuss how we can overcome this challenge at different stages of model training [183, 282]. 1. **Pre-processing** * _Choice of sampling & data augmentation:_ Making sure that the dataset is balanced (having approximately an equal number of instances from each class) and all the classes get equal representation in the dataset using simple under- or over-sampling methods [282]. This can also be done by data augmentation [180, 81] to improve the counterfactual fairness by counterfactual text generation and using it to augment data. Augmentation methods include _Synthetic Minority Oversampling Technique_ (SMOTE) [46] and Adaptive Synthetic Sampling (ADASYN) [107]. Since synthetic samples may not be universally beneficial for the healthcare domain, acquiring more data and undersampling may be the best strategy [282]. * _Causal fairness using data pre-processing:_ Causal fairness is achieved by reducing the impact of protected or sensitive attributes (e.g., race and gender) on predicted variables and different methods have been developed to accomplish this [83, 281]. Kamiran _et al._[121] proposed "massaging the data" before using traditional classification algorithms. 
* _Re-weighing:_ In a pre-processing approach, one may re-weight the training dataset samples, remove features with high correlation to sensitive attributes as well as the sensitive attribute itself [122], or learn representations that are relatively invariant to the sensitive attribute [162]. One might also adjust the representation rates of protected groups to achieve target fairness metrics [44], or use optimization to learn a data transformation that reduces discrimination [40].

2. **In-processing**
   * _Adversarial learning:_ It is also possible to enforce fairness during model training, using adversarial debiasing [291, 221, 274]. Adversarial learning refers to methods designed to intentionally confound ML models during training, through deceptive or misleading inputs, to make those models more robust. This technique has been used in healthcare to create robust models [125] and for bias mitigation, by intentionally inputting biased examples [144, 204].
   * _Prejudice remover:_ Another important aspect is prejudice injected into the features [123]. Prejudice can be (a) _direct prejudice_: using a protected attribute as a prediction variable; (b) _indirect prejudice_: statistical dependence between protected attributes and prediction variables; and (c) _latent prejudice_: statistical dependence between protected attributes and non-protected attributes. Kamishima _et al._ [123] proposed a method to remove prejudice using regularization. Similarly, Grgic _et al._ [94] introduced a method using constraints on classifier optimization objectives to remove prejudice.
   * _Enforcing fairness in model training:_ Fairness can also be enforced by making changes to the model through constrained optimization [159], modifying loss functions to penalize deviation of subpopulations from the general population [205], regularizing the loss function to minimize mutual information between the feature embedding and the bias [128], or adding a regularizer to identify and treat latent discriminating features [123].
   * _Up-weighing:_ It is possible to improve the outcome for the worst-case group by up-weighting the groups with the largest loss [292, 231, 173]. However, all these methods require knowing each instance's membership in the sensitive groups. There are also group-unaware methods that weight each sample with an adversary that tries to maximize the weighted loss [136], or that train an additional classifier to up-weight samples classified incorrectly in the previous training step [154].

3. **Post-processing:** Post-processing fairness mitigation approaches may target post-hoc calibration of model predictions. This approach has shown impact in bias mitigation in both non-healthcare [103, 210] and healthcare [129] applications.

There are several software tools and libraries for algorithmic fairness checks, listed in [282], which can be used by developers and end users to evaluate the fairness of AI model outcomes.

#### 5.2.3 Development & evaluation

At this stage, the ML system is evaluated to ensure its trustworthiness, which includes evaluating the evaluation methods themselves [220, 17].

##### 5.2.3.1 Model interpretability & explainability

At this stage, model evaluation can be done through interpretability and explainability methods to mitigate any potential issues, such as possible anomalies in the data or the model.
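As one concrete illustration of a model-agnostic, post-hoc check that can be run at this stage, the sketch below computes permutation feature importance for a trained classifier. It is only one of the many methods named earlier (LIME, SHAP, etc.), and the dataset and feature names here are synthetic placeholders rather than a real clinical dataset.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular clinical dataset.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=4,
                           random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: how much does held-out performance drop when one
# feature's values are shuffled, breaking its relationship with the label?
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                scoring="roc_auc", random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]:>10}: "
          f"{result.importances_mean[idx]:.3f} +/- {result.importances_std[idx]:.3f}")
```

Explanations produced this way can also be compared across the sensitive subgroups discussed in Sec. 5.1.1 to check that the model does not rely on features very differently from one group to another.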
However, it should be noted that the methods that perform interpretability and explainability should themselves be evaluated carefully before relying on them, which can be done using different approaches such as human evaluation [170, 24].

## 6 Concluding remarks

Machine learning (ML) has been applied to many clinically-relevant tasks and many relevant datasets in the research domain but, to fully realize the promise of ML in healthcare, practical considerations that are not typically necessary or even common in the research community must be carefully designed and adhered to. We have provided a deep survey into a breadth of these ML considerations, including infrastructure, human resources, data sources, model deployment, monitoring and updating, bias, interpretability, privacy, and security. As there is an increasing number of AI systems being deployed into medical practice, it is important to standardize and specify engineering pipelines for medical AI development and deployment, a process we term MLHOps. To this end, we have outlined the key steps that should be put into practice by multidisciplinary teams at the cutting edge of AI in healthcare to ensure the responsible deployment of clinical AI systems.

## 7 Appendix

Table 4: List of open-source tools available on GitHub that can be used for ML system development specific to health.

| Name of tool | Description |
| --- | --- |
| MIMIC-Extract | Pipeline to transform data from MIMIC-III into DataFrames that are directly usable for ML modelling |
| Clairvoyance | End-to-end AutoML pipeline for medical time series |
| Pyhealth | A Python library for health predictive models |
| ROMOP | R package to easily interface with OMOP-formatted EHR data |
| ATLAS | Research tool to conduct scientific analyses on data available in OMOP format |
| FIDDLE | Preprocessing pipeline that transforms structured EHR data into feature vectors for clinical use cases |
| hi-ml | Toolbox for deep learning for medical imaging and Azure integration |
| MedPerf | An open benchmarking platform for medical artificial intelligence using federated evaluation |
| MONAI | AI toolkit for healthcare imaging |
| TorchXRayVision | A library of chest X-ray datasets and models |
| Leaf | Clinical data explorer |

Table 5: Key roles in an MLOps team.

| Role | Alternative titles | Description |
| --- | --- | --- |
| Domain Expert | Business Translator, Business Stakeholder, PO/Manager | An instrumental role in any phase of the MLOps process where a deeper understanding of the data and the domain is required. |
| Solution Architect | IT Architect, ML Architect | Unifies the work of data scientists, data engineers, and software developers by developing strategies for MLOps processes, defining the project lifecycle, identifying the best tools, and assembling the team of engineers and developers to work on projects. |
| Data Scientist | ML Specialist, ML Developer | A central player in any MLOps team, responsible for creating the data and ML model pipelines. The pipelines include analysing and processing the data as well as building and testing the ML models. |
| Data Engineer | DataOps Engineer, Data Analyst | Works in coordination with the product manager and domain expert to uncover insights from data through data ingestion pipelines. |
| Software Developer | Full-stack Engineer | Focuses on productionizing ML models and the supporting infrastructure based on the ML architect's blueprints, and standardizes the code for compatibility and usability. |
| DevOps Engineer | CI/CD Engineer | Facilitates access to specialized tools and high-performance computing infrastructure, enables the transition from development to deployment and monitoring, and automates the ML lifecycle. |
| ML Engineer | MLOps Engineer | Highly skilled programmers supporting the design and deployment of ML models in close collaboration with Data Scientists and DevOps Engineers. |
2302.09586
Occupant's Behavior and Emotion Based Indoor Environment's Illumination Regulation
This paper presents an efficient approach for building occupancy modeling to reduce energy consumption. In this work, a novel approach to occupancy modeling based on the posture and comfort level of the occupant is developed, and subsequently, we report a new and efficient framework for detecting posture and emotion from skeleton joints and face points data respectively obtained from the Kinect sensor. The proposed approach is tested in terms of accuracy, region of convergence, and confusion matrix using several machine learning techniques. Out of all the techniques, random forest classifier gave the maximum blind test accuracy for multi-class classification of posture detection. Deep learning is used for emotion detection using several optimizers out of which Adadelta gave the maximum blind test accuracy for multi-class classification. Along with the Kinect sensor, several other sensors such as the magnetic door sensor, pyroelectric sensors, and illumination sensors are connected through a wireless network using Raspberry Pi Zero W. Thus creating an unmanned technique for illumination regulation.
Shreya Das
2023-02-19T14:39:16Z
http://arxiv.org/abs/2302.09586v2
# Occupant's Behavior and Emotion Based Indoor Environment's Illumination Regulation

###### Abstract

This paper presents an efficient approach for building occupancy modeling to reduce energy consumption. In this work, a novel approach to occupancy modeling based on the posture and comfort level of the occupant is developed, and subsequently, we report a new and efficient framework for detecting posture and emotion from skeleton joints and face points data respectively obtained from the Kinect sensor. The proposed approach is tested in terms of accuracy, region of convergence, and confusion matrix using several machine learning techniques. Out of all the techniques, random forest classifier gave the maximum blind test accuracy for multi-class classification of posture detection. Deep learning is used for emotion detection using several optimizers out of which Adadelta gave the maximum blind test accuracy for multi-class classification. Along with the Kinect sensor, several other sensors such as the magnetic door sensor, pyroelectric sensors, and illumination sensors are connected through a wireless network using Raspberry Pi Zero W. Thus creating an unmanned technique for illumination regulation.

## I Introduction

The current scenario of total energy consumption in the world is dominated by building energy utilization: buildings account for 40% of the world's total energy usage [1]. In this age of energy scarcity, depleting fossil fuels make it increasingly difficult to meet the energy needs of the world's 7.7 billion people. Saving electricity is the most inexpensive solution to energy shortages, so constructing energy-efficient buildings has become a major priority. The most crucial variable for this, which needs constant monitoring, is occupancy [2]. Occupancy refers to the number of people present inside a space, which affects, among other things, the cooling or heating load, ventilation load, and illumination load of that space. Several research works on efficient occupancy modeling started with the collection of huge amounts of sensor data. The Mitsubishi Electric Research Labs collected data for one year using over 200 motion sensors, providing over 30 million raw data points on the occupancy of two floors [3] and thus a schedule of occupancy for a year. However, occupancy modeling based on data collected over a fixed schedule often becomes outdated as the pattern of occupancy changes, and such a technique makes no provision for early and late occupants. So, to achieve energy efficiency, real-time sensing and estimation-based control are required for all the electrical appliances present inside a building [4]. Also, the use of only motion sensors gives a vague idea of the number of people present inside a room, so several other sensors are used in the literature, including thermal sensors [5], CO2 sensors [6], cameras [4], pyroelectric infrared (PIR) sensors [7], and many more [8, 9, 10, 11]. Unfortunately, almost every sensor comes with a drawback. The estimation of the number of people in a crowd is not possible using thermal sensors alone. The response of CO2 sensors is not real-time, and their accuracy is also reduced in properly ventilated rooms. The use of camera images requires expensive computers for image processing, which is also less accurate for moving objects. The use of a camera is often ruled out because it invades privacy.
A PIR sensor is a binary sensor to detect motion and the use of only a PIR sensor does not provide information on the number of people present inside a room. Thus it is better to use a network of several types of sensors to obtain a better view of the number of occupants inside a room. Here, we have used real-time data obtained from various sensors enabled by wireless sensor network (WSN) [12, 13] established with the help of Raspberry Pi Zero W. Instead of using camera images we used Kinect sensor XBOX 360 that collected Cartesian coordinate data of skeleton joints and face points in a human body. This reduces the cost of image processing and also nullifies the possibility of phantom detection which is a major disadvantage of a low-cost thermal sensor. Recording just the coordinate data also does not risk the privacy of the occupants. Along with the Kinect sensor, other sensors used were door sensors, PIR sensors, and illumination sensors. Occupancy modeling based on the number of people present inside the room is commonly found in the literature [14, 15] but occupancy modeling based on the posture and comfort level of the occupant is by far new to the best of my knowledge and does not exist in literature. In this paper, we have classified posture into three classes'standing','sitting', and 'lying down'. Emotion is also classified into three classes 'comfortable', 'neutral', and 'uncomfortable'. Illumination of the room is regulated based on these two factors. We also present a new and efficient framework for detecting posture and emotion. This includes extracting relevant features required for machine learning. In this paper, we provide rigorous testing of posture and emotion prediction and compare the results of all the existing machine learning techniques based on both cross-validation and blind test accuracy, confusion matrix, and region of convergence (ROC) plots. Both \(2\)-class and multi-class classification is done and results are compared. The random forest classifier gave the maximum blind test accuracy of 98.27% for multi-class classification of posture detection. We have used deep learning for emotion detection using different optimizers out of which maximum blind test accuracy was obtained using the Adadelta optimizer which is \(94.13\%\) for multi-class classification of emotion. Finally, we used the multiclass classification technique with RFC for posture detection and Adadelta optimizer for emotion detection to regulate illumination. The Cartesian coordinate data obtained from the Kinect sensor is not used directly to predict posture or emotion. The data for both skeletal joints and face points go through some mathematical evaluations that include essential features selection is stated in the next section of mathematical background. The proposed scheme for illumination regulation along with the different types of sensors used for data acquisition is stated in section III. The posture and emotion prediction is done using several machine learning techniques whose accuracy, confusion matrix, and region of convergence (ROC) are compared in section IV. Also, the algorithm followed to regulate the light intensity in the room is provided in the form of a flowchart in this section. Finally, the paper ends with a brief conclusion and the future scope of this work. ## II Mathematical Background The Cartesian coordinate position of skeleton joints and face points obtained from the Kinect sensors are not used directly to determine the posture and facial expression. 
Only essential features are selected from the Cartesian coordinate data positions obtained from the sensor. This also reduces data dimension and thus reduces computation complexity. The Cartesian coordinate data go through the following mathematical evaluations before they are used for machine learning. ### _Skeleton joint data collection_ Kinect XBOX 360 is used to predict the posture of an occupant by determining the 3-dimensional Cartesian coordinate position of 20 skeleton joints of the occupant [12, 13] as shown in Figure 1 with the camera center as the origin. To classify human posture into three classes of standing, sitting, and lying down, the third-degree joints i.e. the hands and feet as shown in the Figure 1 are considered to be redundant features and thus they are eliminated. To make the data independent of camera position in the room the origin is shifted from the camera center to the spine joint. Considering \((X_{i},Y_{i},Z_{i})\) as the coordinate position of \(i\)-th joint with the camera as origin, \((X_{s},Y_{s},Z_{s})\) as the position coordinates of spine joint with respect to camera center as the origin. The convention that the Kinect sensor follows for the \(X\), \(Y\) and \(Z\) axes are shown in Figure 2. Then, \[(X_{is},Y_{is},Z_{is})=(X_{i},Y_{i},Z_{i})-(X_{s},Y_{s},Z_{s}), \tag{1}\] where \((X_{is},Y_{is},Z_{is})\) represents the coordinates of \(i\)-th joint with the spine joint as the origin. The data collection for posture is done using the following logic. First, the Cartesian coordinate system is converted into a spherical coordinate system such that, \[r=\sqrt{X_{is}^{2}+Y_{is}^{2}+Z_{is}^{2}}, \tag{2}\] \[\theta=\tan^{-1}\frac{\sqrt{X_{is}^{2}+Z_{is}^{2}}}{Y_{is}}, \tag{3}\] and \[\phi=\tan^{-1}\frac{X_{is}}{Z_{is}}, \tag{4}\] where \(r\in\mathbb{R}\) is the radial distance between the spine and the joints, \(\theta\in\mathbb{R}\) is the angle made by the joints with \(Y_{is}\) axis and \(\phi\in\mathbb{R}\) is the angle made by the joints with the \(Z_{is}\) axis. The value of \(r\) is constant for the first degree joints like the knees, the shoulder center etc. Whereas, for second degree joints like ankles, elbows etc., the considered posture of standing, sitting and lying down is independent of \(r\). Thus, the value of \(r\) is neglected as it does not provide any information about posture. Finally, the angle of turn with respect to the \(Y\)-axis, \(\beta\in\mathbb{R}\) as shown in Figure 3 is calculated. In Figure 3, \(R\ddot{H}\) represents the vector from the spine to the right-hip. \(L\ddot{H}\) represents the vector from spine to the left-hip. Angle of turn along \(X\)-axis and \(Z\)-axis is not needed as human body cannot physically turn or tilt along these two Fig. 1: 20 skeletal joints XBOX 360 can detect. Fig. 2: Kinect XBOX 360. axes. Human body can bend, folding joints, along these axes and bending is considered by \(\theta\) and \(\phi\). Thereby, we are considering a \(31\)-dimensional feature set vector of \(15\) joints, such that, \(\vec{F_{Joint}}=(\theta_{1},\phi_{1},\theta_{2},\phi_{2},...,\theta_{15},\phi_{15 },\beta)\in\mathbb{R}^{31}\) for predicting posture. Human posture of the occupant is classified into three class i.e. standing, sitting and lying down. The data labeling for posture determination is done with the logic as shown in Figure 4. * 'From Figure 4, if \(\alpha>140^{o}\), \(\delta>140^{o}\) and the torso is nearly aligned along \(Y\)-axis then the person is standing. 
* If \(\alpha<140^{o}\) and \(\delta<140^{o}\), the person is sitting. * If the torso is on the \(XZ\)-plane then the person is lying down. The variables \(\alpha\) and \(\delta\) such that \(\alpha\in\mathbb{R}\) and \(\delta\in\mathbb{R}\) are calculated as, \[\alpha=\cos^{-1}\frac{\vec{K}\vec{H}\vec{K}A}{|\vec{K}\vec{H}||\vec{K^{2}}\vec {A}|}, \tag{5}\] where \[\vec{K}\vec{H}=(\vec{K}\vec{H}-\vec{K}\vec{K})\text{ or }(\vec{L}\vec{H}-\vec{L} \vec{K}), \tag{6}\] \[\vec{K}\vec{A}=(\vec{K}\vec{A}-\vec{K}\vec{K})\text{ or }(\vec{L}\vec{A}-\vec{L} \vec{K}), \tag{7}\] and \[\delta=\tan^{-1}\frac{\sqrt{X_{Kis}^{2}+Z_{Kis}^{2}}}{Y_{Kis}}, \tag{8}\] where \((X_{Kis},Y_{Kis},Z_{Kis})\) represents the knee coordinate with respect to the spine as origin. Thus, \(\delta\) is the spherical angle, \(\theta\) constructed by and with the \(Y_{is}\) axis. The vectors used in the above equations are: \[\vec{R}\vec{H}=\text{vector to right hip joint from spine joint,}\] \[\vec{L}\vec{H}=\text{vector to left-hip joint from spine joint,}\] \[\vec{R}\vec{K}=\text{vector to right-knee joint from spine joint,}\] \[\vec{L}\vec{K}=\text{vector to left-knee joint from spine joint,}\] \[\vec{R}\vec{A}=\text{vector to right-ankle joint from spine joint,}\] \[\vec{L}\vec{A}=\text{vector to left-ankle joint from spine joint.}\] ### _Face point data collection_ Kinect XBOX 360 can determine the 3-dimensional Cartesian coordinate position of \(120\) face points of human face [14, 15] with respect to the camera center as the origin, few of which is shown in Figure 5. Cartesian coordinate position of these \(120\) face points with the nose tip as the origin is collected for our study. Assuming \((X_{i},Y_{i},Z_{i})\) represent the coordinates of \(i\)-th face point with the camera as the origin and \((X_{nt},Y_{nt},Z_{nt})\), the position coordinates of the nose-tip face point with respect to the camera center as the origin. Then, \[(X_{int},Y_{int},Z_{int})=(X_{i},Y_{i},Z_{i})-(X_{nt},Y_{nt},Z_{nt}), \tag{9}\] where \((X_{int},Y_{int},Z_{int})\) is the coordinates of \(i\)-th face point with nose-tip face point as origin. Out of these \(120\) face points we have selected only \(32\) face points that are highly responsible for giving an expression on human face. These \(32\) face points include points in the eye-brows, mouth, cheeks and side of the nose. Cartesian coordinate position of these \(32\) face points with the nose tip as the origin are separated. Thus, data set of \(\vec{F}_{Face}=(X_{1nt},Y_{1nt},Z_{1nt},X_{2nt},Y_{2nt},Z_{2nt},\dots X_{32nt},Y_ {32nt},Z32nt)\in\mathbb{R}^{96}\) is obtained from the collected data set of \((X_{1nt},Y_{1nt},Z_{1nt},X_{2nt},Y_{2nt},Z_{2nt},\dots X_{120nt},Y_{120nt},Z_{120 nt})\in\mathbb{R}^{360}\). The data dimension is further reduced by considering the length of these face points from the nose tip and the length between the corresponding the face points on both side of the face as shown in Figure 6. The Euclidean distance from the nose tip face point to the selected face points is evaluated as, \[ED_{np}=\sqrt{X_{in}^{2}+Y_{in}^{2}+Z_{in}^{2}}, \tag{10}\] where \(ED_{np}\in\mathbb{R}^{32}\) is obtained for 32 face points. 
The Euclidean distance between corresponding the face points on both side of the face is evaluated as, \[ED_{h}=\sqrt{(X_{Rin}-X_{Lin})^{2}+(Y_{Rin}-Y_{Lin})^{2}+(Z_{Rin}-Z_{Lin})^{2}}, \tag{11}\] where \((X_{Rin},Y_{Rin},Z_{Rin})\) is the Cartesian coordinate of the i-th face point on the right side of the face with the nose tip as the origin and \((X_{Lin},Y_{Lin},Z_{Lin})\) is the Cartesian coordinate of the i-th face point on the left side of the face with the nose tip as the origin. \(ED_{h}\) is obtained for 28 face points. The face points on the side of the nose is not considered here as those points have no horizontal movement while Fig. 4: Posture for (a) standing, (b) sitting and (c) lying down. Fig. 3: Vector diagram of joints with respect to the spine as origin. delivering an expression. So, \(ED_{h}\in\mathbb{R}^{14}\). All these \(ED_{np}\) and \(ED_{h}\) makes \((32+14)=46\) features in total that are used to perform machine learning techniques to predict emotion. Thereby, the final feature space, \(\phi_{fn}\in\mathbb{R}^{46}\) is obtained for emotion detection. Data labeling is done manually for emotion determination. ## III Materials and Methods ### _Proposed Scheme_ The whole setup is employed in a prototype test bed room as shown in Figure 7. The room has a door having a door sensor and two PIR sensors, a window, a bed, a table above which the Kinect sensor is placed and three tube lights for illumination regulation. The proposed scheme that is used to regulate light intensity in the test bed room as shown in Figure 7 depending on the posture and comfort level of the occupant inside the room is divided into four modules as shown in Figure 8. Each module in Figure 8 performs a different task as explained below, * Module I consists of sensors used in the system. * Module II represents the determination of skeleton joints and face points from Kinect XBOX 360. * Module III refers to the posture and emotion classification of the occupant. * Module IV regulates tube lights' lumen as per the comfort level of the occupant. ### _Data acquisition_ Data acquisition is performed using different sensors that include the magnetic door sensor, PIR sensor, BH1750 light intensity sensor, and Kinect XBOX 360. #### Iii-B1 Magnetic door sensor A binary reed switch magnetic door sensor is used to check the door status i.e. the door is open or closed. This status is sent over wirelessly using Raspberry Pi Zero W, to the computer where the final coding for illumination regulation is done [16, 17]. #### Iii-B2 PIR sensor The binary PIR sensor detects motion as a human passes by the sensor. Two PIR sensors are fitted near the door. One of them is fitted inside and the other is fitted outside the room. When someone enters the room the PIR sensor fitted outside the room detects motion first and then the PIR sensor fitted inside the room detects motion. The reverse happens when someone leaves the room. Using this technique we can count the number of people entering and Fig. 5: A few face points as detected by Kinect XBOX 360, (b) Mesh mask of triangles whose vertices represents face points detected by Kinect XBOX 360 and (c) 3-D scatter plot of all the face points in Jupyter Notebook with scale: \(X=x\times 5\), \(Y=y\), \(Z=z\times 5\) (in meter). Fig. 8: System schematic diagram. Fig. 6: Lines joining selected face points to the nose tip face point and (b) Lines joining selected face points to their corresponding face point on both side of face. Fig. 7: Prototype test bed room. 
leaving the room thus we can also count the occupancy inside the room. This data is sent over to a computer wirelessly via socket communication using Raspberry Pi Zero W [18, 19]. #### Iii-B3 Bh1750 BH1750 is a digital ambient light sensor IC that uses I2C bus interface [20, 21]. Each tube light is fitted with a BH1750 sensor. The status of light intensity inside the room is obtained in real-time from BH1750 using Raspberry Pi Zero W which is again communicated to the computer. #### Iii-B4 Kinect sensor (XBOX 360) The Kinect sensor detects human presence and gathers coordinate data of skeleton joints and face points. The coordinate data after going through some evaluations as discussed in the mathematical background section is collected to predict the posture i.e.'standing','sitting' or 'lying down' (as shown in Figures 9 and 10), and comfort level i.e. 'comfortable', 'neutral' or 'uncomfortable'. Data set is obtained in the computer from Kinect sensor using visual studio (WPF application C#) as shown in Figure 10. ### _Illumination regulation_ Three tube lights are installed inside the test bed room whose intensity is to be regulated. Each of the tube lights has remote control with it which is used to regulate the light intensity in the room. A small circuit is built that is connected to the remote buttons as shown in Figure 11. The terminals of 'power on', 'power off', 'intensity +' and 'intensity -' buttons of tube lights' remote controls are soldiered to the pins of switching IC 4066 as shown in Figure 11. Control signal is sent to the pins of 4066 from Raspberry Pi to short the desired button terminals on the remote control of the tube light whose illumination is to be regulated. ## IV Results ### _Posture Determination_ Skeleton joint data was obtained from as many as 27 people using visual studio 2019 (WPF application C#). Several machine learning techniques are used to predict posture. The machine learning techniques used are Support Vector Machine (SVM), Logistic Regression (LR), Classification and Regression Trees (CART), K-Nearest Neighbor (KNN), Random Forest Classifier (RFC), Gaussian Naive Bayes (Gaussian NB) and Linear Discriminant Analysis (LDA). Table I, II and III shows the confusion matrix for 2 class classification of standing versus rest, sitting versus rest and lying down versus rest respectively. Figure 12 and 13 shows the cross-validation an blind test accuracy, respectively obtained for 2 class classification of posture. The corresponding ROC plots are provided in Figure 14. In case of the cross-validation and blind test accuracy plot for 2 class classification using several machine learning techniques. After performing the LDA which is a data dimension reduction technique, the SVM is performed to check accuracy. The RFC and the CART shows \(100\%\) cross-validation accuracy and the SVM shows maximum blind test accuracy for 2 class classification. The SVM shows maximum area under the ROC (Receiver Operator Characteristics) curves as shown in Figure 14. The random forest classifier shows the maximum accuracy in both blind test and cross-validation test as shown in Figure 15 and Table IV for multi-class classification. Confusion matrices for the RFC shown in Table IV have high diagonal element values thus showing accurate prediction of posture. Further, we used 3 layered ANN with different optimizers to check the accuracy for posture estimation for both 2 class and multi-class classification obtained for both cross-validation and blind test. 
The confusion matrices obtained from 2 class classification for cross-validation and blind test are shown in the Tables V, VI, and VII. The cross-validation and blind test accuracy of obtained from all the optimizers are shown in Figures 16 and 17 respectively. The corresponding ROC plots and their accuracies are shown in Figures 18, 19, and 20. tion in terms of accuracy and confusion matrix which are listed in Table VIII. Here we see that maximum cross-validation accuracy of 100% is obtained from almost all the optimizers except Adagrad. Whereas, maximum blind test accuracy of 96.62% is obtained from Adamax optimizer for multi-class classification of posture. On using 2-class classification, to know the exact posture of the occupant, it is required to check for three times (i.e. for'standing versus rest','sitting versus rest' and 'lying down versus rest'). This is more time consuming and so we finally used multiclass classification for posture determination. The maximum blind test accuracy obtained from the random forest classifier is 98.27% and is used for illumination regulation. ### _Emotion recognition_ Face point data are obtained for \(31\) people using Visual Studio 2019 (WPF application C#). A neural network training \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Optimizer** & **Cross-Validation** & **Blind Test** \\ \hline \multirow{2}{*}{Adamak} & \(\begin{bmatrix}78&0\\ 0&152\end{bmatrix}\) & \(\begin{bmatrix}59&6\\ 6&159\end{bmatrix}\) \\ \hline \multirow{2}{*}{Adadelta} & \(\begin{bmatrix}73&0\\ 0&157\end{bmatrix}\) & \(\begin{bmatrix}81&3\\ 3&143\end{bmatrix}\) \\ \hline \multirow{2}{*}{Adam} & \(\begin{bmatrix}73&0\\ 0&152\end{bmatrix}\) & \(\begin{bmatrix}79&5\\ 6&140\end{bmatrix}\) \\ \hline \multirow{2}{*}{Adagrad} & \(\begin{bmatrix}73&0\\ 0&157\end{bmatrix}\) & \(\begin{bmatrix}80&4\\ 4&142\end{bmatrix}\) \\ \hline \multirow{2}{*}{Nadam} & \(\begin{bmatrix}73&0\\ 0&157\end{bmatrix}\) & \(\begin{bmatrix}70&3\\ 2&155\end{bmatrix}\) \\ \hline \multirow{2}{*}{SGD} & \(\begin{bmatrix}73&0\\ 0&157\end{bmatrix}\) & \(\begin{bmatrix}72&1\\ 0&157\end{bmatrix}\) \\ \hline \multirow{2}{*}{RMSprop} & \(\begin{bmatrix}73&0\\ 0&157\end{bmatrix}\) & \(\begin{bmatrix}72&1\\ 1&156\end{bmatrix}\) \\ \hline \end{tabular} \end{table} TABLE V: Cross-Validation and blind test confusion matrix for 2 class classification of posture, standing versus rest using different classifiers \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Optimizer** & **Cross-Validation** & **Blind Test** \\ \hline \multirow{2}{*}{Adamak} & \(\begin{bmatrix}78&0\\ 0&152\end{bmatrix}\) & \(\begin{bmatrix}59&6\\ 6&159\end{bmatrix}\) \\ \hline \multirow{2}{*}{Adadelta} & \(\begin{bmatrix}82&2\\ 0&146\end{bmatrix}\) & \(\begin{bmatrix}62&4\\ 0&164\end{bmatrix}\) \\ \hline \multirow{2}{*}{Adam} & \(\begin{bmatrix}78&0\\ 0&152\end{bmatrix}\) & \(\begin{bmatrix}57&8\\ 7&158\end{bmatrix}\) \\ \hline \multirow{2}{*}{Adagrad} & \(\begin{bmatrix}82&2\\ 0&146\end{bmatrix}\) & \(\begin{bmatrix}62&4\\ 0&164\end{bmatrix}\) \\ \hline \multirow{2}{*}{Adagrad} & \(\begin{bmatrix}82&2\\ 0&146\end{bmatrix}\) & \(\begin{bmatrix}62&4\\ 0&164\end{bmatrix}\) \\ \hline \multirow{2}{*}{Adagrad} & \(\begin{bmatrix}82&2\\ 0&146\end{bmatrix}\) & \(\begin{bmatrix}62&4\\ 0&164\end{bmatrix}\) \\ \hline \multirow{2}{*}{Nadam} & \(\begin{bmatrix}82&2\\ 0&146\end{bmatrix}\) & \(\begin{bmatrix}62&4\\ 0&164\end{bmatrix}\) \\ \hline \multirow{2}{*}{Nadam} & \(\begin{bmatrix}82&2\\ 0&146\end{bmatrix}\) & \(\begin{bmatrix}62&4\\ 0&164\end{bmatrix}\) 
\\ \hline \multirow{2}{*}{SGD} & \(\begin{bmatrix}84&0\\ 0&146\end{bmatrix}\) & \(\begin{bmatrix}61&5\\ 0&164\end{bmatrix}\) \\ \hline \multirow{2}{*}{RMSprop} & \(\begin{bmatrix}84&0\\ 0&146\end{bmatrix}\) & \(\begin{bmatrix}61&5\\ 3&161\end{bmatrix}\) \\ \hline \end{tabular} \end{table} TABLE VII: Cross-Validation and blind test confusion matrix for 2 class classification of posture, sitting versus rest using different classifiers \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **Classifier** & \multicolumn{2}{c|}{**Cross-Validation**} & \multicolumn{3}{c|}{**Blind Test**} \\ \hline \multirow{2}{*}{ \begin{tabular}{c} **Accuracy** \\ **\%** \\ \end{tabular} } & \multicolumn{2}{c|}{**Confusion**} & \multicolumn{2}{c|}{**Accuracy**} & \multicolumn{2}{c|}{**Confusion**} \\ \cline{3-6} & \(\begin{bmatrix}76&0&0\\ 1&82&0\\ 0&72\end{bmatrix}\) & 98.27 & \(\begin{bmatrix}73&3&0\\ 2&77&4\\ 0&0&72\end{bmatrix}\) \\ \hline \multirow{2}{*}{KNN} & \multirow{2}{*}{92.24} & \(\begin{bmatrix}72&4&0\\ 6&77&0\\ 4&5&63\end{bmatrix}\) & 84.85 & \(\begin{bmatrix}70&6&0\\ 10&73&0\\ 10&7&55\end{bmatrix}\) \\ \hline \multirow{2}{*}{CART} & \multirow{2}{*}{99.57} & \(\begin{bmatrix}75&0&1\\ 0&82&1\\ 1&0&71\end{bmatrix}\) & 97.40 & \(\begin{bmatrix}75&1&0\\ 5&74&4\\ 1&1&70\end{bmatrix}\) \\ \hline \multirow{2}{*}{LDA} & \multirow{2}{*}{94.81} & \(\begin{bmatrix}75&1&0\\ 4&79&0\\ 6&8&58\end{bmatrix}\) & 88.74 & \(\begin{bmatrix}75&1&0\\ 4&79&0\\ 7&10&55\end{bmatrix}\) \\ \hline \multirow{2}{*}{LR} & \multirow{2}{*}{94.81} & \(\begin{bmatrix}76&0&0\\ 2&80&1\\ 3&4&65\end{bmatrix}\) & 91.77 & \(\begin{bmatrix}76&0&0\\ 3&77&3\\ 5&5&62\end{bmatrix}\) \\ \hline \multirow{2}{*}{SVM} & \multirow{2}{*}{97.40} & \(\begin{bmatrix}76&0&0\\ 1&82&0\\ 3&4&65\end{bmatrix}\) & 93.94 & \(\begin{bmatrix}76&0&0\\ 1&82&0\\ 4&5&63\end{bmatrix}\) \\ \hline \multirow{2}{*}{Gaussian NB} & \multirow{2}{*}{99.13} & \(\begin{bmatrix}74&1&1\\ 0&79&4\\ 0&0&72\end{bmatrix}\) & 97.84 & \(\begin{bmatrix}74&1&1\\ 0&77&6\\ 0&0&72\end{bmatrix}\) \\ \hline \end{tabular} \end{table} TABLE IV: Confusion matrix for multi-class classification to determine posture. Fig. 16: Cross-validation accuracy plot for 2 class classification of posture. \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Optimizer** & **Cross-Validation** & **Blind Test** \\ \hline \multirow{2}{*}{Adamak} & \(\begin{bmatrix}78&0\\ 0&152\end{bmatrix}\) & \(\begin{bmatrix}59&6\\ 6&159\end{bmatrix}\) \\ \hline \multirow{2}{*}{Adadelta} & \(\begin{bmatrix}82&2\\ 0&146\end{bmatrix}\) & \(\begin{bmatrix}62&4\\ 0&164\end{bmatrix}\) \\ \hline \multirow{2}{*}{Adam} & \(\begin{bmatrix}78&0\\ 0&152\end{bmatrix}\) & \(\begin{bmatrix}57&8\\ 7&158\end{bmatrix}\) \\ \hline \multirow{2}{*}{Adagrad} & \(\begin{bmatrix}82&2\\ 0&146\end{bmatrix}\) & \(\begin{bmatrix}62&4\\ 0&164\end{bmatrix}\) \\ \hline \multirow{2}{*}{Nadam} & \(\begin{bmatrix}82&2\\ 0&146\end{bmatrix}\) & \(\begin{bmatrix}62&4\\ 0&164\end{bmatrix}\) \\ \hline \multirow{2}{*}{SGD} & \(\begin{bmatrix}84&0\\ 0&146\end{bmatrix}\) & \(\begin{bmatrix}61&5\\ 0&164\end{bmatrix}\) \\ \hline \multirow{2}{*}{RMSprop} & \(\begin{bmatrix}84&0\\ 0&146\end{bmatrix}\) & \(\begin{bmatrix}61&5\\ 3&161\end{bmatrix}\) \\ \hline \end{tabular} \end{table} TABLE VII: Cross-Validation and blind test confusion matrix for 2 class classification of posture, sitting versus rest using different classifiers Fig. 17: Blind test accuracy plot for 2 class classification of posture. 
using a deep learning technique is carried out with different optimizers for predicting emotion, which is classified as 'comfortable', 'neutral' and 'uncomfortable'. The input layer of the neural network has \(46\) nodes, after which there are \(4\) hidden layers with a decreasing number of nodes, i.e. from \(50\) down to \(20\) in steps of \(10\). The output of the \(4\)-th hidden layer is passed through a \(20\%\) dropout and then through a \(5\)-th hidden layer with \(20\) nodes. The output of the \(5\)-th hidden layer goes through a further \(30\%\) dropout and is then passed through \(5\) hidden layers whose number of nodes first decreases and then slightly increases again, i.e. \(20\), \(10\), \(5\), \(10\), \(20\). The outputs of the \(6\)-th and the \(10\)-th hidden layers are merged, and the merged set is passed through \(2\) hidden layers with \(20\) and \(25\) nodes. The outputs of the \(4\)-th and the \(12\)-th hidden layers are merged again, and this merged set is passed through \(2\) hidden layers with \(25\) and \(7\) nodes. Finally, the output of this \(14\)-th hidden layer is fed to the output layer, which has a single node. Therefore, overall, we have \(14\) hidden layers, all of which use the rectified linear unit activation function. Both 2 class and multi-class classification are done. In 2 class classification of emotion, the area under the ROC plots varies from \(0.9\) to \(1\) for the different optimizers, as shown in Figure 22, and the corresponding confusion matrices are provided in Tables IX, X, and XI. In the case of multi-class classification, the maximum cross-validation accuracy is obtained for Adamax and the maximum blind test accuracy is obtained for the Adadelta optimizer, as shown in Table XII and Figure 25. Thus, for emotion detection we finally use Adadelta, which gives the maximum blind test accuracy of \(94.13\%\) for multi-class classification. Similarly to posture detection, we use multi-class classification because 2-class classification needs to be checked three times (i.e. 'comfortable versus rest', 'neutral versus rest' and 'uncomfortable versus rest') to detect the exact emotion, requiring more computation time.

Fig. 18: ROC plot for 2 class classification of posture; standing versus rest using (a) Adamax, (b) Adam, (c) SGD, (d) RMSprop, (e) Nadam, (f) Adagrad and (g) Adadelta optimizers.

Fig. 19: ROC plot for 2 class classification of posture; sitting versus rest using (a) Adamax, (b) Adam, (c) SGD, (d) RMSprop, (e) Nadam, (f) Adagrad and (g) Adadelta optimizers.

### _Final code of automation_

While the system runs in automation following the flowchart shown in Figure 26, multi-class classification is performed using the RFC to predict posture and the Adadelta-trained network to predict emotion. Data from the door sensor, PIR sensors and BH1750 are sent wirelessly over socket communication using Raspberry Pi Zero W.

## V Conclusion

We proposed a scheme for regulating the illumination of a room based on the posture and comfort level of the occupant. A sensor network is built that provides information on the presence of an occupant and the occupant's coordinate positions of joints and facial points; these are used to determine posture and emotion with machine learning and to correspondingly change the illumination of the room, which is itself sensed using an illumination sensor. The sensor network works wirelessly using Raspberry Pi Zero W, thus preventing unnecessary energy wastage due to needless illumination in the room.
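For reference, the branching emotion-recognition network described above can be written out with the Keras functional API roughly as follows. This is a minimal sketch following the layer sizes given in the text; the framework choice, loss, optimizer wiring and the way the single-node output is used are assumptions, not details from the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(46,))                      # 46 face-point features
x = inputs
for units in (50, 40, 30, 20):                         # hidden layers 1-4 (50 down to 20 by 10)
    x = layers.Dense(units, activation='relu')(x)
h4 = x
x = layers.Dropout(0.2)(h4)                            # 20% dropout after the 4th hidden layer
h5 = layers.Dense(20, activation='relu')(x)            # hidden layer 5
x = layers.Dropout(0.3)(h5)                            # 30% dropout after the 5th hidden layer
h6 = layers.Dense(20, activation='relu')(x)            # hidden layer 6
x = h6
for units in (10, 5, 10, 20):                          # hidden layers 7-10
    x = layers.Dense(units, activation='relu')(x)
h10 = x
x = layers.concatenate([h6, h10])                      # merge 6th and 10th hidden layers
x = layers.Dense(20, activation='relu')(x)             # hidden layer 11
h12 = layers.Dense(25, activation='relu')(x)           # hidden layer 12
x = layers.concatenate([h4, h12])                      # merge 4th and 12th hidden layers
x = layers.Dense(25, activation='relu')(x)             # hidden layer 13
h14 = layers.Dense(7, activation='relu')(x)            # hidden layer 14
outputs = layers.Dense(1)(h14)                         # single-node output, as described in the text
model = keras.Model(inputs, outputs)
# Adadelta gave the best blind-test accuracy in the paper; the loss here is an assumption.
model.compile(optimizer='adadelta', loss='mse')
```

The two concatenations are the reason a purely sequential model is not sufficient for this architecture.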
This requires accurate detection of posture and emotion due to which we created an efficient framework and performed several machine learning techniques. Out of which, maximum accuracy of \(98.27\%\) was obtained on a blind test from the random forest \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Optimizer** & **Cross-Validation** & **Blind Test** \\ \hline \multirow{2}{*}{Adamax} & \(155\) & 0 & 148 & 7 \\ & \(1\) & \(7\) & 7 & 68 \\ \hline \multirow{2}{*}{Adadelta} & \(140\) & 1 & 134 & 7 \\ & \(3\) & 86 & 15 & 74 \\ \hline \multirow{2}{*}{Adam} & \(154\) & 1 & 140 & 15 \\ & \(2\) & 73 & 8 & 67 \\ \hline \multirow{2}{*}{Adagrad} & \(140\) & 1 & 132 & 9 \\ & \(1\) & 88 & 23 & 66 \\ \hline \multirow{2}{*}{Nadam} & \(155\) & 0 & 146 & 9 \\ & \(5\) & 70 & 10 & 65 \\ \hline \multirow{2}{*}{SGD} & \(155\) & 0 & 152 & 3 \\ & \(2\) & 73 & 4 & 71 \\ \hline \multirow{2}{*}{RMSprop} & \(154\) & 1 & 146 & 9 \\ & \(1\) & 74 & 8 & 67 \\ \hline \end{tabular} \end{table} TABLE IX: Confusion matrix for 2 class classification of comfortable versus rest \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Optimizer** & \multicolumn{2}{c|}{**Cross-Validation**} & \multicolumn{2}{c|}{**Blind Test**} \\ \hline \multirow{2}{*}{Adamax} & \(100\) & \(\begin{bmatrix}73&0&0\\ 0&81&0\\ 0&0&77\end{bmatrix}\) & 96.62 & \(\begin{bmatrix}73&0&0\\ 7&74&0\\ 2&2&73\end{bmatrix}\) \\ \hline \multirow{2}{*}{Adadelta} & \(100\) & \(\begin{bmatrix}73&0&0\\ 0&81&0\\ 0&0&77\end{bmatrix}\) & 94.37 & \(\begin{bmatrix}72&1&0\\ 71&01&0\\ 1&5&71\end{bmatrix}\) \\ \hline \multirow{2}{*}{Adam} & \(100\) & \(\begin{bmatrix}73&0&0\\ 0&81&0\\ 0&1&77\end{bmatrix}\) & 91.73 & \(\begin{bmatrix}72&0&1\\ 7&1&1\\ 0&6&71\end{bmatrix}\) \\ \hline \multirow{2}{*}{Adagrad} & \multirow{2}{*}{99.35} & \(\begin{bmatrix}73&0&0\\ 0&81&0\\ 0&1&77\end{bmatrix}\) & 96.28 & \(\begin{bmatrix}70&0&3\\ 5&76&0\\ 2&2&73\end{bmatrix}\) \\ \hline \multirow{2}{*}{Nadam} & \multirow{2}{*}{100} & \(\begin{bmatrix}73&0&0\\ 0&81&0\\ 0&0&77\end{bmatrix}\) & 96.54 & \(\begin{bmatrix}73&0&0\\ 4&74&3\\ 3&2&72\end{bmatrix}\) \\ \hline \multirow{2}{*}{SGD} & \multirow{2}{*}{100} & \(\begin{bmatrix}73&0&0\\ 1&80&0\\ 0&0&77\end{bmatrix}\) & 92.77 & \(\begin{bmatrix}70&1&2\\ 6&71&4\\ 3&6&68\end{bmatrix}\) \\ \hline \end{tabular} \end{table} TABLE VIII: Accuracy and confusion matrix for multi-class classification of posture using ANN with different optimizer Fig. 21: Accuracy for multiclass classification of posture using ANN with different optimizers. \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Optimizer** & \multicolumn{2}{c|}{**Cross-Validation**} & \multicolumn{2}{c|}{**Blind Test**} \\ \hline \multirow{2}{*}{Adamax} & \(105\) & 0 & 148 & 7 \\ & \(1\) & \(74\) & 7 & 68 \\ \hline \multirow{2}{*}{Adadelta} & \(140\) & 1 & 134 & 7 \\ & \(3\) & 86 & 15 & 74 \\ \hline \multirow{2}{*}{Adam} & \(154\) & 1 & 140 & 15 \\ & \(2\) & 73 & 8 & 67 \\ \hline \multirow{2}{*}{Adagrad} & \(140\) & 1 & 132 & 9 \\ & \(1\) & 88 & 23 & 66 \\ \hline \multirow{2}{*}{Nadam} & \(155\) & 0 & 146 & 9 \\ & \(5\) & 70 & 10 & 65 \\ \hline \multirow{2}{*}{SGD} & \(155\) & 0 & 152 & 3 \\ & \(2\) & 73 & 4 & 71 \\ \hline \multirow{2}{*}{RMSprop} & \(154\) & 1 & 146 & 9 \\ & \(1\) & 74 & 8 & 67 \\ \hline \end{tabular} \end{table} TABLE IX: Confusion matrix for 2 class classification of comfortable versus rest classifier for the multi-class classification. 
Deep learning is used to detect emotion, where various optimizers were tried, out of which Adadelta performed best with a maximum accuracy of \(94.13\%\) on a blind test for the multi-class classification.

## VI Future Work

The system we prepared was for a single-occupant room, as the Kinect XBOX 360 we used can properly detect the coordinates of only a single occupant. Using the Kinect for Xbox One, the same scheme of illumination regulation can be performed for multiple occupants, which is planned as future work. The occupancy modeling we performed here only changes the illumination in the room. Other electrical equipment, like air-conditioners, fans, _etc._ [14], should also be regulated to maintain the occupant's comfort and reduce energy wastage. The use of a control scheme may provide the desired illumination more efficiently [22].
2309.02979
Come Closer: The Effects of Robot Personality on Human Proxemics Behaviours
Social Robots in human environments need to be able to reason about their physical surroundings while interacting with people. Furthermore, human proxemics behaviours around robots can indicate how people perceive the robots and can inform robot personality and interaction design. Here, we introduce Charlie, a situated robot receptionist that can interact with people using verbal and non-verbal communication in a dynamic environment, where users might enter or leave the scene at any time. The robot receptionist is stationary and cannot navigate. Therefore, people have full control over their personal space as they are the ones approaching the robot. We investigated the influence of different apparent robot personalities on the proxemics behaviours of the humans. The results indicate that different types of robot personalities, specifically introversion and extroversion, can influence human proxemics behaviours. Participants maintained shorter distances with the introvert robot receptionist, compared to the extrovert robot. Interestingly, we observed that human-robot proxemics were not the same as typical human-human interpersonal distances, as defined in the literature. We therefore propose new proxemics zones for human-robot interaction.
Meriam Moujahid, David A. Robb, Christian Dondrup, Helen Hastie
2023-09-06T13:24:45Z
http://arxiv.org/abs/2309.02979v1
# Come Closer: The Effects of Robot Personality on Human Proxemics Behaviours

###### Abstract

Social Robots in human environments need to be able to reason about their physical surroundings while interacting with people. Furthermore, human proxemics behaviours around robots can indicate how people perceive the robots and can inform robot personality and interaction design. Here, we introduce Charlie, a situated robot receptionist that can interact with people using verbal and non-verbal communication in a dynamic environment, where users might enter or leave the scene at any time. The robot receptionist is stationary and cannot navigate. Therefore, people have full control over their personal space as they are the ones approaching the robot. We investigated the influence of different apparent robot personalities on the proxemics behaviours of the humans. The results indicate that different types of robot personalities, specifically introversion and extroversion, can influence human proxemics behaviours. Participants maintained shorter distances with the introvert robot receptionist, compared to the extrovert robot. Interestingly, we observed that human-robot proxemics were not the same as typical human-human interpersonal distances, as defined in the literature. We therefore propose new proxemics zones for human-robot interaction.

## I Introduction

Social robots are entering our public and social spaces, where they need to be human-aware and follow social etiquette. Social robots can be valuable in the service industry, for example as receptionists [1], baristas [2], and bartenders [3]. If we observe how people interact with each other in physically situated face-to-face settings, the interaction goes well beyond the spoken words. Hence, the physical space in which the interaction takes place can have implications for the design of effective human-robot interaction. For such an effective interaction with humans, robots need to perceive and reason deeply about their physical surroundings, understand the physics of human interaction and engage in fluid interaction with humans in their physical environment [4]. One of the commonly used measures to understand human social behaviours in a physical space is the distance and positioning between interaction partners, often referred to as _proxemics_. Initial work on the concept of proxemics [5] provided a systematic basis for research into social and personal spaces between humans. It was demonstrated that social spaces substantially reflect and influence social relationships and the attitudes of people towards each other. It is not unreasonable to assume that such proxemics behaviours might also hold true in human-robot interaction (HRI). Furthermore, it is essential to understand how design decisions for social robots could influence human proxemics behaviours around these robots. To this end, this work evaluates the influence of different robot personalities, i.e. introvert vs extrovert, on proxemics using a fixed-position robot. Furthermore, we compare these proxemics to human-human interpersonal distances. According to E. Howarth [6], extrovert traits are preferred when working in service industry jobs, which holds for the receptionist role. Previous work [7] demonstrated that manipulating linguistic and prosodic cues of a social robot can be used to portray extroversion and introversion personality traits for the robot. Furthermore, an extrovert robot has been shown to be preferred and trusted over an introvert one [8].
However, these studies did not delve into the proxemics aspect. Our contributions in this paper include a unique study "in the wild" showing that human-robot proxemics are influenced by a robot's displayed personality. We use a Furhat1 robot receptionist, connected to a visitor management system and a TDS kiosk2, which is a self-service visitor check-in kiosk, as shown in Figure1. The robot is able to reason about its physical surroundings and interact using a combination of verbal and non-verbal communication [9]. We address two broad research questions (RQ): Footnote 1: [https://furhatrobotics.com](https://furhatrobotics.com) Footnote 2: [https://www.timedatasecurity.com/product/tds-visitor](https://www.timedatasecurity.com/product/tds-visitor) RQ1: Can the robot's personality influence human proxemics behaviours? RQ2: Are Human-robot interpersonal distances different to those found for human-human interpersonal distances? Fig. 1: Furhat Robot Receptionist connected to a visitor management system and a TDS kiosk for visitors’ check-in and check-out. ## II Background Edward T. Hall [10] introduced the theory of proxemics, which refers to the personal space that people maintain around themselves. This space differs depending on the social setting and their cultural backgrounds. Michael Argyle [11] suggested the intimacy equilibrium model, which links mutual gaze and proxemics behaviours [12]. This model illustrates how people react to the violation of their personal space by reducing mutual gaze and/or moving backward. A person's space is not only a physical buffer zone, but also a psychological one [13]. The invasion of our personal space can be uncomfortable and disquieting. Such a breach occurs when our intimate zone (0.00m to 0.45m) is violated by someone who is not an intimate connection. The ideal zone reserved for personal interactions among good friends or family is between 0.45 to 1.2, while the social zone reserved for interactions among acquaintances is between 1.2m and 2.1m [10]. These guidelines used in social psychology are based on human-human interaction, and can only map to human-robot interaction if we concede that a robot is perceived as a social actor and comparable to a human [14][15]. However, people do not always interact with robots in the same way that they interact with each other [16][17]. Previous research suggests that humans approach robots with distances reserved for a close acquaintance or a family member [18], which is considered within the intimate zone. In human-robot interactions, there are many factors that could influence human proxemics behaviours. Some factors are related to the robots such as its voice [18], appearance [19], speed [20], and height [21]. Other aspects are related to humans such as their age [22], personalities [23][24], prior experience with a robot [21], gender [25] or social norms [11]. Thus, it is critical to comprehend which robot design decisions impact human proxemics behaviours around robots, in order to enhance interactions between robots and humans. It is yet unclear if robot personality can have any effects on human proxemics behaviours, given that it has been shown that we can model certain personality traits [8]. In this study, we test if proxemics vary depending on the robot personality type. Previous research on robot personality aims attention at the facet of extroversion, for being one of the easier personality traits to exhibit, even in shorter interactions [26]. 
On that account, we deemed it feasible to focus on extroversion and introversion as types of robot personalities. Commonly, introverts are monotone and use fewer pauses and hesitations [27], while extroverts use extensive vocabulary and speak with a faster speech rate and louder voice, in addition to higher fundamental frequency and broader frequency range [28], [29]. Earlier work [30][31] demonstrates how calibrating prosody can portray extroversion and introversion as personality traits in artificial agents, and can influence how we perceive their personality. Other previous research [32] explored the influence of voice pitch on how people perceived a social robot receptionist. The findings point out that a high-pitched robot with a female voice was rated more attractive, emotional, and extrovert. Similarly, the voice of the robot can influence approaching distances [18]. Trovato et al. [33] demonstrate that any uncomfortable noise produced by a robot can lead to greater human-robot distance when people approach it. However, this proxemics effect can be masked by adding ambient background music. According to Bhagya et al. [34], proxemics distances preferred by humans increase with the increased volume of the internal noises (i.e.: machine noise) of a robot. However, this was not tested with a robot that uses advanced speech capabilities and natural language. Macmillan [35] and Chepesiuk [36] demonstrate that the ideal sound of a normal conversation is about 60dB, while any sound greater than 70db will be considered annoying noise for human ears, which might result in changing proxemics behaviours. During this experiment, we therefore kept the volume of the robot speaker between 58dB and 64dB, measured using a sound meter. ## III Methodology As mentioned above, Lim et al. [8] showed that one can successfully represent extroversion and introversion using linguistic features and vocal cues, they then went on to show that an extrovert robot is preferred and trusted more than an introvert robot in a robot barista setting. We followed a similar approach to Lim et al. [8], using prosodic parameters: volume, pitch, and tempo. We created two robot personalities with different traits of extroversion and introversion. However, as the context and domain of a robot receptionist was different to the robot barista, in their study, we relied on other previous work [29] to add pragmatic [7] and lexical/syntactic [37] differences between the two personalities. We explore how different types of robot personalities, specifically introversion and extroversion, can lead to different human proxemics behaviours. Furthermore, in the wild studies can give us new insights into spontaneous interactions with robots, giving people the freedom to approach the robot (or not) at a comfortable distance. This work addresses these questions and fills this gap by comparing how an introvert vs. extrovert robot receptionist will affect proxemics behaviours. ## IV Experimental Design and Setup Charlie, the Robot Receptionist set-up, consists of a Furhat robot [38] from Furhat Robotics3 that is connected to a visitor management system through an API, in order to get information about calendar, meetings, employee profile and to guide visitors with the check-in and check-out process. The robot is placed in the reception area of the UK National Robotarium4, as shown on Figure 2. 
The robot can interact with users using verbal and non-verbal communication and has advanced natural language understanding and dialogue management capabilities. The receptionist can help visitors with the registration process in order to print a badge, and give them information about directions, news and events. The Robot Receptionist is aware of its physical surroundings. Furthermore, it can reason about and react to events in the physical space around itself.

### _Conditions_

We created two robot personalities with different traits of extroversion and introversion by manipulating the lexicon (see Table I), volume, pitch and tempo (following [8]) as follows (a small configuration sketch of these prosody settings is given at the end of this section):

1. The Introvert: The robot speech was given a lower pitch (-20%), volume (-6dB) and speaking rate (-20%).
2. The Extrovert: The robot speech was given a higher pitch (+20%), volume (+6dB) and speaking rate (+20%).

We observed 3 types of interaction with the robot receptionist, and therefore classified participants into 3 groups:

1. **Verbal interactions**: People used verbal communication to interact with the robot.
2. **Non-verbal interactions**: People were standing in the robot's field of view, and looking towards the robot for more than 15 seconds, but without talking.
3. **No interaction**: People just passed by, without showing any interest. These were identified by a very short time period (less than 15 seconds) between being detected in the scene and leaving the scene.

The experiment was between-participants by design. The independent variables are thus:

- The **Robot Personality** as a manipulated independent variable with 2 levels: Introversion vs Extroversion policy.
- The **Type of Interaction** as a non-manipulated independent variable with 3 levels: Verbal, Non-verbal or No interaction.

The main dependent variable for the experiment is the **Shortest Distance** between the robot and the user during the entire interaction. During the experiment, a total of 120 participants took part (60 interactions for each personality) over 6 weeks. All interaction data were collected with the exact user position coordinates and orientation at 5 different time stamps, with \(t_{1}\) when the user is first detected and \(t_{5}\) when the user leaves the robot's scene. We also collected the robot-transcribed utterances and the duration of the interaction. The participants' visual or personal data were not collected. No audio or video data was collected, and no demographic information was collected as the experiment was in the wild. The procedure was ethically approved by our institution's ethics board.

### _Hypotheses_

In order to address our research questions, we formulated the following hypotheses:

* **H1:** There is a significant difference in the shortest distance between the two robot personality types (Introvert/Extrovert).
* **H2:** There is a significant difference in the shortest distance between the types of interaction (Verbal, Non-verbal and No Interaction).
* **H3:** There is a significant interaction effect between robot personality types and the types of interaction, with respect to shortest distance.

Fig. 2: The reception area in the National Robotarium where the robot receptionist is placed.
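The two personality conditions above amount to fixed prosody offsets applied to every utterance. A minimal sketch of such a configuration is shown below (Python); the helper function, the mapping onto SSML prosody attributes, and the example utterance are illustrative assumptions and not the actual Furhat implementation.

```python
# Prosody offsets from the paper; the SSML rate is expressed as an absolute
# percentage (80% / 120%) corresponding to the -20% / +20% speaking rate.
PERSONALITIES = {
    "introvert": {"pitch": "-20%", "volume": "-6dB", "rate": "80%"},
    "extrovert": {"pitch": "+20%", "volume": "+6dB", "rate": "120%"},
}

def to_ssml(text: str, personality: str) -> str:
    """Wrap an utterance in SSML prosody tags for the chosen condition."""
    p = PERSONALITIES[personality]
    return (f'<speak><prosody pitch="{p["pitch"]}" '
            f'volume="{p["volume"]}" rate="{p["rate"]}">{text}</prosody></speak>')

print(to_ssml("Welcome to the National Robotarium. How can I help you?", "extrovert"))
```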
## V Results and Analysis

Firstly, we grouped the participants into 3 groups based on the Type of Interaction they had with the robot. Table II represents the number and percentage of users engaging in each Type of Interaction. Using the users' location coordinates, we computed the Euclidean distance for each logged interaction. We used the Shortest Distance between the participant and the robot. The Shortest Distance is preferred over the average distance as it is more accurate [39]. Furthermore, the users used the kiosk next to the robot (Figure 1), which results in them standing at a specific distance from the robot for an unequal amount of time. This would lead to inconsistent results when calculating the average distance. Figures 3 and 4 show plots of users interacting with each personality. Table III shows the descriptive statistics of the Shortest Distance in each of the two personality conditions across all three Types of Interaction and over all the interactions as a whole. A factorial ANOVA (also known as two-way ANOVA) was carried out in IBM SPSS version 28.

_Robot Personality:_ There was a significant main effect of Robot Personality, with participants' Shortest Distance being closer to the Introvert Robot (Mean, 0.99) than the Extrovert Robot (Mean, 1.36), \(F\)(1,114) = 16.76, \(p\)\(<\).001. We can therefore accept H1. Furthermore, the ANOVA showed that the Shortest Distance for the Extrovert group was significantly higher than for the Introvert group, as shown in Figure 5.

\begin{table} \begin{tabular}{|p{113.8pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|} \hline **Type of Interactions** & \multicolumn{2}{c|}{**With the introvert Robot**} & \multicolumn{2}{c|}{**With the extrovert Robot**} \\ \cline{2-5} & **Number** & **Percentage** & **Number** & **Percentage** \\ \hline Participants using Verbal Interaction & 22 & 36.66\% & 18 & 30.00\% \\ \hline Participants using Non-verbal Interaction & 19 & 31.66\% & 20 & 33.33\% \\ \hline No Interaction & 19 & 31.66\% & 22 & 36.66\% \\ \hline \end{tabular} \end{table} TABLE II: The number of users and the types of their interactions with the Robot Receptionist using different policies

Fig. 4: Introvert Robot: Scatter plot of the users' X and Y coordinates showing the users' position on the reception scene in front of the Introvert Robot receptionist. (The black dot represents the robot's position at (0.0,0.0))

Fig. 5: Box plots showing that the users approached the introvert robot within a significantly shorter distance compared to the extrovert robot. (Boxes show the means and the two middle quartiles, while the whiskers show maximums and minimums other than any notional outliers.)

Fig. 3: Extrovert Robot: Scatter plot of the users' X and Y coordinates showing the users' position on the reception scene in front of the Extrovert Robot receptionist.
(The black dot represents the robot’s position at (0.0,0.0)) \begin{table} \begin{tabular}{|p{113.8pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|} \hline **Type of Interactions** & \multicolumn{4}{c|}{**Infrovert Robot**} & \multicolumn{4}{c|}{**Extrovert Robot**} \\ \cline{2-5} & **Min** & **Max** & **Mean** & **Std** & **Min** & **Max** & **Mean** & **Std** \\ \hline Participants using Verbal Interaction & 0.41m & 0.91m & **0.61m** & 0.12 & 0.55m & 1.47m & **L19m** & 0.46 \\ \hline Participants using Non-verbal Interaction & 0.22m & 1.35m & 0.64m & 0.32 & 0.47m & 2.08m & 1.01m & 0.51 \\ \hline No Interaction & 0.78m & 2.34m & 1.78m & 0.48 & 0.54m & 2.72m & 1.79m & 0.54 \\ \hline All Interactions & 0.22m & 2.34m & 0.99m & 0.63 & 0.47m & 2.72m & 1.36m & 0.00 \\ \hline \end{tabular} \end{table} TABLE III: Shortest Distance maintained by participants while interacting with the Robot Receptionist using different personalities _Type of Interaction:_ There was a significant main effect of Type of Interaction on the participants' Shortest Distance from the Robot, _F_(2,114) = 61.69, _p\(<\)_.001. We can therefore accept H2. Post hoc tests using Bonferroni's correction for multiple comparisons, revealed that the participants' Shortest Distance was significantly shorter for those in the Verbal (Mean, 0.88) and Non-verbal (Mean, 0.84) Interaction groups (_p\(<\)_0.001 in both cases) compared to those in the No Interaction group (Mean, 1.79). There was no significant difference in Shortest Distance between participants in the Verbal and Non-Verbal Interaction groups (_p_ =.902). _Interaction Effect:_ There was a significant interaction effect between the Robot Personality and the Type of Interaction measured by Shortest Distance _F_(2,114) = 4.57, \(p\) =.012. This effect indicates that participants' Shortest Distance from the Introvert Robot and the Extrovert Robot was affected differently by their Type of Interaction. We can therefore accept H3. Figure 6 shows the Means of the Shortest Distance for each group along with their 95% confidence intervals. We can see this indicates that the Shortest Distance for those participants in both Verbal and Non-Verbal groups was influenced by Robot personality (with them being closer to the Introvert Robot). On the other hand, the Shortest Distance of those who did not interact with the robot was not affected by robot personality. ## VI Discussion _RQ1: Can the robot's personality influence human proxemics behaviours?_ In support of H1, the results show that different types of robot personalities, in this case introversion and extroversion, can lead to different human proxemics behaviours. Furthermore, the distance was shorter with the introvert robot, which indicates that participants were more comfortable standing closer to the introvert robot. Shorter proxemics could be linked to positive feelings about the robots. There are wider implications of these results for human-robot interaction theory and the design of a social robot and specifically for designing its personality. _RQ2: Are Human-robot interpersonal distances different to those found for human-human interpersonal distances?_ By accepting H2, we demonstrate that the Shortest Distance between human and robot was different to what would be expected between two humans, based on the Type of Interactions regardless of the robot personality. 
We notice that none of the distance ranges fall within the human-human social zone, which indicates that people did not follow the same human-human proxemics rules [10] to interact with the robot, as shown on Figure 7. Participants using verbal communication kept a distance between 0.41m and 1.47m. Nonetheless, this distance does not fit with the social zone for human-human interactions (1.2m to 3.6m). It is however very close to the personal zone (0.45m to 1.2m) as considered by Hall's proxemics theory [10]. Similarly, participants using non-verbal communication kept a distance between 0.23m and 2.08m. This distance overlap between the social zone for human-human interactions (1.2m to 3.6m) and the personal zone (0.45m to 1.2m). Lastly, participants who avoided interacting with the robot used a zone in the reception area, which is \(\geq\)0.78m away from the robot. This is again considered in human-human interaction as being in a personal and social zone. These results contradict the media equation theory [14][40] and Hall's proxemics theory [10]. While these theories rely on principles from social psychology and social science, our results suggest that these theories do not always map to human robot-interaction. Therefore, in Figure 6(c), we propose a new set of proxemics zones for this form of interaction with a stationary robot. Fig. 6: Mean Shortest Distances (in metres) for the different groups, along with the 95% confidence intervals. Fig. 7: Human-human interaction zones according to proxemics theory compared to interaction zones with the Robot Receptionist Our findings suggest the comfortable distance people like to keep when approaching the robot might vary between 0.23m and 2.08m (III, second row, Non-verbal Interaction, Introvert Robot, Min, 0.23m and Extrovert Robot, Max, 2.08m). The grey coloured zones in Figure 7 b and c, represent the narrow zone very close to the robot, for which we have no data. The vision system within the Robot itself is unable to sense and gather accurate data within this zone very close to it. Thus, it represents what might be termed a blind spot. Thus we have no data on whether or not people were approaching closer than this. For this reason, we represent this space in grey as unknown. This close zone is a space for future research. As this experiment was done "in the wild", participants were interacting with the robot in a spontaneous way. Some participants were using only non-verbal cues. While the robot was verbally interactive, they chose not to verbally respond to the robot. This behaviour could be explained by people meeting the robot for the first time, or being curious about the robot and observing its behaviour. This type of behaviour will be unusual in the context of human-human interaction, again contradicting the media equation theory. However, this suggests that the non-verbal social communication of the robot could be improved to also respond using more non-verbal cues in these types of situations. ## VII Conclusion and Future Work We show that the distance between robot and human can be different depending on the robot personality. To demonstrate this, we focused on introversion and extroversion as personality traits. We provide empirical evidence that people who had a verbal interaction with the robot maintained a shorter distance with an Introvert robot compared to the Extrovert version. 
Overall, the distance people maintained with the robot, regardless of the robot personality, was between 0.41m and 1.47m for verbal interaction, and between 0.23m to 2.08m for non-verbal interaction. This range is overlapping between human-human social zone and personal zone. These results imply that proxemics theory does not map directly to human-robot interactions as people do not interact with robots in the same way they interact with each other. There are broader implications of these results in the design of social robots and interactions. For instance, factors related to the robot personality traits can influence human proxemics behaviours around these robots. In the future, further data will be collected to analyse other parameters that can influence proxemics behaviours, such as the robot's physical aspects and gestures. Moreover, we want to analyse further variables such as the duration of the interaction and look into the human trajectories in more detail, and how these proxemics behaviours change over time. ## Acknowledgements This work was funded and supported by the UKRI Node on Trust (EP/V026682/1) [https://trust.tas.ac.uk](https://trust.tas.ac.uk). We would like to thank the National Robotarium Engineers [https://thenationalrobotarium.com](https://thenationalrobotarium.com) and the TDS support team [https://www.timedatasecurity.com/product/tds-visitor](https://www.timedatasecurity.com/product/tds-visitor).
2305.15108
The Role of Output Vocabulary in T2T LMs for SPARQL Semantic Parsing
In this work, we analyse the role of output vocabulary for text-to-text (T2T) models on the task of SPARQL semantic parsing. We perform experiments within the context of knowledge graph question answering (KGQA), where the task is to convert questions in natural language to the SPARQL query language. We observe that the query vocabulary is distinct from human vocabulary. Language Models (LMs) are predominantly trained for human language tasks, and hence, if the query vocabulary is replaced with a vocabulary more attuned to the LM tokenizer, the performance of models may improve. We carry out carefully selected vocabulary substitutions on the queries and find absolute gains in the range of 17% on the GrailQA dataset.
Debayan Banerjee, Pranav Ajit Nair, Ricardo Usbeck, Chris Biemann
2023-05-24T12:55:04Z
http://arxiv.org/abs/2305.15108v1
# The Role of Output Vocabulary in T2T LMs ###### Abstract In this work, we analyse the role of output vocabulary for text-to-text (T2T) models on the task of SPARQL semantic parsing. We perform experiments within the the context of knowledge graph question answering (KGQA), where the task is to convert questions in natural language to the SPARQL query language. We observe that the query vocabulary is distinct from human vocabulary. Language Models (LMs) are pre-dominantly trained for human language tasks, and hence, if the query vocabulary is replaced with a vocabulary more attuned to the LM tokenizer, the performance of models may improve. We carry out carefully selected vocabulary substitutions on the queries and find absolute gains in the range of 17% on the GrailQA dataset. ## 1 Introduction Knowledge Graph Question Answering (KGQA) is the task of finding answers to questions posed in natural language, using triples present in a KG. Typically the following steps are followed in KGQA: 1) Objects of interest in the natural language question are detected and linked to the KG in a step called entity linking. 2) The relation between the objects is discovered and linked to the KG in a step called relation linking. 3) A formal query, usually SPARQL1, is formed with the linked entities and relations. The query is executed on the KG to fetch the answer. Footnote 1: [https://www.w3.org/TR/rdf-sparql-query/](https://www.w3.org/TR/rdf-sparql-query/) Our focus in this work is the query building phase, henceforth referred to as KGQA semantic parsing. The motivation of our work stems from Banerjee et al. (2022), where minor vocabulary substitutions to handle non-printable special characters for T5 Raffel et al. (2020) produced better results on the task of SPARQL semantic parsing. In this work, we extend the idea and replace the entire SPARQL vocabulary with alternate vocabularies. As in Banerjee et al. (2022), we replace certain special characters in the SPARQL vocabulary, such as {, } with textual identifiers, as T5 is known to have problems dealing with these special characters Banerjee et al. (2022). We call this a masked query, and in this work, we test the ability of the models to generate this masked query, given the natural language question as input. A sample question, the original SPARQL query, and the corresponding masked query are as shown below (for the Wikidata KG Vrandecic and Krotzsch (2014)) : _Is it true that an Olympic-size swimming pool's operating temperature is equal to 22.4?_ ASK WHERE { wd:Q2084454 wdt:P5066?obj filter(?obj = 22.4) } ASK WHERE OB ent0 rel0?obj filter (?obj = 22.4 ) CB In the era of pre-trained Language Models (LMs) Devlin et al. (2019); Raffel et al. (2020) it is common practice to fine-tune models on custom downstream datasets. This requires supervised training which results in modification of weights of the models using some training algorithm. More recently, the technique of prompting of language models Brown et al. (2020); Shin et al. (2020) has been developed, which elicits the desired response from a LM through a task description and a few input-output examples. Brown et al. (2020) shows that such a strategy works better for larger models. It has however been observed that prompt design is brittle in behaviour and displays sensitivity to the exact phrase (Shin et al., 2020). A more recent innovation is that of prompt tuning (Lester et al., 2021), where the task-specific prompt is learnt on a smaller external neural network. 
The gradients are computed and flow through the LM, but leave the weights of the LM itself unchanged. Instead, the weights of the prompt tuning network change and produce a custom and continuous prompt which produces the desirable response from the LM. A similar method is prefix tuning (Li and Liang, 2021), which is known to perform better for generation tasks (Ma et al., 2022). In this method, the original inputs and outputs are kept the same, but the input is pre-pended with a continuous prefix learnt in the external network. This prefix allows the model to understand the exact task to be performed by it. As primary contribution, in this work, we perform an analysis of how the complexity of output vocabularies affects the performance on the KGQA semantic parsing task for prefix and fine-tuned language models. Code and data can be found at [https://github.com/debayan/sparql-vocab-substitution](https://github.com/debayan/sparql-vocab-substitution). ## 2 Related Work A study of low-resource semantic parsing using prompt tuning was performed by Schucher et al. (2022) on the Top v2 (Chen et al., 2020) and Overnight (Wang et al., 2015) datasets. Prompt tuning, while not the same as prefix tuning, still keeps the LM weights frozen while the prompts are learnt on an external network. In their experiments, they perform a single kind of vocabulary substitution but find no noticeable performance improvements. No specific study is made of the change in performance with vocabularies of varying complexities, which is a task we undertake. Another difference is that we perform experiments in the high-resource use case as opposed to low-resource. Another work which is similar to ours is Sun et al. (2022), where the authors experiment with prefix tuning on the task of semantic parsing, and find problems with non-standard vocabularies of logical forms. In their case, they work with the TOP v2 (Chen et al., 2020) and PIZZA (Arkoudas et al., 2022) datasets. The keywords in those datasets consist of words joined by underscores (eg: IN:GET_REMINIDER_DATA_TIME ), which poses a problem for the sub-word tokenizer of the transformer based models. They find that fine tuning a model on these datasets outperforms prefixtuning by a large margin. However, when they add the non-standard keywords to the tokenizer vocabulary and re-train the tokenizer to generate new embeddings for these keywords, fine tuning and prefix tuning perform at par. Our work is different in a few respects: firstly, due to the specific research focus of our group, we experiment with a semantic parsing dataset for KGQA, namely GrailQA (Gu et al., 2021). Secondly, instead of retraining the tokenizer, we perform a simpler procedure of pre-processing the dataset by replacing the current vocabulary with a new vocabulary. We then train the models on this modified dataset, and as a post-processing step, substitute back the original vocabulary in place of the new vocabulary. ## 3 Prefix Tuning Prefix tuning prepends a set of tunable weights to every key-value pair in the transformer attention. The transformer attention is represented as follows: \[\text{attn}(Q,K,V)=\text{softmax}(\frac{Q\cdot K^{\top}}{\sqrt{d}})V \tag{1}\] where the query \(Q\), key \(K\) and value \(V\) are obtained through affine transformations on the input. \(d\) represents the model dimension. Prefix tuning modifies the transformer attention by adding tunable prefixes to \(K\) and \(V\), thereby modifying \(K\) as \(K^{\prime}=[h_{K};K]\) and \(V\) as \(V^{\prime}=[h_{V};V]\). 
Here \(h_{K}\) and \(h_{V}\) represent the key prefix and the value prefix respectively. Following Li and Liang (2021) we model these prefixes using a two layer MLP as follows: \[\begin{split} h_{K}&=W_{K,2}f(W_{K,1}E+b_{K,1})+b_{K, 2}\\ h_{V}&=W_{V,2}f(W_{V,1}E+b_{V,1})+b_{V,2}\end{split} \tag{2}\] where \(W\in\mathbb{R}^{d\times d}\) and \(b\in\mathbb{R}^{d}\) are trainable weights and biases respectively. \(E\in\mathbb{R}^{C\times d}\) is a trainable embedding matrix with \(C\) as the prefix length. ## 4 Models and Experimental Setup We carry out prefix-tuning and fine-tuning experiments with two versions of the T5 model: namely T5-Small (60 million parameters) and T5-Base (220 million parameters). Questions are fed as input during training while masked SPARQL queries, as described in Section 1, are provided as labels for supervision. For evaluation, we use the exact-match metric. A generated query is matched token by token, while ignoring white-spaces, to the gold query. The percentage of queries matched is reported. ### Hyper-parameters and Implementation Details Throughout our experiments, the prefix length is fixed to \(50\). For prefix tuning experiments we use the Adafactor (Shazeer and Stern, 2018) optimizer with a constant learning rate of \(0.001\). Fine-tuning experiments are optimized through AdamW (Loshchilov and Hutter, 2019) with a square root decay schedule, a maximum learning rate of \(0.0015\) and a linear warm-up of \(5000\) steps. Our code is implemented with HuggingFace Transformers2(Wolf et al., 2020) and OpenPrompt3(Ding et al., 2022). T5-Small experiments were run on 12GB Nvidia GTX-1080 and RTX-2080 GPUs, and T5-Base experiments were run on 48GB Nvidia RTX-A6000. For fine-tuning, we run each training thrice with three separate seeds for 120 epochs each. For prompt tuning we do the same for 400 epochs. We report the inference results of these trained models on the test sets of the respective datasets. Footnote 2: [https://github.com/huggingface/transformers](https://github.com/huggingface/transformers) Footnote 3: [https://github.com/thunlp/OpenPrompt](https://github.com/thunlp/OpenPrompt) ## 5 Vocabulary The original vocabulary of the GrailQA dataset consists of 48 words. The T5 tokenizer splits these words into 124 sub-words. This tokenizer specific vocabulary size (TSVS) is seen in the last column of Table 1. In the next column, the original average logical form (SPARQL query) length can be seen as 125 tokenized sub-words. We wish to see how a new output vocabulary affects performance, and as a result, we construct a set of special vocabularies and substitute them in-place of the original SPARQL vocabulary. With reference to the settings in Table 1, each vocabulary is as described below: **original** The masked SPARQL queries remain as they are. No replacement of the original SPARQL keywords is made with an alternate vocabulary. **dictionary** The SPARQL keywords are replaced with a vocabulary of English words. For example, SELECT may be replaced with DOG, [ may be replaced with CAT etc. During the pre-training phase a LM is likely to have seen such words far more frequently than the SPARQL keywords. This mode tests how the model behaves when the output vocabulary is comprised of well known English words. **char1** The SPARQL keywords are replaced with a single character of the English alphabet, for example, SELECT is replaced with A, WHERE is replaced with B. 
Additionally, numerical digits from 1-9 are used, and if the size of vocabulary demands more, we add single length special characters, such as * and $. **char2**, **char4** and **char8** settings apply vocabulary substitution of 2, 4 and 8 character lengths chosen randomly, constituted from the characters A-Z and digits 0-9. For example, a typical **char8** substitution would be SELECT replaced by ATYZGFSD. This setting is designed to test the behaviour of the models when asked to produce more number of tokens per original-vocabulary word. A sample of a question, the SPARQL and the corresponding substitutions is provided in the Appendix in Table 2. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{6}{c|}{GrailQA} \\ \hline & \multicolumn{3}{c|}{T5-Small} & \multicolumn{3}{c|}{T5-Base} & \\ \hline & PT & FT & PT & FT & TSVS & ALFL \\ \hline char8 & 74.03 & 86.57 & 82.65 & 86.72 & 306 & 263 \\ char4 & 76.43 & 87.09 & 84.92 & 87.10 & 159 & 141 \\ char2 & 83.29 & 91.49 & 89.83 & 92.30 & 90 & 87 \\ char1 & **84.89** & **92.13** & **91.24** & **92.61** & 57 & 57 \\ dictionary & 82.57 & 91.95 & 90.93 & 92.48 & 49 & 44 \\ original & 67.10 & 74.08 & 73.06 & 74.45 & 124 & 125 \\ \hline \end{tabular} \end{table} Table 1: Exact match percentages for generated masked SPARQL queries. Best performance is always found in substituted vocabularies. For **char** settings, accuracy drops as vocabulary and query lengths increase. TSVS = Tokenizer specific vocabulary size, ALFL = Average logical form length, PT = Prefix Tuning, FT = Fine Tuning ## 6 Datasets For our experiments, we require a dataset which contains a mapping of natural language questions to their corresponding logical forms and is large in size, since we test the high resource use-case. **GrailQA**4 is based on the Freebase knowledge graph Bollacker et al. (2008) and consists of 64,331 questions designed to test three levels of generalisation, ie, i.i.d, compositional and zero-shot. For our purposes, we split the train set itself to three parts, since we are not interested in testing compositional generalisation aspects of the test set of this dataset. We are left with the following configuration: test: 8868, dev: 4434, train: 31035. Footnote 4: [https://dki-lab.github.io/GrailQA/](https://dki-lab.github.io/GrailQA/) ## 7 Analysis As seen in Table 1, the best performance for prefix and fine tuning is achieved for substituted vocabularies. The original vocabulary lags behind in general, which points to the finding, that the choice of an appropriate vocabulary improves performance for semantic parsing. Further, among the substituted vocabularies, the setting **char8** performs the worst, which signifies the adverse role of the extra decoding load of this vocabulary on the performance of the model. This finding is different from that of Schucher et al. (2022), who find their in-vocab setting performing no better overall. They attribute it to the substitutions possibly masking the meanings of the intents, for their given dataset. On the contrary, we find significant gains for GrailQA. It must be noted however, that we perform high-resource prefix tuning while they perform low-resource prompt tuning, and hence results may differ. As seen in Figure 1, for the **char** settings, as the size of vocabulary increases, the prefix tuning accuracy drops. In the said figure, we define vocabulary compression ratio as the size of the new vocabulary divided by the size of the original vocabulary. 
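To make these two quantities concrete, the tokenizer-specific vocabulary size (TSVS) and the vocabulary compression ratio can be computed roughly as follows (a sketch using the HuggingFace T5 tokenizer; the short word lists are placeholders rather than the full 48-keyword GrailQA vocabulary):

```python
from transformers import T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")

def tsvs(vocab):
    """Tokenizer-specific vocabulary size: total sub-words over all keywords."""
    return sum(len(tokenizer.tokenize(word)) for word in vocab)

# Placeholder vocabularies: a few SPARQL keywords vs. a char1-style substitution.
original = ["SELECT", "DISTINCT", "WHERE", "FILTER", "ORDER BY"]
char1    = ["A", "B", "C", "D", "E"]

print(tsvs(original), tsvs(char1))        # sub-word counts for each vocabulary
print(tsvs(char1) / tsvs(original))       # vocabulary compression ratio
```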
Apart from vocabulary size, the query length also matters. We dual-define vocabulary compression ratio as the size of query length after substitution of new vocabulary divided by size of original query length, and plot on the same graph. When compared to the fine-tuning plot (Figure 2), prefix tuning has a steeper drop in accuracy, and the performance for T5-Small and T5-Base vary more significantly. It leads to the finding that fine-tuning is less sensitive to vocabulary changes, and the difference in model sizes between T5-Small and T5-Base also seems to matter less. In Figures 1 and 2, it can be seen that the **original** setting for the masked SPARQL vocabularies produce accuracies which are below the **char** family vocabulary curves. It suggests that vocabulary compression ratio alone is not a deciding factor in accuracy. If the vocabulary family changes from SPARQL to characters, there is an initial shift in accuracy, and after that the complexity of the character vocabulary further affects the accuracy. In Table 1, the **dictionary** setting performs slightly worse than the **char1** setting, although it has lower TSVS and ALFL. This suggests that the vocabulary size and query length are not the only factors that affect the eventual accuracy. Perhaps Figure 1: Prefix tuning accuracy drops as vocabulary and query lengths increase for **char** settings. TSVS = Tokenizer specific vocabulary size, ALFL = Average logical form length Figure 2: Fine-tuning accuracy drop is more gradual when compared to prefix tuning, and the performance of T5-Small and T5-Base are similar. TSVS = Tokenizer specific vocabulary size, ALFL = Average logical form length the frequency of the tokens seen by the model during the pre-training task plays a role. It is likely that the model has encountered, during pre-training, single characters a far larger number of times than the words used in **dictionary** vocabulary. ## 8 Error Analysis We performed an error analysis on a sample of 100 randomly selected questions which produced an incorrect output. In the **original** setting, roughly 50% errors were due to the presence of non-printable characters in the query (eg: \({}^{\Lambda}\)). We found that in the initial masked query, while we had replaced some non-printable characters in the pre-processing stage (eg: {, } ), we had not managed to replace the full set of non-printable characters. The original T5 paper mentions curly braces as one of the class of tokens that are not present in the pre-training corpus, however, a comprehensive list of the tokens that do not work with T5, or work with limited efficiency, is not available. In this scenario, it seems that a better approach is to replace the entire vocabulary with one that is entirely known to T5, for example, English words. When comparing errors made by **original**, that were fixed by **dictionary** and **char1**, we observed that roughly 30% of the cases were of variable placement, where the variable placeholders like ent0, rel0 were found to be in the wrong order in the output query in the **original** setting. Rest of the corrections belonged to the category of syntax errors. This points to the finding that alternate vocabularies improve the ability of T5 to correctly produce logical forms from a semantic perspective. To analyse the effect of increasing complexity of vocabulary, we compare 100 randomly selected errors made by **char8** with **char2**. 
In both these settings, no character is non-printable, and the only errors are either syntax errors, variable placement errors, structural errors or intent errors. Out of the 100 questions, 90 were found to be correct in the **char2** setting. For these 90 questions in the **char8** setting, the highest proportion of errors belonged to syntax (where the query is malformed). The next most prominent class of errors belonged to variable placement, followed by structural errors (e.g., two triples instead of three). The major takeaway from this analysis is that for **char2** there were no syntax errors, while in **char8** there are a significant number of such errors. ## 9 Conclusion In this work we carried out experiments with new output vocabularies, where we carefully substituted the original members of the vocabulary with new ones. We found that when the original SPARQL vocabulary is replaced with words from an alternate vocabulary closer to the T5 tokenizer vocabulary, the model consistently performs better. As a contribution, we believe that our findings will enable researchers in the field of semantic parsing to deploy smaller models with a modified vocabulary and still find satisfactory performance. This would, in the longer term, lead to energy savings. As future work, we would like to explore the behaviour of the same models in more depth using attention maps. Moreover, the significant shift in initial performance on changing the vocabulary from **original** to **char** and **dictionary** demands further investigation. Similarly, the relatively lower performance of the **dictionary** setting when compared to the **char1** setting, in spite of having a lower tokenized vocabulary size (TSVS), needs to be investigated further. Perhaps sub-words that are seen more frequently during the pre-training task of the LM perform better when substituted into the semantic parsing output vocabulary. ## 10 Limitations We found that prefix tuning takes much longer to converge when compared to fine tuning, and for T5-Base, it takes around 10 days on a 48 GB GPU to complete tuning for a single setting in Table 1. Due to limited resources and with an aim to save energy, we did not conduct experiments with larger models such as T5-Large and T5-XL. We also did not perform experiments with smaller splits of the same datasets, which could have given further insight into how model performance varies when less training data is available.
2310.09797
A Number Representation Systems Library Supporting New Representations Based on Morris Tapered Floating-point with Hidden Exponent Bit
The introduction of posit reopened the debate about the utility of IEEE754 in specific domains. In this context, we propose a high-level language (Scala) library that aims to reduce the effort of designing and testing new number representation systems (NRSs). The library's efficiency is tested with three new NRSs derived from Morris Tapered Floating-Point by adding a hidden exponent bit. We call these NRSs MorrisHEB, MorrisBiasHEB, and MorrisUnaryHEB, respectively. We show that they offer a better dynamic range, better decimal accuracy for unary operations, more exact results for addition (37.61% in the case of MorrisUnaryHEB), and better average decimal accuracy for inexact results on binary operations than posit and IEEE754. Going through existing benchmarks in the literature, and favorable/unfavorable examples for IEEE754/posit, we show that these new NRSs produce similar (less than one decimal accuracy difference) or even better results than IEEE754 and posit. Given the entire spectrum of results, there are arguments for MorrisBiasHEB to be used as a replacement for IEEE754 in general computations. MorrisUnaryHEB has a more populated ``golden zone'' (+13.6%) and a better dynamic range (149X) than posit, making it a candidate for machine learning computations.
Stefan-Dan Ciocirlan, Dumitrel Loghin
2023-10-15T10:59:41Z
http://arxiv.org/abs/2310.09797v1
A Number Representation Systems Library Supporting New Representations Based on Morris Tapered Floating-point with Hidden Exponent Bit ###### Abstract The introduction of posit reopened the debate about the utility of IEEE754 in specific domains. In this context, we propose a high-level language (Scala) library that aims to reduce the effort of designing and testing new number representation systems (NRSs). The library's efficiency is tested with three new NRSs derived from Morris Tapered Floating-Point by adding a hidden exponent bit. We call these NRSs MorrisHEB, MorrisBiasHEB, and MorrisUnaryHEB, respectively. We show that they offer a better dynamic range, better decimal accuracy for unary operations, more exact results for addition (37.61% in the case of MorrisUnaryHEB), and better average decimal accuracy for inexact results on binary operations than posit and IEEE754. Going through existing benchmarks in the literature, and favorable/unfavorable examples for IEEE754/posit, we show that these new NRSs produce similar (less than one decimal accuracy difference) or even better results than IEEE754 and posit. Given the entire spectrum of results, there are arguments for MorrisBiasHEB to be used as a replacement for IEEE754 in general computations. MorrisUnaryHEB has a more populated "golden zone" (\(+13.6\%\)) and a better dynamic range (149X) than posit, making it a candidate for machine learning computations. Keywords:Number Representation System Tapered Floating Point IEEE754 Posit Computer Arithmetic ## 1 Introduction Computers are well-known for their ability to run complex mathematical operations in a very fast way. To run such operations, the operands need to be represented in practical ways using the finite resources of modern computers. For example, numbers cannot be represented with infinite precision in computers. Multiple number representation systems (NRSs) were created to simulate the infinite world of mathematics. For the real number computations (or rational numbers, to be more precise) the IEEE754 standard [1] is the norm from its introduction in 1985. Only recently, Gustafson et al. [10, 9] questioned its dominance by proposing new NRSs such as unum and posit. The introduction of a new NRS, such as posit, produced multiple research work on its effect on domains such as scientific computing [17, 22], artificial intelligence [15, 2, 3, 18, 13, 12], digital signal processing [16], and computer architecture [14, 19, 27, 7, 25]. Previously proposed NRSs such as Morris Tapered Floating-Point [20] and universal real representation [11] were revived and re-analyzed. This process of re-analysis and benchmarking is resource and time-consuming every time a new NRS is introduced. This article proposes a library with multiple NRS implementations, including IEEE754, Floating-Point, Morris Tapered Floating-Point, Posit, Rational, Fractional, and Fixed-Point. In addition, existing benchmarks from the literature are implemented in this library. The main aim of our library is to have an easy way for adding a new NRS and test it immediately on already proposed and well-known benchmarks. This can make the analysis of a new NRS more efficient and less time-consuming. A secondary aim is to make it easy to add new benchmarks and make them compatible with all possible NRSs. In this paper, the library's efficiency and usage are tested by adding three new NRSs derived from Morris Tapered Floating-point [20] by using the concept of hidden exponent bit. 
Currently, there are a few libraries that implement NRSs. Some of them support only one NRS: posit (SoftPosit3, Posit Mathematica Notebook [8], Posit Octave4, Julia5), high-precision floating-point (The GNU Multiple Precision Arithmetic Library6, High Precision Arithmetic Library7, Flexfloat [26]). FloPoCo [5] is a library with floating-point and posits [21], but its scope is to generate arithmetic cores for FPGAs. These libraries do not offer a high spectrum of changeable attributes and NRSs. There is a need for a library that lets the developer change attributes like size, exponent size, fraction size, size of the exponent size, rounding method, rules for underflow and overflow. The solution is found in Universal Numbers Library [24, 23] which has multiple NRSs (minus Morris, plus unum type 1 and 2 and valids), a good set of benchmarks, a better performance and ways of adding new NRSs. The differences between our NRS library and Universal Numbers Library are the programming language (Scala vs. C++) and scope (easy to add and test new NRSs vs. performance for new NRSs). We believe these two libraries are complementary, not competitors. Our NRS library can be used for designing and benchmarking new NRSs. The filtered NRSs can then be implemented for performance in the Universal Numbers Library. Footnote 3: [https://gitlab.com/cerlane/SoftPosit](https://gitlab.com/cerlane/SoftPosit) Footnote 4: [https://github.com/diegofgeoelho/positsoctave](https://github.com/diegofgeoelho/positsoctave) Footnote 5: [https://github.com/interplanetary-robot/SigmoidNumbers](https://github.com/interplanetary-robot/SigmoidNumbers) Footnote 6: [https://gmplib.org/](https://gmplib.org/) Footnote 7: [https://www.nongnu.org/hpalib/](https://www.nongnu.org/hpalib/) To show the efficiency of our library, we design and evaluate three new NRSs based on Morris Tapered Floating-point with a hidden exponent bit. These three proposed NRSs are denoted by MorrisHEB(n, g, r), MorrisBiasHEB(n, g, r), and MorrisUnaryHEB(n, r), where \(n\) is the size (number of bits), \(g\) is a parameter that dictates the size of the exponent size, and \(r\) is the rounding rule. These NRSs were easily added by using the super-class TaperedFloatingPoint(size) and then implementing their underflow, overflow, exponent, and binary representation rules without any effort on mathematical operations. They are evaluated under characteristics, unary operations, binary operations, and literature benchmarks, with the following results: * Better dynamic range than posit and IEEE754. * MorrisUnaryHEB(16, r) has 13.6% more unique values in the "golden zone" than Posit(16, 2, r). * Increased number of unique values compared to the basic Morris tapered floating-point [20]. * Better decimal accuracy for unary operations. * More exact results for addition (37.61% more in the case of MorrisUnaryHEB(12, RE)). * Better decimal accuracy for inexact values on binary operations. To summarize, we make the following contributions in this paper. First, we design and implement a Scala library that makes it easy to add, test, and fine-tune number representation systems (NRSs). The library implements a series of well-known benchmarks from the literature. Secondly, we introduce three new NRSs based on Morris tapered floating-point, and thirdly, we analyze these three proposed NRSs together with well-known NRSs such as IEEE 754 floating-point and posit. The remainder of this paper is structured as follows. 
The second section contains the motivation behind the library, its scope, and its architecture. In the third section, a brief overview of the NRSs implemented in the library can be found, together with some decisions taken throughout the development. The definitions of the new NRSs can be found in the fourth section. In the fifth section, we present the evaluation, before concluding in the sixth section. ## 2 Library The main goal of our NRS library is to be an easy-to-use platform for adding and testing new NRSs and benchmarks. Figure 1 offers an overview of our library. In this library, an NRS is equipped with basic arithmetic operations (addition, subtraction, multiplication, division, exact division, modulo, power, negate, inverse), logic operations (less, equal, not equal, greater, greater or equal, less or equal), advanced arithmetic operations (minimum, maximum, absolute value, signum, \(n^{th}\) root, exponential, natural logarithm, logarithm), trigonometric functions (sin, cos, tan, cot, sec, csc), inverse trigonometric functions (arcsin, arccos, arctan, arccot, arcsec, arccsc), hyperbolic functions (sinh, cosh, tanh, coth, sech, csch), inverse hyperbolic functions (arcsinh, arccosh, arctanh, arccoth, arcsech, arccsch), conversion functions to other NRSs, and viewing functions. All the above operations are exposed by an NRS interface that is part of the library. This interface is inherited by all the NRS implementations. When adding a new NRS, a developer needs to inherit the NRS interface and implement all the operations. Some of them have a generic implementation with Taylor series. Once done, all the benchmarks already implemented by our library can be run on this new NRS, without writing any additional code. If the NRS is derived from floating-point or tapered floating-point, the developer only needs to implement the rules for accepted exponent values, underflow, overflow, binary representation, and rounding. For adding a new benchmark, the developer needs to implement the algorithm using the generic NRS interface, and all past and future NRS implementations will be able to run it. The rational numbers NRS can be used as a reference, but in some circumstances, the computation time might be too long given its infinite precision. In such cases, the fractional numbers NRS is a good alternative. The benchmark suite implemented in our library contains unary operations, binary operations, density population of the NRS, and literature benchmarks (as we shall see in Section 5.4). This makes the life of scientists and developers much easier. A scientist can focus solely on developing a new NRS, knowing that many existing benchmarks will be able to test it without writing additional code. Similarly, when adding a new benchmark, all the existing NRS implementations will automatically work with it. Developers of custom libraries can use the NRS interface to implement their specific functions. There is an opportunity for developing libraries for statistics, artificial intelligence, or digital signal processing. Currently, there are some statistics and scientific methods implemented in the library. For a digital signal processing library, the _complex_ construction with all its operations, FFT and IFFT are already implemented in the NRS Library. With time, the NRS interface might support more operations, but the current ones will always remain. ## 3 Number Representation Systems The library implements the NRSs in Table 1. NaturalNumber influences all the other NRSs.
Figure 1: Architecture of the NRS Library

In the current version of the library, NaturalNumber takes advantage of Scala.BigInt, improving performance and code readability. FixedNaturalNumber(n, r) uses NaturalNumber to keep the value. Most of the operations use NaturalNumber in the background and the result is converted such that it uses exactly \(n\) bits. The results that need more bits are considered Not Representable (NR). IntegerNumber uses NaturalNumber to keep the absolute value and a boolean variable for the sign. For IntegerNumber, Euclidean division was chosen because of its mathematical properties. FixedIntegerNumber(n, r) uses IntegerNumber as the value keeper. If the value uses more bits than the given size, the number becomes NR. The problem with the fractional system is that it can overflow easily and there are multiple ways to represent the same value. The advantage of the fractional system is that it can represent the entire rational number set \(\mathbb{Q}\) in infinite precision. The overflow problem has a partial solution in computing the greatest common divisor of the numerator and denominator and dividing both by its value. This solution does not solve the entire problem and adds considerable computation time. An extension of FractionalNumber(n, m, r) proposed to solve the overflow problem is to divide both the numerator and denominator by two (shifting right by one) when one of them exceeds the given size. FractionalNumber(n, m, r) uses RationalNumber to keep its value. _Fixed point_ representation is similar to the _2's complement_ integer NRS, but it has an attribute called _binary point_. In simple terms, the value given by the 2's complement integer NRS is divided by two to the power of the value of the binary point. This system has an overflow and underflow problem.
Its range of values is smaller than that of an integer NRS.

\begin{table} \begin{tabular}{|l|l|} \hline **NRS** & **Description** \\ \hline \hline NaturalNumber & infinite precision without rounding natural number system \\ \hline FixedNaturalNumber(size, r) & fixed precision with rounding natural number system, where \(size\) is the bit-width and \(r\) is the type of rounding \\ \hline IntegerNumber & infinite precision with no rounding sign-magnitude integer number system \\ \hline FixedIntegerNumber(size, r) & fixed precision with rounding integer number system, where \(size\) is the bit-width and \(r\) is the rounding \\ \hline RationalNumber & infinite precision with no rounding fractional system \\ \hline FractionalNumber(n, m, r) & fixed precision with rounding fractional system where \(n\) is the size of the numerator, \(m\) is the size of the denominator, and \(r\) is the type of rounding used \\ \hline FixedPoint(\(\infty\)) & infinite precision with no rounding fixed point system \\ \hline FixedPoint(is, fs, r) & fixed precision with rounding fixed point system, where \(is\) represents the integer size, \(fs\) is the binary point value (fraction size), and \(r\) is the type of rounding used \\ \hline FloatingPoint(\(\infty\)) & infinite precision with no rounding floating-point number system \\ \hline FloatingPoint(fs) & floating-point system with infinite precision exponent, no rounding, and with \(fs+1\) (fraction size) bits of mantissa (most significant bit always set) \\ \hline FixedFloatingPoint(es, fs, r) & finite precision with rounding floating-point system, where \(es\) is the exponent size (exponent in bias form), \(fs\) is the fraction size, and \(r\) is the type of rounding \\ \hline IEEE754(es, fs, r) & fixed precision with rounding IEEE754 system, where \(es\) is the exponent size, \(fs\) is the fraction size, and \(r\) is the type of rounding \\ \hline TaperedFloatingPoint(size) & tapered floating-point system with infinite precision exponent, no rounding, and with \(size\) bits of mantissa \\ \hline Morris(size, g, r) & fixed precision with rounding Morris tapered floating-point NRS, where \(size\) is the bit-width, \(g\) is the size of the exponent size, and \(r\) is the type of rounding \\ \hline Posit(size, es, r) & fixed precision with rounding posit NRS, where \(size\) is the bit-width, \(es\) is the exponent size, and \(r\) is the type of rounding \\ \hline \end{tabular} \end{table} Table 1: NRSs implemented by the library

FixedPoint(\(\infty\)) keeps its value as a RationalNumber. The only requirement for this NRS is that the denominator needs to be a power of two. FixedPoint(is, fs, r) takes another approach by keeping its value as an IntegerNumber. In a floating-point system, a number is represented as \((-1)^{sign}\times 2^{exponent}\times 1.f\). In infinite precision, \(1.f\) can be seen as a FixedPoint(\(\infty\)) with values inside the \([1,2)\) interval. The first bit represents the sign of the number, the next \(es\) (exponent size) bits represent the exponent (usually as an integer in bias form) and the remaining \(fs\) (fraction size) bits represent the fraction bits (usually, there is a hidden bit with the value 1 which is the most significant bit). The value is given by \((-1)^{sign}\times 2^{exponent}\times(1+\frac{f}{2^{fs}})\), where \(f\) represents the value of the fraction bits without the hidden bit. A problem with the floating-point system is that 0 cannot be represented. A solution to this is that when all bits except the sign bit are 0, the number is 0.
This creates the problem of having both \(+0\) and \(-0\). Given its infinite precision, FloatingPoint(\(\infty\)) does not have the concept of \(\infty\). FloatingPoint(fs) incorporates the concept of \(+\infty\) and \(-\infty\). Other NRSs are derived from it, with different rules for the range of the exponent, underflow, overflow, rounding, and binary representation. One of these NRSs is FixedFloatingPoint(es, fs, r). FixedFloatingPoint(es, fs, r) rules are: (i) in the case of underflow, the value is round to zero when the exponent is smaller than the minimum exponent and the rounding rule does not change this, (ii) in the case of overflow, the value goes to \(\infty\) if the exponent value is greater than the maximum exponent, (iii) the exponent is in bias form. To represent \(\infty\), all the bits except the sign bit are 1. The second floating-point system is the standard called IEEE754 [28]. This NRS introduces special cases for the smallest and biggest exponent values. When the exponent has the minimum value, the hidden bit is zero and the numbers are called subnormals, except for \(+0\) and \(-0\). The value of subnormals is given by \((-1)^{sign}\times 2^{\text{-bias}+1}\times(0+\frac{f}{2^{fs}})\), and the process is called gradual underflow. For maximum exponent value, different bit strings for fraction and sign bit can represent _qNaN_ (quiet Not A Number - where the first bit of the fraction is 1), _sNaN_ (signal NaN - where the first bit of the fraction is 0, but there is another bit in the fraction different from 0), \(+\infty\) (all fraction bits are 0), and \(-\infty\) (the same as \(+\infty\), only that the sign is negative). In our library, FloatingPoint(fs) does not have a binary representation, but it is used for implementing other NRSs. It is a super-class NRS. It has a boolean value for the sign, an IntegerNumber exponent, a NaturalNumber mantissa, some bits used for rounding and the fraction size value. It implements all the operations with the scope of having a mantissa in the range \([1,2)\) by making sure that every operation produces at least \(fs+1\) bits of mantissa with the most significant one having the value 1, except for the case when the result is \(+0\) or \(-0\) and the mantissa is 0. All the additional bits produced by the operation are appended to a rest bits list. This super-class is used further by FixedFloatingPoint(es, fs, r) and IEEE754(es, fs, r). FixedFloatingPoint(es, fs, r) and IEEE754(es, fs, r) are doing the operations using FloatingPoint(fs) and then verifying the results with their rules for underflow, overflow, and binary representation. Specifically, in the case of IEEE754(es, fs, r), if the exponent is smaller or equal compared to the minimum exponent value the number might be a subnormal number. This is tested by the difference between the subnormal exponent and the exponent value. If it is not a subnormal number, it underflows to 0. Beside issues such as multiple representations for zero, subnormal numbers, and too many bit representations for NaN, IEEE754 might have an oversized or undersized exponent for a given problem. Morris observed this and introduced tapered floating-point [20], adding an extra field representing the exponent size. This means that the exponent size and the fraction size are dynamically computed. A Morris floating-point system is determined by the bit-width and the size of the exponent size denoted by \(g\). 
The first \(g\) bits represent the \(G\) value which is used to compute the exponent size as \(es=G+1\). The next bit is the exponent sign bit followed by \(es\) bits that represent the absolute value of the exponent. The next bit is the fraction sign, and the remaining bits are considered the fraction bits. The hidden bit is always 1. The final value is computed as \(2^{\textit{exponent}}\times(-1)^{\textit{fraction}\;\textit{sign}}\times(1+ \frac{f}{2^{fs}})\), where \(\textit{exponent}=(-1)^{\textit{exponent}\;\textit{sign}}\times\textit{exponentBinaryValue}\) and \(\textit{exponentBinaryValue}\) is a natural number, not in bias form. Zero is represented by all bits 0 and error cases (NaN) are represented when all the bits are 1. Small numbers have a better precision because more fraction bits are allocated to represent them. In most cases, the dynamic range of a Morris NRS is bigger compared to IEEE754 of the same size, for the obvious reason that more bits can be used for the exponent. However, these NRSs still have the issue of multiple representations for the same value. Similar to FloatingPoint(fs), we created TaperedFloatingPoint(size) in our library to help with implementing all the operations for tapered floating-point NRSs. In this case, the fraction size is not known so the system uses the bit-width as the exact fraction size. The fraction is in the range \([1,2)\) and it has a hidden bit with value 1. The TaperedFloatingPoint(size) NRS does not contain \(\infty\). Every NRS derived from it needs to add rules for underflow, overflow, rounding, and binary representation. In trying to solve the problems of current floating-point systems, Gustafson proposed posit [10] (which is another type of TaperedFloatingPoint(size)). A posit NRS is determined by its total size and exponent size (\(es\)). The first bit in a posit s-layer representation is the sign bit. The concept for negative numbers is similar to 2's complement: all the bits are negated, and one is added to the value. Next, it is the regime field which is dynamic and uses unary arithmetic representation. If the regime starts with a bit 1, then it is a positive regime and the consecutive 1s are counted until a 0 is found or the end of the representation is reached. The value of the regime is NoC1 \(-\)1, where NoC1 is number of consecutive 1s. Otherwise, if the regime starts with a 0, then it is a negative regime and the consecutive 0s are counted until the first bit of 1 is found or the end of the representation is reached. The regime value in this case is \(-1\times\textit{NoC0}\), where _NoC0_ is the number of consecutive 0s. After the regime bits, the next exponent size (\(es\)) bits represent the exponent value in a base 2 natural number NRS, and the remaining bits are considered fraction bits. The hidden bit is always \(1\). The final value of a posit is given by \((-1)^{sign}\times 2^{regime\times 2^{es}+exponentBinaryValue}\times(1+\frac{f}{2^{ fz}})\). There are two special cases: for zero, when all the bits are \(0\), and for \(NaR\) (Not a Real - positive infinity and negative infinity), when first bit is \(1\) and the others are \(0\). Posit solves the problem of a value having multiple representations. This is a strong propriety for an NRS. Like Morris tapered floating-point system, posit has better precision for small number creating a posit "golden zone" where all the operations have a better accuracy than other NRSs. 
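As a concreteness check on the posit decoding rules just described, the following Python sketch converts a posit bit pattern into its value. It is our own simplified reading of the format (2's-complement negation of negative patterns, unary regime, \(es\) exponent bits, hidden fraction bit) and not the library's Scala implementation; rounding plays no role here since we only decode.

```python
def decode_posit(bits, size, es):
    """Decode a posit bit pattern stored in an int of `size` bits."""
    mask = (1 << size) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (size - 1):                 # 1 followed by zeros: NaR
        return float("nan")
    sign = bits >> (size - 1)
    if sign:                                    # negatives are stored in 2's complement
        bits = (-bits) & mask
    body = (bits << 1) & mask                   # drop the sign bit, keep alignment
    r0 = body >> (size - 1)
    run, probe = 0, body
    while run < size - 1 and (probe >> (size - 1)) == r0:
        run += 1
        probe = (probe << 1) & mask
    regime = run - 1 if r0 == 1 else -run
    remaining = max(size - (1 + run + 1), 0)    # bits left after sign, regime, terminator
    tail = bits & ((1 << remaining) - 1)
    e_bits = min(es, remaining)                 # missing exponent bits are taken as 0
    exponent = ((tail >> (remaining - e_bits)) << (es - e_bits)) if e_bits else 0
    fs = remaining - e_bits
    frac = tail & ((1 << fs) - 1)
    value = 2.0 ** (regime * 2 ** es + exponent) * (1 + frac / 2 ** fs)
    return -value if sign else value

print(decode_posit(0x4000, size=16, es=2))      # 1.0
print(decode_posit(0x0001, size=16, es=2))      # minpos = 2**-56
```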
In our library, Posit(n, es, r) underflows to the minimum value that is not \(0\) and overflows to the maximum value but not \(\infty\). In contrast, Morris(n, g, r) underflows to \(0\) and overflows to NR. ## 4 New NRS Based on Moris Tapered Floating-point In this section, we introduce the three new representations based on Morris tapered floating-point with a hidden exponent bit. ### MorrisHEB(size, g, r) The tapered floating-point introduced by Morris in [20] seems a good concept. Its utilization was shown under the posit system proposed by Gustafson [10]. The major problem is the multiple ways of representing the same number. A solution for this is in borrowing the concept of hidden bit from mantissa. The \(g\) field not only represents the \(G\) value which dictates the exponent size but also the position of the most significant bit set in the exponent. If the value of the exponent size is kept as \(G+1\), then the minimum absolute value of the exponent is \(2\) when \(G=0\). The exponent value is \(exponent=exponent\)\(sign\times((1\ll es)+binaryExponent)\). There is a need for having zero as exponent value. A solution for this is to change the formula for exponent size to \(es=G-1\). The exponent is now: \[\text{exponent}=\begin{cases}(-1)^{\text{exponent sign}}\times(2^{es}+\text{ binaryExponent}),&es\neq-1\\ 0,&es=-1.\end{cases} \tag{1}\] This NRS is called MorrisHEB(size, g, r). The next formula is used for computing the value of all the three new NRSs binary representations presented in this section: \[\text{value}=\begin{cases}0,&\text{all bits }0\\ \text{NR},&\text{first bit 1 and the rest }0\text{s}\\ (-1)^{\text{sign}}\times 2^{\text{exponent}}\times(1+\frac{\text{f}}{2^{ \text{fs}}}),&\text{otherwise}\end{cases} \tag{2}\] The differences are in the way \(es\) and \(exponent\) are computed. MorrisHEB(size, g, r) underflows to \(0\), overflows to NR, and uses TaperedFloatingPoint(size) for implementing the operations. The binary representation starts with the sign bit. The next \(g\) bits represent the \(G\) value in natural base 2 format. The exponent sign bit follows the \(g\) field. The next \(e\) bits (\(es=G-1\)) or the next remaining bits (whichever is smaller) represent the binary exponent value in natural base 2 format. If \(es\) is grater than the remaining bits, the remaining bits represent the most significant bits of the binary exponent value. Te remaining least significant bits of the binary exponent value will be considered 0. After taking the exponent bits, the remaining bits are fraction bits and their count represents the fraction size. In summary, the binary format is: \[s_{f}G_{\text{g-1}}G_{\text{g-2}}...G_{0\text{$S_{e}$}}c_{\text{cs-1}}c_{\text{ cs-2}}...c_{0}f_{\text{fs-1}}f_{\text{fs-2}}...f_{0} \tag{3}\] ### MorrisBiasHEB(size, g, r) One might argue that the problem of multiple representations is still not solved because even the exponent may have multiple values (for \(es=-1\) the exponent sign does not matter). The problem stems from having a bit dedicated to the exponent sign. This is already solved in IEEE754 by using a bias value. A bias value \(g\) is proposed. The exponent sign is the sign of \(G\) and the exponent size is \(es=|G|-1\). Another issue with Morris and MorrisHEB representations is that they do not have an order in binary form. A solution for this is to have the bits of the exponent negated when \(G\) is negative. This makes it easy to implement a hardware compare unit. 
The NRS with these features is called MorrisBiasHEB(size, g, r), where the exponent is: \[\text{exponent}=\begin{cases}\text{signum(G)}\times(2^{es}+\text{binary Exponent}),&es\neq-1\\ 0,&es=-1.\end{cases} \tag{4}\] MorrisBiasHEB(size, g, r) underflows to 0, overflows to NR, and uses TaperedFloatingPoint(size) for implementing the operations. The binary representation starts with the sign bit. The next \(g\) bits represent the \(G\) value in bias format with \(\textit{bias}=2^{g-1}-1\). This means that \(G=\textit{binary}\;\textit{G-bias}\). The next \(es\) bits (\(es=|G|-1\)) or the next remaining bits (whichever is smaller) represent the exponent in natural base 2 format, if the signum(G) is 1. Otherwise, they need to be negated an the results is the binary exponent value. If \(es\) is grater than the number of the remaining bits, the remaining bits represent the most significant bits of the binary exponent value. The remaining least significant bits of the binary exponent value are considered 0. After taking the exponent bits, the remaining bits are fraction bits and their count is the fraction size. In summary, the binary format is: \[sG_{\text{g-1}}G_{\text{g-2}}...G_{0}c_{\text{cs-1}}c_{\text{cs-2}}...c_{0}f_{ \text{fs-1}}f_{\text{fs-2}}...f_{0} \tag{5}\] ### MorrisUnaryHEB(size, r) Can MorrisBiasHEB(size, g, r) be further improved? From the last standard of posit [10], we are inspired by the choice for fixing the exponent size to make it dependent only on the size and making the conversion between different sizes easier. This can be adapted using an unary representation for the \(g\) value (similar to the _regime_ in posit). There is also a need for the exponent size value of \(-1\), so the formula for the exponent size is: \[\text{exponent size}=\begin{cases}-k-1,&k<0\\ k-1,&k\geq 0.\end{cases} \tag{6}\] where \(k\) is the regime. MorrisUnaryHEB(size, r) underflows to 0, overflows to NR, and uses TaperedFloatingPoint(size) for implementing its operations. The binary representation starts with the sign bit. The next bit represents the first regime bit \(r_{0}\). The next consecutive bits with the same value as \(r_{0}\) are considered regime bits. The next bit after them, if it exists, has the negated value of \(r_{0}\) and it is also considered as part of the regime. The regime \(k\) is computed as: \[k=\begin{cases}-\text{NoC0},&r_{0}=0\\ \text{NoC1}-1,&r_{0}=1.\end{cases} \tag{7}\] The next \(es\) bits or the next remaining bits (whichever is smaller) represent the exponent value in natural base 2 format, if the signum(k) is 1. Otherwise, they need to be negated and the result is the binary exponent value. If \(es\) is grater than the remaining bits, the remaining bits represent the most significant bits of the binary exponent value. The remaining least significant bits of the binary exponent value are considered 0. The exponent is computed as: \[\text{exponent}=\begin{cases}\text{signum(k)}\times(2^{es}+\text{binary Exponent}),&es\neq-1\\ 0,&es=-1.\end{cases} \tag{8}\] After taking the exponent bits, the remaining bits are fraction bits and their count is the fraction size. In summary, the binary format of MorrisUnaryHEB(size, r) is: \[\texttt{s}r_{0}\texttt{r}_{1}...\texttt{r}_{\text{rs-2}}\overline{\texttt{ r}_{\text{rs-1}}}e_{\texttt{cs-1}}e_{\texttt{cs-2}}...e_{0}\texttt{f}_{ \text{fs-1}}f_{\text{fs-2}}...\texttt{f}_{0} \tag{9}\] ## 5 Evaluation In this section, we evaluate the three new proposed NRSs in addition to well-know NRSs from the literature. 
In the first subsection, we present the NRSs under evaluation and their characteristics, such as minimum absolute value, maximum absolute value, dynamic range, and density of numbers on a logarithmic scale. The second subsection presents the decimal accuracy of the unary operations for the tested NRSs with CDF graphs. In the third subsection, the color maps of binary operations are presented. The last subsection goes through some famous literature benchmarks. The following notation is used for rounding in this section: \(RZ\) for rounding towards zero and \(RE\) for rounding to nearest, ties to even. The values presented in this section are usually truncated to three digits after the decimal point. ### NRSs Under Evaluation and Their Characteristics Table 2 presents the NRSs under evaluation with their minimum absolute value, maximum absolute value, and dynamic range when the total size is 16 bits. We compare the three new NRSs based on the Morris tapered format with hidden exponent bit with the default Morris representation, fixed point, fixed floating point, IEEE754, and posit. Tapered floating-point NRSs have a higher dynamic range and can represent higher and lower absolute values compared to IEEE754 and fixed point. On the other hand, the difference between consecutive values may be one order of magnitude. Figure 2 presents the count of unique absolute values for 16-bit NRSs on a logarithmic scale. The added value of the hidden exponent bit can be seen in the increased count of numbers for Morris-derived NRSs. An interesting result is the MorrisUnaryHEB(16, RE) "golden zone": it has 30,201 unique absolute values in the interval \((10^{-3},10^{3})\) versus 26,587 for Posit(16, 2, RE). This, together with the higher dynamic range, makes it a good competitor for posit in deep neural networks. We shall evaluate this in future work. The difference between the underflow and overflow rules of IEEE754(es, fs, r) and FixedFloatingPoint(es, fs, r) can be seen in the gradual underflow for IEEE754(es, fs, r) and the additional higher values for FixedFloatingPoint(es, fs, r). The usage of a positive regime value for zero can be seen in the unequal distribution of the values of MorrisUnaryHEB(16, RE) and Posit(16, 2, RE) in Figure 2.

\begin{table} \begin{tabular}{|l|r|r|r|r|r|} \hline **NRS** & _Min(abs(X))_ & _Max(abs(X))_ & Dynamic Range & \(2^{\alpha}\) & \(3^{\alpha}\) \\ \hline FixedFloatingPoint(5, 10, RE) & \(3.054\times 10^{-4}\) & 130944 & 9.6322 & 130880 & 130816 \\ FixedPoint(8, 8, RE) & \(0.003\) & 127.996 & 4.515 & 127.992 & 127.988 \\ half-IEEE754/IEEE754(5, 10, RE) & \(5.960\times 10^{-8}\) & 65504 & 12.040 & 65472 & 65440 \\ Posit(16, 2, RE) & \(1.387\times 10^{-171}\) & \(72.057\times 10^{15}\) & 33.715 & \(45.035\times 10^{14}\) & \(11.258\times 10^{14}\) \\ Morris(16, 4, RZ) & \(9.207\times 10^{-19710}\) & \(1.086\times 10^{19700}\) & 39418.071 & \(5.887\times 10^{19689}\) & \(3.191\times 10^{19670}\) \\ MorrisHEB(16, 4, RZ) & \(4.630\times 10^{-980}\) & \(2.159\times 10^{988}\) & 19718.668 & \(3.295\times 10^{988}\) & \(5.028\times 10^{988}\) \\ MorrisBiasHEB(16, 4, RE) & \(6.061\times 10^{-29}\) & \(1.121\times 10^{77}\) & 115.267 & \(1.085\times 10^{77}\) & \(1.049\times 10^{77}\) \\ MorrisUnaryHEB(16, RE) & \(9.168\times 10^{-2467}\) & \(1.090\times 10^{466}\) & 4932.075 & \(1.044\times 10^{223}\) & \(5.809\times 10^{224}\) \\ \hline \end{tabular} \end{table} Table 2: 16-bit NRSs Dynamic Range

Figure 2: Distribution of unique absolute values
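The unique-value counts quoted above can in principle be reproduced by exhaustively decoding every 16-bit pattern. The Python sketch below decodes a MorrisUnaryHEB(size) pattern following the rules of Section 4; it is our own reading of equations (6)-(9) (in particular, of how missing exponent bits are padded), not the library's Scala code, so the final count should be treated as approximate.

```python
from fractions import Fraction

def decode_morris_unary_heb(bits, size):
    """Decode a MorrisUnaryHEB(size) bit pattern; returns a Fraction, or None for NR."""
    mask = (1 << size) - 1
    bits &= mask
    if bits == 0:
        return Fraction(0)                      # all bits 0
    if bits == 1 << (size - 1):
        return None                             # first bit 1, rest 0: NR
    sign = bits >> (size - 1)
    body = [(bits >> (size - 1 - i)) & 1 for i in range(1, size)]   # bits after the sign
    r0, run = body[0], 1
    while run < len(body) and body[run] == r0:
        run += 1
    used = run + 1 if run < len(body) else run  # regime run plus terminating bit, if any
    k = (run - 1) if r0 == 1 else -run          # regime value, equation (7)
    es = (-k - 1) if k < 0 else (k - 1)         # exponent size, equation (6)
    rest = body[used:]
    if es == -1:                                # k = 0: the exponent is exactly zero
        exponent, frac_bits = 0, rest
    else:
        stored = rest[:min(es, len(rest))]
        if k < 0:                               # negative regime: stored exponent bits negated
            stored = [1 - b for b in stored]
        e_field = stored + [0] * (es - len(stored))   # assumed: missing low bits are 0
        binary_exponent = int("".join(map(str, e_field)), 2) if es > 0 else 0
        exponent = (1 if k > 0 else -1) * (2 ** es + binary_exponent)  # equation (8)
        frac_bits = rest[min(es, len(rest)):]
    fs = len(frac_bits)
    frac = int("".join(map(str, frac_bits)), 2) if fs else 0
    value = Fraction(2) ** exponent * (1 + Fraction(frac, 2 ** fs))    # equation (2)
    return -value if sign else value

# Count the unique absolute values of 16-bit patterns falling in (10^-3, 10^3).
values = {abs(v) for b in range(1 << 16) if (v := decode_morris_unary_heb(b, 16)) is not None}
print(sum(1 for v in values if Fraction(1, 1000) < v < 1000))
```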
### Unary Operations Figure 3 presents the CDF of decimal accuracy for the square root, natural logarithm, inverse, exponential, sine, and cube root operations. The x-axis represents how many accurate digits there are in the result. For all the Taylor series functions \((\ln(x),\sin(x),e^{x})\), the decimal accuracy reference is the RationalNumber result after 30 iterations. For a decimal accuracy of at least three digits, MorrisUnaryHEB(16, RE) is the best NRS. This is because of its unique absolute values. Note that the exponential is the only function that increases the magnitude of the result.

Figure 3: CDF of Unary Operations

### Binary Operations For binary operations, 12-bit NRSs were chosen because 8 bits hold too little information and 16 bits take too much storage space to keep all the values. We present the results as color maps, where black represents an accuracy of 10 or more digits, while white represents zero or less. The color map of addition is presented in Figure 4. The subtraction is similar to addition. This plot beautifully shows why FixedPoint(is, fs, r) is the perfect NRS for accumulators if the range of the results is known. The white color space represents the overflow area for positive and negative values. The similarities between FixedFloatingPoint(es, fs, r) and IEEE754(es, fs, r) are obvious, but one can also observe the effect of the gradual underflow in IEEE754(es, fs, r). The black border and the plus lines for IEEE754(es, fs, r) represent NaNs (NaN plus anything else results in a NaN). The maps for Morris(size, g, r) and MorrisHEB(size, g, r) are different from the other maps because the binary representations do not represent ordered values. MorrisBiasHEB(size, g, r) looks like a mix between FixedFloatingPoint(es, fs, r) and Posit(size, es, r): it exhibits tapered floating-point features by having an inversely proportional relationship between accuracy and absolute values. That is, when the absolute values of the operands increase, the decimal accuracy decreases. Posit(size, es, r) has a more uniform distribution of the accuracy. Note that Posit(size, es, r) does not use sign magnitude but uses 2's complement for negative numbers, so its map symmetry is different from the other maps. The results of decimal accuracy for multiplication are presented in Figure 5. The overflow problem of FixedPoint(is, fs, r) is obvious, while gradual underflow helps IEEE754(es, fs, r). The black borders of IEEE754(es, fs, r) are from the NaN values. MorrisUnaryHEB(size, r), Posit(size, es, r), and MorrisBiasHEB(size, g, r) exhibit their tapered floating-point properties in waves (or bands) of accuracy. Comparing MorrisUnaryHEB(size, r) and Posit(size, es, r), the rule for underflow can be observed as the white band in the color map of MorrisUnaryHEB(size, r). These results suggest that the Posit(size, es, r) rule for underflow might be the best one to be implemented in an NRS. The results of decimal accuracy for division are similar to the ones for multiplication and are omitted due to space constraints.

Figure 4: Color Maps for Addition

Figure 5: Color Maps for Multiplication

Table 3 presents the percentage of exact results, the average decimal accuracy for inexact results, and the number of (thousands of) operations per second (Kops). From IEEE754(4, 7, RE), one should remove 12.1% of the results because they represent NaNs.
\begin{table} \begin{tabular}{|l|r|r|r|r|r|r|r|r|r|} \hline \multirow{2}{*}{**NRS**} & \multicolumn{3}{c|}{Exact} & \multicolumn{3}{c|}{Average Accuracy} & \multicolumn{3}{c|}{Kops} \\ \cline{2-10} & ADD & DIV & MUL & ADD & DIV & MUL & ADD & DIV & MUL \\ \hline FixedFloatingPoint(4, 7, RE) & 16.4\% & 2.4\% & 2.2\% & 3.3 & 2.4 & 2.4 & 191 & 374 & 278 \\ FixedPoint(6, 6, RE) & 75.0\% & 0.9\% & 0.9\% & 0.0 & 2.8 & 0.5 & 6535 & 2551 & 4117 \\ IEEE754(4, 7, RE) (12.1\% NaNs) & 28.6\% & 14.3\% & 14.4\% & 3.2 & 2.7 & 2.7 & 362 & 370 & 326 \\ Posit(12, 2, RE) & 12.4\% & 4.2\% & 4.2\% & 2.8 & 4.0 & 2.8 & 150 & 206 & 253 \\ Morris(12, 3, RZ) & 20.9\% & 22.1\% & 26.4\% & 4.9 & 1.5 & 1.5 & 145 & 256 & 347 \\ MorrisHEB(12, 3, RZ) & 14.2\% & 8.9\% & 8.8\% & 5.4 & 1.9 & 1.8 & 185 & 301 & 385 \\ MorrisBiasHEB(12, 3, RE) & 20.2\% & 2.2\% & 2.2\% & 3.4 & 2.7 & 2.9 & 148 & 221 & 261 \\ MorrisUnaryHEB(12, RE) & 37.6\% & 1.9\% & 1.9\% & 4.2 & 3.0 & 3.0 & 142 & 219 & 263 \\ \hline \end{tabular} \end{table} Table 3: Binary Operations (ADD, DIV, MUL) Results

The interesting results in Table 3 are: (i) the high number of exact results for MorrisUnaryHEB(12, RE), (ii) the relatively good average decimal accuracy on inexact results for MorrisUnaryHEB(12, RE) and MorrisBiasHEB(12, 3, RE) on all operations, and (iii) the relatively low percentage of exact results for Posit(12, 2, RE) (this is because of the increased exponent size). ### Literature Benchmarks In Table 4, we summarize the results of the evaluations proposed by Gustafson in [9]. The proposed evaluations are:

* John Wallis Product: \(2\times\prod_{i=1}^{n}\frac{(2\times i)^{2}}{(2\times i-1)\times(2\times i+1)}\) for \(n=30\),
* Kahan series: \(u_{i+2}=111-\frac{1130}{u_{i+1}}+\frac{3000}{u_{i}\times u_{i+1}}\) for \(u_{30}\),
* Jean-Michel Muller: \(E(0)=1,E(z)=\frac{e^{z}-1}{z},Q(x)=|x-\sqrt{x^{2}+1}|-\frac{1}{x+\sqrt{x^{2}+1}},H(x)=E((Q(x))^{2})\) for \(H(15),H(16),H(17),H(9999)\),
* Siegfried Rump: \(333.74\times y^{6}+x^{2}\times(11\times x^{2}\times y^{2}-y^{6}-121\times y^{4}-2)+5.5\times y^{8}+\frac{x}{2\times y}\) for \(x=77517\) and \(y=33096\),
* Decimal accuracy for \(r_{1}\) from the quadratic formula for \(a=3,b=100,c=2\),
* David Bailey's system of equations: \(0.25510582\times x+0.52746197\times y=0.79981812,\ 0.80143857\times x+1.65707065\times y=2.51270273\), solved with Cramer's rule.

Note that none of the NRSs passes all the evaluations. The problem is with the limitations of finite representations. In Table 5, we present the results of multiple benchmarks from the literature [4, 6, 10]. These benchmarks are:

* the thin triangle area for \(a=7,c=b=\frac{7+2^{-25}}{2}\),
* the formula \(x=(\frac{27/10-e}{\pi-(\sqrt{2}+\sqrt{3})})^{67/16}\),
* the fraction \(\frac{x^{n}}{n!}\) for \(x=7,n=20\) and \(x=25,n=30\),
* the Planck constant \(h=6.626070150\times 10^{-34}\),
* the Avogadro number \(L=6.02214076\times 10^{23}\),
* the speed of light \(c=299792458\),
* the elementary charge \(e=1.602176634\times 10^{-19}\),
* the Boltzmann constant \(k=1.380649\times 10^{-23}\).

The values in Table 5 represent the decimal accuracy of the results compared to the correct result. The first two benchmarks are favorable to Posit(size, es, r) while the last six are favorable to IEEE754(es, fs, r). Morris and its derived NRSs exhibit results that are close to the best NRS for each benchmark. MorrisBiasHEB(size, g, r) has good results for the entire spectrum of benchmarks.
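To see why the Kahan series in the list above is a stress test, the sketch below iterates the recurrence in IEEE double precision and with exact rationals. The starting values \(u_{0}=2\), \(u_{1}=-4\) are the ones commonly used with this recurrence and are an assumption here, since they are not stated above; with them the exact sequence converges to 6, while rounding error drags any fixed-precision evaluation toward the spurious fixed point 100, which is consistent with the RationalNumber row of Table 4 sitting near 6 and the 32-bit formats sitting near 100.

```python
from fractions import Fraction

def kahan_series(u0, u1, n, number=float):
    """Iterate u_{i+2} = 111 - 1130/u_{i+1} + 3000/(u_i * u_{i+1}) up to u_n."""
    u_prev, u_curr = number(u0), number(u1)
    for _ in range(n - 1):
        u_prev, u_curr = u_curr, number(111) - number(1130) / u_curr + number(3000) / (u_prev * u_curr)
    return u_curr

print(kahan_series(2, -4, 30, float))              # double precision drifts toward 100
print(float(kahan_series(2, -4, 30, Fraction)))    # exact arithmetic stays near 6
```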
\begin{table} \begin{tabular}{|l|r|r|r|r|r|r|} \hline **NRS** & John Wallis & Kahan \(u_{30}\) & Jean Micheal Muller & Siegfried Rump & r\({}_{1}\) DA & David Bailey \\ \hline \multicolumn{7}{|c|}{32-bit NRSs} \\ \hline FixedFloatingPoint(8, 23, RE) & 3.091 & 100 & \((0,0,0,0)\) & \(-63.382\times 10^{28}\) & 5.612 & \((NR,NR)\) \\ FixedPoint(16, 16, RE) & 3.091 & 100 & \((1,1,1,NR)\) & \(NR\) & 3.787 & \((NR,NR)\) \\ IEEE754(8, 23, RE) & 3.091 & 100 & \((0,0,0,0)\) & \(-1.901\times 10^{30}\) & 5.612 & \((NR,NR)\) \\ Posit(32, 2, RE) & 3.091 & 100 & \((0,0,0,0)\) & \(1.172\) & 5.996 & \((-4,2)\) \\ Morris(32, 4, HZ) & 3.091 & 99.999 & \((0,0,0,0.995)\) & \(20.282\times 10^{30}\) & 4.599 & \((0,1)\) \\ MorrisHEB(32, 4, RE) & 3.091 & 99.999 & \((0,0,0,0.989)\) & \(15.211\times 10^{30}\) & 4.945 & \((2,1)\) \\ MorrisBiasHEB(32, 4, RE) & 3.091 & 100 & \((0,0,0,0)\) & \(-25.353\times 10^{28}\) & 5.612 & \((1,0.5)\) \\ MorrisUnaryHEB(32, RE) & 3.091 & 100 & \((0,0,0,0.999)\) & 1.172 & 5.612 & \((2,0)\) \\ \hline RationalNumber & 3.091 & 6.004 & \((1,1,1,1)\) & \(-0.827\) & 1 & \((-1,2)\) \\ \hline \end{tabular} \end{table} Table 4: Benchmarks in [9] ## 6 Conclusion In this paper, (i) we presented a Scala library that makes it easy to add, test, and fine-tune number representation systems (NRSs), (ii) we introduced three new NRSs based on Morris tapered floating-point, and (iii) we analyzed these three proposed NRSs together with well-known NRSs such as IEEE 754 floating-point and posit. By adding the hidden exponent bit to Morris tapered floating-point in three different forms, the resulting NRSs became competitors for IEEE754 and posit. MorrisBiasHEB(size, g, r) exhibits the best results on literature benchmarks on 32 and 64 bits when compared to the other NRSs. On the other hand, MorrisUnaryHEB(size, r) is a great candidate for machine learning computations due to its "golden zone" population, dynamic range, percent of exact results on addition and average decimal accuracy for inexact results on multiplication. Our library exhibits a performance of around 200 Kops which is good enough for testing and evaluating NRSs, but not enough for real-world applications. In future works, the library will be integrated with the Aparapi library8 and tested on GPU, and used for machine learning models with Spark. We also plan to increase the number of benchmarks. Footnote 8: [https://aparapi.com/](https://aparapi.com/) ## Acknowledgment Stefan-Dan Ciocirlan is partly supported by the Bildefender's University PhD Grants Program 2019-2022 and by the Google IoT/Wearables Student Grants 2022. Dumitrel Loghin is partly supported by the Ministry of Education of Singapore's Academic Research Fund Tier 1 (grant 251RES2106).
2310.18842
Optimization of Nonlinear Turbulence in Stellarators
We present new stellarator equilibria that have been optimized for reduced turbulent transport using nonlinear gyrokinetic simulations within the optimization loop. The optimization routine involves coupling the pseudo-spectral GPU-native gyrokinetic code GX with the stellarator equilibrium and optimization code DESC. Since using GX allows for fast nonlinear simulations, we directly optimize for reduced nonlinear heat fluxes. To handle the noisy heat flux traces returned by these simulations, we employ the simultaneous perturbation stochastic approximation (SPSA) method that only uses two objective function evaluations for a simple estimate of the gradient. We show several examples that optimize for both reduced heat fluxes and good quasisymmetry as a proxy for low neoclassical transport. Finally, we run full transport simulations using the T3D stellarator transport code to evaluate the changes in the macroscopic profiles.
Patrick Kim, Stefan Buller, Rory Conlin, William Dorland, Daniel W. Dudt, Rahul Gaur, Rogerio Jorge, Egemen Kolemen, Matt Landreman, Noah R. Mandell, Dario Panici
2023-10-28T23:01:55Z
http://arxiv.org/abs/2310.18842v2
# Optimization of Nonlinear Turbulence in Stellarators ###### Abstract We present new stellarator equilibria that have been optimized for reduced turbulent transport using nonlinear gyrokinetic simulations within the optimization loop. The optimization routine involves coupling the pseudo-spectral GPU-native gyrokinetic code GX with the stellarator equilibrium and optimization code DESC. Since using GX allows for fast nonlinear simulations, we directly optimize for reduced nonlinear heat fluxes. To handle the noisy heat flux traces returned by these simulations, we employ the simultaneous perturbation stochastic approximation (SPSA) method that only uses two objective function evaluations for a simple estimate of the gradient. We show several examples that optimize for both reduced heat fluxes and good quasisymmetry as a proxy for low neoclassical transport. Finally, we run full transport simulations using T3D to evaluate the changes in the macroscopic profiles. ## 1 Introduction Stellarators are one of the most promising designs for magnetic confinement fusion as they are inherently steady-state and are less susceptible to current-driven instabilities (Helander, 2014). However, stellarators have historically been plagued by large collisionally-induced losses of particles and energy, associated with cross-field drifts of particles caused by the inhomogeneity and curvature of the magnetic field. In principle, stellarators can be optimized to reduce these collisional losses. In practice, impressive improvements in plasma confinement have been obtained. For example, experiments at the W7-X stellarator have shown greatly reduced neoclassical transport (Beidler _et al._, 2021). Furthermore, advances in stellarator optimization techniques have led to designs with precise quasisymmetry (QS) (Landreman & Paul, 2022; Landreman _et al._, 2022) and quasi-isodynamicity (QI) (Goodman _et al._, 2022). After neoclassical losses are minimized, confinement is limited by the anomalous transport of heat and particles by turbulence. This turbulence is driven by plasma microinstabilities on length scales of the gyroradius. For example, the ion-temperature gradient (ITG) instability is believed to cause ion-temperature clamping in W7-X, preventing heating of ions above 2 keV (Beurskens _et al._, 2021). While there have been recent attempts to optimize stellarators to reduce turbulence-induced transport, they have mainly relied on proxies based solely on the magnetic geometry (Mynick et al., 2010; Proll et al., 2016; Roberg-Clark et al., 2022) or linear simulations (Jorge et al., 2023). However, linear physics may not accurately predict nonlinear saturation mechanisms that ultimately determine the rate of heat and particle loss (McKinney et al., 2019). Unfortunately, using nonlinear analysis for optimization is usually very challenging. Gyrokinetics (Antonsen Jr. & Lane, 1980; Catto, 1978; Frieman and Chen, 1982) is one of the most commonly-used models to study turbulence in magnetic confinement fusion devices, and is also the model used for this paper. Typical nonlinear gyrokinetics simulations usually require hundreds to thousands of CPU hours, making them infeasible to use within an optimization loop. In this work, we demonstrate the ability to reduce turbulent losses by optimizing stellarator configurations using nonlinear turbulence simulations directly rather than relying on proxies. 
In order to run nonlinear simulations inside the optimization loop, we use the new GPU-native gyrokinetic code GX (Mandell et al., 2018, 2022). GX utilizes pseudo-spectral methods in velocity space. GPU acceleration combined with flexible velocity resolution allows for nonlinear GX simulations that only take minutes to run. For this work, we focus on ITG turbulence, which contributes to major energy losses that limit plasma confinement, and so is one of the most important microinstabilities to consider for reactor design (Kotschenreuther et al., 1995; Horton, 1999; Helander et al., 2013). The optimization is performed using the stellarator equilibrium and optimization code DESC (Dudt and Kolemen, 2020; Panici et al., 2023; Conlin et al., 2023; Dudt et al., 2023). DESC also uses pseudo-spectral methods and directly solves the ideal MHD force-balance equation to compute the magnetic equilibrium. Quantities computed from the resulting magnetic fields are then used as inputs for GX. Stochastic optimization methods are used to robustly handle the noisy landscape of turbulent losses. The paper is organized as follows. The stochastic optimization method used in this work is described in Section 2. Results are shown in Section 3, with analysis of potential mechanisms for reduced turbulence explored in Section 4. The T3D transport simulations are shown in Section 5. Finally, the conclusions follow in Section 6. ## 2 Optimization Methods In this optimization routine, we seek to minimize the nonlinear heat flux returned by GX. Specifically, GX returns as output the time-trace of the heat flux (normalized to gyro-Bohm units (Cowley et al., 1991)). An example of some heat traces is shown in Figure 4(a). At the beginning of the simulation, linear growth of the fastest-growing instability dominates. However, eventually nonlinear effects cause the heat flux to decrease and saturate to a statistical steady-state. We use the time-average of this steady-state flux as our heat flux objective \(f_{Q}\). To take the time-average, we take the second half of the time-trace and compute the weighted Birkhoff average \[f_{Q}=\frac{1}{I}\sum_{i=1}^{N}e^{-1/\left[\frac{i}{N}\left(1-\frac{i}{N}\right)\right]}q_{i} \tag{1}\] where the sum is over each point in the trace and \(I=\sum_{i=1}^{N}e^{-1/\left[\frac{i}{N}\left(1-\frac{i}{N}\right)\right]}\) is a normalization factor. This gives greater weight to values in the middle of the trace. Since GX is a local flux-tube gyrokinetic code, each simulation is on a single field line (specified by the field line label \(\alpha\)) on a single surface. Therefore, \(f_{Q}\) is also computed from only a single field line and surface. For all of the optimization examples, we only simulate on the \(\rho=\sqrt{\psi/\psi_{b}}=\sqrt{0.5}\) surface and the \(\alpha=0\) field line (except for the multiple field line examples). However, we will run post-processing simulations across different surfaces and field lines. To run these simulations, GX requires a set of geometric quantities that can be computed from numerical equilibria.
Given that the magnetic field is written in Clebsch form \(\mathbf{B}=\nabla\psi\times\nabla\alpha\), the set of quantities needed are \[\begin{split}\mathcal{G}&=\{B,\mathbf{b}\cdot \nabla z,|\nabla\psi|^{2},|\nabla\alpha|^{2},\nabla\psi\cdot\nabla \alpha,\\ &(\mathbf{B}\times\nabla B)\cdot\nabla\psi,(\mathbf{B}\times \nabla B)\cdot\nabla\alpha,(\mathbf{b}\times\kappa)\cdot\nabla\alpha\},\end{split} \tag{2}\] where \(\mathbf{b}=\mathbf{B}/B\), \(\alpha\) is the straight field line label, \(\kappa\) is the curvature, and \(z\) is some coordinate representing the distance along the field line. For these simulations, the geometric toroidal angle \(\phi\) is used for \(z\). All of these quantities are easily computed using utility functions in DESC. One issue with directly minimizing nonlinear heat fluxes is that their time-traces are often very noisy. Even the resulting time averages are usually very noisy in parameter space. This can be seen in Fig. 1 showing the time-averaged nonlinear heat flux when scanning over the \(Z_{m,n}=Z_{0,-1}\) boundary mode of the initial equilibrium used in this study.

Figure 1: The time-averaged nonlinear heat flux computed by GX when scanning across the \(Z_{0,-1}\) boundary mode.

Optimizers designed for smooth objectives may easily get stuck in local minima and make very little progress. To help with this issue, we use the simultaneous perturbation stochastic approximation (SPSA) method (Spall, 1987) to minimize the heat fluxes. In this algorithm, the \(i^{th}\) component of the gradient at point \(\mathbf{x_{n}}\) is approximated as \[\mathbf{\hat{g}_{n}}\left(\mathbf{x_{n}}\right)_{i}=\frac{f\left(\mathbf{x_{n} }+\mathbf{c_{n}}\right)-f\left(\mathbf{x_{n}}-\mathbf{c_{n}}\right)}{2c_{ni}}, \tag{3}\] where \(\mathbf{c_{n}}\) is a random perturbation vector whose components are sampled from a Bernoulli distribution. Therefore, we are effectively using the finite difference method in all directions at once. Finally, for this method, rather than specifying stopping tolerances, we instead specify a maximum number of iterations. The main feature of the SPSA method is that it only requires two objective function measurements per iteration. This makes it suitable for high-dimensional optimization problems whose objective functions are expensive to compute. SPSA can also robustly handle noisy objective functions, and so has been used for simulation optimization, including Monte-Carlo simulations (Chan et al., 2003). In order to still use automatic differentiation and smooth optimization methods for other objectives like quasisymmetry residuals, we split the optimization routine into two parts. We first minimize the turbulent heat flux using stochastic gradient descent. Next, we minimize the quasisymmetry residuals using a least-squares method that utilizes automatic differentiation. This order is chosen arbitrarily, but the reverse ordering can also work. The two objectives are thus \[f_{1} =f_{Q}^{2}+\left(A-A_{target}\right)^{2} \tag{4}\] \[f_{2} =f_{QS}^{2}+\left(A-A_{target}\right)^{2}, \tag{5}\] where \(f_{Q}\) is the heat flux from \(\mathtt{GX}\), and \(A\) is the aspect ratio. For these optimizations we target an aspect ratio of \(A_{target}=8\). We use the two-term quasisymmetry objective, where a magnetic field is quasisymmetric if the quantity \[C=\frac{\left(\mathbf{B}\times\nabla\psi\right)\cdot\nabla B}{\mathbf{B}\cdot \nabla B} \tag{6}\] is a flux function.
Computationally, we evaluate the equivalent form \[f_{QS}=\left(M-\iota N\right)\left(\mathbf{B}\times\nabla\psi\right)\cdot \nabla B-\left(MG+NI\right)\mathbf{B}\cdot\nabla B, \tag{7}\] at several different flux surfaces and try to minimize the resulting values. For this study, we always choose flux surfaces at \(\rho=0.6,\ 0.8,\text{and}\ 1\). We target quasi-helical (QH) symmetry, so that the helicity is \((M,N)=(1,4)\). The \(\mathtt{GX}\) simulation parameters used in the optimization loop and for post-processing, along with their justification, are in Appendix B. Finally, the optimization routine is performed in stages. Each stage increments the maximum boundary Fourier mode being optimized over. For example in the first stage only boundary modes with modes \(m,n\) satisfying \(|m|\leqslant 1\) and \(|n|\leqslant 1\) are used as optimization variables. In the next stage boundary modes with \(|m|\leqslant 2\) and \(|n|\leqslant 2\) are used. The \(m=0,n=0\) mode is excluded to prevent the major radius from changing. This reduces the number of optimization variables at the beginning of the optimization and warm-starts each successive stage. After each stage, we start the next stage with an equilibrium from the previous stage that had achieved both low heat flux and low quasisymmetry error. This type of method has been used with great success for optimizing quasisymmetry (Landreman and Paul, 2022) and other objectives. ## 3 Results ### Turbulence Optimization As an initial test of our stochastic optimization method, we begin by solely optimizing for turbulence. The optimization routine is the same as described in the previous section, except instead of optimizing for both quasisymmetry and aspect ratio in the second half of each iteration, we only optimize for the target aspect ratio. Usually, combining the turbulence and aspect ratio objectives as is done in the first part is insufficient to reach our target aspect ratio. We have to force the optimizer to take more aggressive steps due to the simulation noise, which makes it more difficult to maintain the desired aspect ratio. For this optimization (and all of the examples in this work), the initial equilibrium is an approximately quasi-helically symmetric equilibrium with an aspect ratio of 8 and 4 field-period Figure 2 shows the normalized nonlinear heat flux across each iteration of the optimization process. Due to the stochastic nature of optimization routine, the initial gradient estimates are very poor, leading to an increase in the heat flux in the first several iterations. However, as the optimization continued, there is eventually a rapid decrease in the heat flux as the gradient approximation begins to track the true gradient. The optimizer continues to steadily decrease the heat flux for most of the remaining iterations. It should be noted that since we specify a maximum number of iterations rather than stopping tolerances, the optimization ends even when the heat flux had increased slightly at the end. A scan of the nonlinear heat fluxes across \(\rho\) and optimized cross-sections of the flux surfaces are shown in Fig. 3. For this simulation (and all of the simulations in this section), the resolution is increased to the values in Table 3. As seen in those plots, some surfaces see moderate to drastic improvements to the nonlinear heat flux. In particular, the \(\rho=0.4\) surfaces has a reduction of about an order of magnitude. However, other surfaces seem much less improvement, such as at \(\rho=0.2\) and \(\rho=0.8\). 
This is not unexpected, as we only ran simulations at the \(\rho=\sqrt{0.5}\) surface during the optimization loop. Interestingly, based off of the surface boundary plots, the cross-sections of the optimized equilibria seem much less strongly-shaped than in the initial equilibrium. Instead, the magnetic axis has significantly more torsion. The contours of \(|\mathbf{B}|\) in Boozer coordinates is plotted in Fig. 4. Surprisingly, the contours seem to resemble those from quasi-isodynamic equilibria despite not including a QI term in the objective functions. It is well-known that having omnigeneous fields is necessary to obtain the maximum-J property for enhanced stabilization against trapped-electron modes (TEM) (Proll _et al._, 2012; Helander _et al._, 2013; Helander, 2014). It's been further theorized and shown in gyrokinetic simulations of W7-X that possessing the maximum-J property can also be beneficial for ITG turbulence as well (Proll _et al._, 2022). Therefore, it is possible that there is some relationship between only optimizing for turbulence and achieving omnigeneity. Investigating this relationship will be the subject of future work. Figure 2: The normalized heat flux across each iteration. The dashed lines represent an increase in the maximum boundary mode number being optimized. ### Combined Turbulence-Quasisymmetry Optimization Next, we include the two-term quasisymmetry objective into the second part of the optimization loop. The final heat flux traces and optimized cross-sections of the flux surfaces are shown in Fig. 5. The cross-sections in Fig. 5 show relatively modest changes in the shape of optimized stellarator. However, the time-trace (simulated at the \(\psi/\psi_{b}=0.5\) surface) in Fig. 5 shows about a factor of 3 decrease in the nonlinear heat flux. To ensure that the nonlinear heat flux was reduced across the entire plasma volume, we again ran simulations at radial locations of \(\rho=0.2\), \(0.4\), \(0.6\), and \(0.8\). The resulting heat fluxes as well as the maximum symmetry breaking residuals across \(\rho\) are plotted in Fig. 6. As seen in the plots, despite only optimizing for the \(\psi/\psi_{b}=0.5\) surface, the Figure 4: The \(|\mathbf{B}|\) contours in Boozer coordinates, which resemble contours of a QI equilibrium. Figure 3: (a) A scan of the nonlinear heat flux for the initial (red) and optimized (blue) equilibria across different flux surfaces. (b) The cross-sections of the magnetic flux surfaces of the initial (red) and optimized (blue) configurations. The star in (a) indicates the \(\rho=\sqrt{0.5}\) surface that was chosen for the optimization loop. heat flux is reduced across the entire plasma volume. With regards to quasisymmetry, for \(\rho<0.5\), the optimized equilibrium has a smaller maximum symmetry-breaking mode than the initial equilibrium. However, this reverses in the outer surfaces. This is unexpected, considering the quasisymmetry residuals were computed at \(\rho=0.6,\ 0.8\) and \(1\). Nevertheless, the degree of symmetry-breaking is still comparable to WISTELL-A (Bader _et al._, 2020), another optimized QH stellarator. Indeed, the \(|B|\) plots in Fig. 7 show contours characteristic of a quasi-helically symmetric stellarator (plotted at the \(\psi=0.5\) surface). Unlike in tokamaks, different field lines (characterized by the field line label \(\alpha\)) in stellarators experience different curvatures, magnetic fields, etc. and so may have very different fluxes (Dewar & Glasser, 1983; Faber _et al._, 2015). 
Therefore, we also run Figure 5: (a) The time-traces of the nonlinear heat fluxes of the initial (red) and optimized (blue) configurations. (b) The cross-sections of the magnetic flux surfaces of the initial (red) and optimized (blue) configurations. Figure 6: Scans of the nonlinear heat flux (a) and maximum symmetry breaking modes (b) across different radial locations. The star in (a) indicates the \(\rho=\sqrt{0.5}\) surface that was chosen for the optimization loop. simulations at \(\alpha\) at the radial location \(\rho=\sqrt{0.5}\). The resulting scan is shown in Fig. (a)a. At each \(\alpha\) simulated, the optimized stellarator has a lower heat flux, indicating that the heat flux has been reduced across the entire flux surface. It is interesting to see that for the initial geometry, there are large variations in the heat flux. This might indicate that the some flux tubes are too short to sample enough area on the flux surface or that there is some coupling between the flux tubes. However, it should be noted that the \(\iota\) of this equilibrium (plotted in Fig. 9) is close to \(5/4\). Recent work has shown very large variation across \(\alpha\) on low-order rational surfaces (Buller _et al._, 2023). In comparison, the optimized equilibrium has an \(\iota\) that is slightly above \(0.9\). This could have negative impacts on the optimization routine as the optimizer may choose to approach equilibria with \(\iota\) near low order rationals. This could then lead to not only misleading heat fluxes but also poor MHD stability. More work is needed to investigate the limitations of flux-tube codes, the effects from rational surfaces, and the resulting consequences for optimization. Figure 8: The heat fluxes across different field lines \(\alpha\) (a) and \(a/L_{T}\) (b) for the initial (red) and optimized (blue) configurations. The star indicates the \(\alpha=0\) field line and the \(a/L_{T}=3\) temperature gradient that were chosen for the optimization loop. Figure 7: The \(|\mathbf{B}|\) contours in Boozer coordinates for the initial (a) and final (b) equilibria. In a fusion reactor, since the transport is very stiff the temperature profiles may evolve to approach the critical temperature gradient (the temperature gradient at which the heat flux is zero). Therefore, the decrease in heat flux is less useful if the critical temperature gradient also decreases significantly. To check that the critical temperature gradient did not decrease, we also ran several nonlinear simulations with different temperature gradients also at \(\rho=\sqrt{0.5}\). The results are shown in Fig. 8b which shows that the critical temperature gradient does not seem to change significantly. It would be more beneficial if the critical temperature gradient had also increased. Specifically targeting the critical temperature gradient will be the focus of future work. We also run linear simulations for the initial and optimized equilibria, with the growth rates shown in Fig. 10a at \(k_{x}=0\) and Fig. 10b at \(k_{x}=0.4\). Although the optimized equilibria has lower heat fluxes, it has a significantly higher peak growth rate at \(k_{x}=0\). On the other hand, it has lower growth rates at lower \(k_{y}\), and one would expect that the nonlinear heat flux scales like \(\gamma/\langle k_{\perp}^{2}\rangle\) (where the angle brackets indicate a flux-surface average) (Mariani _et al._, 2018). Therefore, this might result in the observed lower nonlinear heat fluxes. 
However, this trend reverses for the simulations at \(k_{x}=0\). Furthermore, recent work has shown that both peak growth rates and the quasilinear \(\gamma/\langle k_{\perp}^{2}\rangle\) estimate are both poor proxies for the heat flux (Buller _et al._, 2023). Finally, we compare the optimized stellarator to another configuration solely optimized for quasisymmetry. The plot of the heat fluxes across different surfaces is shown in Fig 11. The heat fluxes for the precise QH equilibrium are higher than those from the approximately QH equilibrium used as the initial point. This shows that just optimizing for quasisymmetry can be detrimental for turbulence, and stresses the need to optimize for both. ### Optimization with Multiple field lines As described in the previous section, several physical and computational factors can lead to large variations in the heat fluxes at different \(\alpha\). To avoid this issue, we rerun the Figure 9: The \(\iota\) profiles for the initial (red) and optimized (blue) equilibria. optimization loop but simulate on field lines of both \(\alpha=0\) and \(\alpha=\pi\iota/4\). This makes the new objective function \[f_{Q}=f_{Q,\alpha=0}^{2}+f_{Q,\alpha=\pi\iota/4}^{2}+(A-A_{target})^{2}. \tag{1}\] This new objective function serves both as a way of reducing the heat flux across multiple field lines while also testing our stochastic optimizer against additional objectives. Similar situations include trying to optimize across different flux surfaces, different temperature/density gradients, etc. The plots in Fig. 12 show the cross-sections, maximum symmetry breaking mode, and heat fluxes across \(\rho\) and \(\alpha\) for the initial equilibrium and the final optimized equilibria after optimizing at one and two field lines. While this new equilibrium has higher heat fluxes than when optimizing for just a single field line, it instead has a lower maximum symmetry breaking mode and better quasisymmetry. The scans across multiple \(\alpha\) in Fig. 12d show about a 50% variation in the heat flux compared to the \(\alpha=0\) point. Unfortunately, this is larger than the approximately 30% variation in the single-field line case, and comparable to the relative variation in the Figure 11: The heat fluxes across \(\rho\) for a precise QH equilibrium (red) and the turbulence optimized equilibrium (blue). Figure 10: The linear growth rates across different \(k_{y}\) at \(k_{x}=0\) (a) and \(k_{x}=0.4\) (b) initial equilibrium. Nevertheless, the heat fluxes across each \(\alpha\) are still 2-3x smaller than in the initial equilibrium and this shows that the stochastic optimizer is still effective with additional terms in the objective function. More investigation is needed to more effectively and precisely optimize with multiple field lines. ### Fluid Approximation In this final case, we change the simulation parameters within the optimization loop to approach the fluid limit in GX. This is based off of previous work that showed a fluid approximation with only a few velocity moments can accurately match the true heat fluxes at higher velocity resolution at moderate collision frequencies (Buck _et al._, 2022). That study was motivated by a recently developed three-field model for the density, temperature, and momentum to approximate growth rates and nonlinear heat fluxes (Hegna _et al._, 2018). To approximate this model, we reduce the velocity resolution to 4 Hermite moments and 2 Laguerre moments, as in the gyrofluid model. 
While normally this requires closure Figure 12: (a) The cross sections for the initial equilibrium (red), the single field line optimized equilibrium (blue)from Section 3.2, and the new two field line optimized equilibrium. (b) The maximum symmetry-breaking mode for all three equilibria and WISTELL-A. (c) The heat flux scans across \(\rho\). (d) The heat flux scans across \(\alpha\). The stars in (c) and (d) indicate parameters that were chosen for optimization. relations like in (Beer, 1995; Mandell _et al._, 2018), we instead increase the collisionality to damp the high wavenumber modes and increase the temperature gradient to \(a/L_{T}=5\). We employ a Dougherty collision operator (Dougherty, 1964). The plots of the maximum-symmetry breaking modes and the heat flux across \(\rho\) for the kinetic and fluid cases are shown in Fig. 13. While the new equilibrium does not achieve lower heat fluxes than in the kinetic case, it still achieves about a factor of 2 reduction compared to the initial equilibrium except at \(\rho=0.8\) despite using very different physical parameters. The fluid case retains comparable levels of quasisymmetry as well. These results indicate that the optimization routine is robust against varying levels of fidelity. This opens new possibilities of potentially varying the fidelity across iterations. This can either decrease the computational cost further or allow us to increase other resolution parameters. ## 4 Mechanisms for Reduced Turbulence To better understand how the changes in the geometry affected the resulting heat flux, we perform additional nonlinear simulations using the results from Section 3.2. In these simulations, we take the geometry files of the initial and optimized equilibria and swap individual geometric quantities (the quantities in Eq. (2)). While we do not self-consistently resolve for the changes in the equilibrium, this may give insight into which quantity was most responsible for the reduction in heat flux. The bar graph in Fig. 14 shows the resulting heat fluxes for different combinations of swaps. The bitmasks on the x-axis indicate which quantity was swapped (1 indicates a swap). From left to right, the order of the bits correspond to \(B\), \(\mathbf{B}\times\nabla B\cdot\nabla\alpha\), \(\mathbf{B}\times\nabla\kappa\cdot\nabla\psi\), \(|\nabla\alpha|^{2}\), \(\nabla\psi\cdot\nabla\alpha\), and \(|\nabla\psi|^{2}\). Therefore, the bits 010000 indicates only swapping \(\mathbf{B}\times\nabla B\cdot\nabla\alpha\). Note that the complete set of all possible swaps are not shown, as several simulations failed. Given that we do not self-consistently resolve the equilibrium, it is unclear if these simulations should necessarily succeed. Surprisingly, most of the swaps did not decrease the heat flux significantly. Some cases even increased the heat flux. The only combination that reduced the heat flux Figure 13: The maximum symmetry-breaking modes (a) and the heat flux scan across \(\rho\) (b) for the initial equilibrium, the kinetic case from Section 3.2, and the fluid case (as well as WISTELL-A for the left plot). The star in (b) indicates the \(\rho=\sqrt{0.5}\) surface that was chosen for the optimization loop. substantially was swapping the global magnetic shear \(\hat{s}=\frac{x}{\iota}\frac{d\iota}{dx}\), where \(x=a\rho\). While it is known that larger shear can stabilize against microinstabilities, these results are still surprising given that the shear is very small in both cases (\(\hat{s}<0.1\) for both equilibria). 
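To illustrate the bookkeeping behind this swapping study, below is a minimal sketch of how the bitmask labels of Fig. 14 can be generated and applied. The dictionary keys and geometry containers are hypothetical stand-ins for the actual GX geometry files.

```python
from itertools import product

# bit order used on the x-axis of Fig. 14 (left to right)
FIELDS = ["B", "BxgradB_dot_grad_alpha", "Bxkappa_dot_grad_psi",
          "grad_alpha_sq", "gradpsi_dot_grad_alpha", "grad_psi_sq"]

def swapped_geometry(initial, optimized, mask):
    """Return a copy of `initial` with the fields flagged by `mask` taken from `optimized`.
    `initial` and `optimized` are dicts of geometric arrays; `mask` is a tuple of 0/1 flags."""
    geo = dict(initial)
    for flag, name in zip(mask, FIELDS):
        if flag:
            geo[name] = optimized[name]
    return geo

def all_swaps(initial, optimized):
    """Yield (bitmask label, swapped geometry) for every combination of swaps."""
    for mask in product((0, 1), repeat=len(FIELDS)):
        label = "".join(str(b) for b in mask)   # e.g. "010000" swaps only BxgradB.grad_alpha
        yield label, swapped_geometry(initial, optimized, mask)
```

Of all the quantities exchanged in this way, only the global magnetic shear produced a substantial reduction in the heat flux.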
To further investigate this, we optimized for several more precisely quasisymmetric equilibria, but with different target shears. The nonlinear heat flux traces are shown in Fig. 15. The shear = 0.01 case is when there is no shear target. While increasing the shear slightly does reduce the heat flux, continuing to increase it eventually increases the heat flux again. Future work will better address the relationship between shear and turbulence in stellarators. This relationship can be very important as very low global magnetic shear is characteristic of vacuum quasisymmetric stellarators (Landreman and Paul, 2022). The shear increases for finite \(\beta\) QS stellarators (Landreman et al., 2022), but is still small compared to shear in tokamaks. Figure 14: The resulting nonlinear heat fluxes after swapping different geometric quantities. Figure 15: Nonlinear heat flux traces for quasisymmetric equilibria with different shear targets. ## 5 Transport Simulations To study how the changes in nonlinear heat fluxes affected the macroscopic profiles, we use the T3D (short for Trinity3D) transport code. T3D uses the same algorithm as Trinity (Barnes _et al._, 2010), but is written in Python and adapted to work with stellarator geometries. T3D takes advantage of the separation of scales in the local \(\delta f\) gyrokinetic model where the distribution function is written as \(F=F_{0s}+\delta f_{s}\). It can be shown that \(F_{0s}\) is a Maxwellian and it is assumed that the perturbation \(\delta f\) scales like \(\delta f\sim\epsilon F_{0s}\) where \(\epsilon=\rho/a\). The Maxwellian \(F_{0s}\) evolves slower than the perturbation \(\partial F_{0s}/\partial t\sim\epsilon^{2}\partial\delta f_{s}/\partial t\). This allows for T3D to evolve the macroscopic density, pressure, and temperature profiles using heat and particle fluxes computed by gyrokinetic codes like GX. We run T3D simulations for the initial approximately QH equilibrium and optimized equilibrium from Section 3.2. Both equilibria were scaled to the same major radius and on-axis magnetic field as W7-X. An adiabatic response is still assumed for the electrons, and neoclassical contributions are ignored to isolate the effects from turbulence. The plots of the final steady-state temperature profiles for both equilibria are shown in Figure 16. From the plot of the temperature profiles, there is a clear increase in the temperature for the optimized equilibrium. The temperature at the innermost simulated point increased by approximately 10%. This seems lower than expected considering that the optimized equilibrium had lower heat fluxes by about a factor of 3. One would expect temperature increase by a factor of \(Q^{0.4}\sim 55\%\). However, the simulations from Section 3.2 assumed a temperature gradient of \(a/L_{Ti}=3\). From the plot of the temperature gradients, most points are simulated with lower gradients, and the equilibria have the same critical temperature gradient (as seen in Figure 8b). Nevertheless, the improvement is still very encouraging. Future work will investigate using simulations to optimize for a higher critical gradient or for specific target profiles. ## 6 Conclusions In this work, we directly optimized stellarators for reduced nonlinear heat fluxes and good quasisymmetry by coupling GX and DESC. By directly running nonlinear simulations we include the nonlinear saturation mechanisms that determine the steady-state heat flux, and so avoid potential limitations of linear or quasilinear models. 
The SPSA method is used to handle the noisy heat flux objective and to cheaply estimate the gradient. The Figure 16: The final temperature (left) and temperature gradient (right) profiles for the initial and optimized equilibria. newly optimized equilibria show factors of 2-4 improvement in the nonlinear heat flux across both several flux surfaces and multiple field lines while also having comparable or improved quasisymmetry. By swapping different geometric inputs to GX, we observe that it appears that the main factor that contributed to the change in heat flux was the global magnetic shear. However, increasing the shear too much led to increases in the heat flux. Our T3D simulations showed that by reducing the heat fluxes, the steady-state temperature profiles did increase slightly. However, the transport is sufficiently stiff so that macroscopic profiles will approach the critical gradient. Since the initial and optimized equilibria have similar critical gradients, this limits the improvements in the temperature profile. Nevertheless, these results demonstrate that we can efficiently include nonlinear gyrokinetic simulations within the optimization loop. Future work will include adding an objective for MHD stability, optimizing for the nonlinear critical gradient and also directly optimize for desired macroscopic profiles. More improvements will also be made to the SPSA optimizer to make it more effective near minima. Finally, we will implement other optimization methods that have been effective for stochastic optimization, such as Bayesian optimization. ## 7 Acknowledgements The authors thank M. Zarnstoff and B. Buck for insightful and fruitful conversations. Research support came from the U.S Department of Energy (DOE). P.K was supported by the DOE via the Scientific Discovery Through Advanced Computing Program under award number DE-SC0018429 (while at the University of Maryland), as well as through the DOE CSGF Program under award number DE-SC0024386 (while at Princeton University). This work started as part of P.K.'s Summer 2022 DOE SULI internship under award number DE-AC02-09CH11466. Computations were performed on the Traverse and Stellar clusters at Princeton/PPPL as well as the Perlmutter cluster at NERSC. ## Appendix A Codes ### Gx GX employs the radially-local \(\delta f\) approach to solve the gyrokinetic equation. In this approximation, the distribution function \(F_{s}\) is represented as \(F_{s}=F_{0s}+\delta f_{s}=F_{Ms}\left(1-Z_{s}\Phi/\tau_{s}\right)+h_{s}\), where \(F_{0s}\) is Maxwellian and \(\delta f_{s}\) is a perturbation consisting of a Boltzmann part proprtional to the electrostatic potential \(\Phi\) and a general perturbation \(h_{s}\). The subscript \(s\) labels species. 
Assuming electrostatic fluctuations, the gyroaveraged perturbation \(g_{s}=\langle f_{s}\rangle\) then obeys the electrostatic gyrokinetic equation \[\frac{\partial g_{s}}{\partial t}+ v_{\parallel}\mathbf{b}\cdot\nabla z\left(\frac{\partial g_{s}}{ \partial t}+\frac{q_{s}}{T_{s}}\frac{\partial\langle\phi\rangle}{\partial t}F _{s}\right)-\frac{\mu}{m_{s}}\mathbf{b}\cdot\nabla z\frac{\partial B}{ \partial z}\frac{\partial g_{s}}{\partial v_{\parallel}} \tag{10}\] \[+\mathbf{v}_{Ms}\cdot\left(\nabla_{\perp}g_{s}+\frac{q_{s}}{T_{s }}\nabla_{\perp}\langle\phi\rangle F_{s}\right)\] (11) \[+\langle\mathbf{v}_{E}\rangle\cdot\nabla_{\perp}g_{s}+\langle \mathbf{v}_{E}\rangle\cdot\nabla F_{s}=\langle C\left(\delta f_{s}\right)\rangle, \tag{12}\] where \(v_{\parallel}\) is the velocity parallel to the magnetic field, \(\mathbf{b}=\mathbf{B}/B\), \(\mu\) is the magnetic moment, \(z\) is some coordinate along the field line, \(\kappa\) is the field line curvature, \(\mathbf{v}_{Ms}\) is the sum of the magnetic and curvature drift velocities, and \(\mathbf{v}_{\mathbf{E}}\) is the \(\mathbf{E}\times\mathbf{B}\) velocity. For this work, electromagnetic effects are ignored and a Boltzmann response is used to model the perturbed electron distribution. In GX, Eq. (A.1) is projected onto the Hermite and Laguerre basis functions \[\psi^{l}(\mu B) = (-1)^{l}e^{-\mu B}\mathrm{L}_{l}(\mu B),\] (A.4) \[\phi^{m}(v_{\parallel}) = \frac{e^{-v_{\parallel}^{2}/2}\mathrm{He}_{m}(v_{\parallel})}{ \sqrt{(2\pi)^{3}m!}},\] (A.5) where \(\mathrm{He}_{m}(x)\) and \(\mathrm{L}_{l}(x)\) are the (probabilist's) Hermite and Laguerre polynomials, respectively. More details on the expansion and numerical algorithm can be found in Mandell _et al._ (2018) and Mandell _et al._ (2022). By choosing a Hermite-Laguerre basis, the resulting equations for the spectral coefficients reduce to the gyrofluid equations at low resolution (Dorland & Hammett, 1993; Beer, 1995). Particle number, momentum, and energy are also conserved at low resolution. Overall, any inaccuracies are due to the closure or dissipation model used. Therefore, GX allows for lower velocity resolution than other similar codes like GS2(Kotschenreuther _et al._, 1995; Dorland _et al._, 2000) and stella(Barnes _et al._, 2019) that use finite-difference methods in velocity space. Flexible velocity resolution combined with a GPU implementation allows GX to run nonlinear (electrostatic with adiabatic electrons) gyrokinetic simulations in minutes rather than hours or days. ### Desc DESC(Dudt & Kolemen, 2020; Panici _et al._, 2023; Conlin _et al._, 2023; Dudt _et al._, 2023) is a new stellarator equilibrium and optimization code. Two of the main features of DESC are its pseudo-spectral representation of the magnetic geometry and the use of automatic differentiation to compute exact derivatives. DESC directly solves the ideal MHD force balance equation \[\mathbf{J}\times\mathbf{B}=\nabla p.\] (A.6) DESC uses as its computational domain the coordinates \((\rho,\theta,\zeta)\), defined as \[\rho = \sqrt{\frac{\psi}{\psi_{a}}}\] (A.7a) \[\theta = \theta^{*}-\lambda(\rho,\theta,\zeta)\] (A.7b) \[\zeta = \phi,\] (A.7c) where \(\psi\) is the enclosed toroidal flux, \(\psi_{a}\) is the total enclosed toroidal flux in the plasma, \(\theta^{*}\) is a straight field line poloidal angle, \(\lambda\) is a stream function, and \(\phi\) is the geometric toroidal angle. It then expands \(\lambda\), and the cylindrical coordinates \(R\) and \(Z\) in terms of a Fourier-Zernike basis. 
\[R(\rho,\theta,\zeta) = \sum_{m=-M,n=-N,l=0}^{M,N,L}R_{lmn}\mathcal{Z}_{l}^{m}(\rho,\theta )\mathcal{F}^{n}(\zeta)\] (A.8a) \[\lambda(\rho,\theta,\zeta) = \sum_{m=-M,n=-N,l=0}^{M,N,L}\lambda_{lmn}\mathcal{Z}_{l}^{m}(\rho,\theta)\mathcal{F}^{n}(\zeta)\] (A.8b) \[Z(\rho,\theta,\zeta) = \sum_{m=-M,n=-N,l=0}^{M,N,L}Z_{lmn}\mathcal{Z}_{l}^{m}(\rho,\theta )\mathcal{F}^{n}(\zeta)\] (A.8c) where \(\mathcal{R}_{l}^{|m|}\), \(\mathcal{Z}_{l}^{m}\), and \(\mathcal{F}\) are the shifted Jacobi polynomials, Zernike polynomials, and Fourier series, respectively. It can be shown that the force error \(\mathbf{J}\times\mathbf{B}-\nabla p\) has only two independent components \[F_{\rho} =\sqrt{g}\left(J^{\zeta}B^{\theta}-J^{\theta}B^{\zeta}\right)+p^{\prime} \tag{11}\] \[F_{\beta} =\sqrt{g}J^{\rho}, \tag{12}\] where \(\sqrt{g}\) is the Jacobian, and the superscripts indicate the contravariant components. DESC then solves for the spectral coefficients \(R_{lmn}\), \(Z_{lmn}\), and \(\lambda_{lmn}\) that minimize this force error. DESC also utilizes automatic differentiation through JAX (Bradbury _et al._, 2018) in its optimization routines. Briefly, in automatic differentiation, the chain rule is applied through the code, allowing DESC to compute exact derivatives faster and more accurately than derivatives from finite differencing routines. However, at the time of this writing this requires writing the objective function in native Python, and GX is written in C++/CUDA. Furthermore, applying AD through GX is complicated by the Fast Fourier Transforms used to evaluate the nonlinear term in the gyrokinetic equation. Therefore, this would require extensive code developments for both DESC and GX, and so AD will not be used when optimizing for reduced turbulence. ## Appendix B Simulation Parameters used in Optimization ### Turbulence, Turbulence-QS, and Multiple \(\alpha\) Optimization For the GX simulations within the optimization loop, we use the simulation parameters listed in Table 1. The number of poloidal turns, parallel resolution, and number of simulated modes (proportional to \(n_{x}\) and \(n_{y}\)) are chosen to enable cheaper simulations. However, one poloidal turn has been used in previous nonlinear W7-X gyrokinetic benchmark studies (Gonzalez-Jerez _et al._, 2022_a_). Furthermore, from a quasilinear estimate the heat flux scales like \(1/\langle k_{\perp}^{2}\rangle\) (where the angle brackets indicate a flux-surface average) (Mariani _et al._, 2018). Consequently, higher wave number modes contribute less to the heat flux. The gradients are typical for experimental profiles for W7-X (Beurskens _et al._, 2021) and have been previously used for benchmark, transport, and optimization studies (Gonzalez-Jerez _et al._, 2022\(b\); Banon Navarro _et al._, 2023; Roberg-Clark _et al._, 2022). Finally, it's been demonstrated that GX simulations can yield accurate heat flux traces using the specified number of Hermite and Laguerre polynomials (Mandell _et al._, 2022). To check our results, the post-processing simulations are run at higher resolution. ### Fluid Approximation Optimization When running the optimization at fluid resolution, the simulation parameters are identical except those listed in Table 2. The velocity resolution is decreased to that of the gyrofluid model, and the collision frequency is increased. Since there is far greater dissipation in the model, we also increase the temperature gradient. 
### Post-processing Simulations To ensure our final results are well-converged, we increase the resolution of the post-processing simulations to those listed in Table 3. The scans over \(\rho\), \(\alpha\), and \(a/L_{T}\) of course include additional values. \begin{table} \begin{tabular}{c c} \hline Parameter & Value \\ \hline \hline Normalized Toroidal Flux (s) & 0.5 \\ \hline field line label (\(\alpha\)) & 0.0 (and \(\pi\iota/4\)) for multiple \(\alpha\) \\ \hline Number of Poloidal Turns (npol) & 1 \\ \hline Parallel Resolution (ntheta) & 64 \\ \hline Radial resolution (nx) & 64 \\ \hline field line label resolution (ny) & 64 \\ \hline Hermite Resolution (nhermite) & 8 \\ \hline Laguerre Resolution (nlaguerre) & 4 \\ \hline Normalized Temperature Gradient (tprim) & 3.0 \\ \hline Normalized Density Gradient (fprim) & 1.0 \\ \hline \end{tabular} \end{table} Table 1: The simulation parameters used for the GX simulations within the optimization loop. \begin{table} \begin{tabular}{c c} \hline Parameter & Value \\ \hline \hline Hermite Resolution (nhermite) & 4 \\ \hline Laguerre Resolution (nlaguerre) & 2 \\ \hline Collision Frequency (vnewk) & 2.0 \\ \hline Normalized Temperature Gradient (tprim) & 5.0 \\ \hline Normalized Density Gradient (fprim) & 1.0 \\ \hline \end{tabular} \end{table} Table 2: The simulation parameters used for the GX simulations within the optimization loop. ### Desc Equilibria Parameters The parameters used for the equilibria are shown in this table. These are considered "moderate" resolution, and are typical values used for optimization. This is a vacuum case, so both the pressure and current are zero. When optimized for precise QH symmetry, the resulting equilibrium is similar to the QH equilibirum in (Landreman & Paul, 2022). ### T3d Simulation Parameters The table shows the resolution parameters used for the T3D simulations. Similar parameters have been used to simulate the profiles of W7-X and recovered the experimentally observed ion-temperature clamping. \begin{table} \begin{tabular}{c c} \hline Parameter & Value \\ \hline \hline Spectral Resolution (M, N, L) & (8, 8, 8) \\ \hline Grid Resolution (M\({}_{grid}\), L\({}_{grid}\), N\({}_{grid}\)) & (16, 16, 16) \\ \hline Total Toroidal Flux (Psi) & 0.03817902 \\ \hline Major Radius (R0) & 1.0 \\ \hline Aspect Ratio (R0/a) & 8.0 \\ \hline \end{tabular} \end{table} Table 4: The resolution parameters for the DESC equilibria in this study. \begin{table} \begin{tabular}{c c} \hline Parameter & Value \\ \hline \hline Number of Poloidal Turns (npol) & 2 \\ \hline Parallel Resolution (ntheta) & 128 \\ \hline Radial resolution (nx) & 128 \\ \hline field line label resolution (ny) & 128 \\ \hline Hermite Resolution (nhermite) & 16 \\ \hline Laguerre Resolution (nlaguerre) & 8 \\ \hline \end{tabular} \end{table} Table 3: The simulation parameters used for the GX simulations within the optimization loop. ### Optimization Timings The table below shows the approximate timings of different parts of the optimization loop using a single Tesla V100 GPU on the Princeton Traverse cluster. All values are in minutes. In total, each optimization completed in about 20 hours. It takes slightly more than 12 hours on a single NVIDIA A100 GPU on the NERSC Perlmutter cluster. The "optimization" timings are for a single iteration.
2304.13888
Spin-0 bosons near rotating stars
In this work we study the effects of rotating stars on the behavior of bosons close to their surfaces. For this task, metrics determined by the rotation of these stars will be taken into account. We will consider the Klein-Gordon equation in the Hartle-Thorne metric and in the one proposed by Berti et al., for several kinds of stars. Pions and Higgs bosons will be investigated as examples. By solving the equations, we find that the main observable effect is a rotational phase, $\delta_R$, in the wave function of the boson, which depends on the angular momentum of the star, $J$. Corrections of higher orders in $J$ are also investigated.
Eduardo O. Pinho, Celso C. Barros Jr
2023-04-27T00:25:12Z
http://arxiv.org/abs/2304.13888v2
# Spin-0 bosons near rotating stars ###### Abstract In this work we study the effects of rotating stars on the behavior of bosons close to their surfaces. For this task, metrics determined by the rotation of these stars will be taken into account. We will consider the Klein-Gordon equation in the Hartle-Thorne metric and in the one proposed by Berti et al., for several kinds of stars. Pions and Higgs bosons will be investigated as examples. By solving the equations, we find that the main observable effect is a rotational phase, \(\delta_{R}\), in the wave function of the boson, which depends on the angular momentum of the star, \(J\). Corrections of higher orders in \(J\) are also investigated. ## I Introduction The observation of the effects of the general theory of relativity in physical systems has always been a very challenging task. From the first observation of light deflection by the Sun, to the frame dragging due to the rotation of the Earth [1], [2] (the Lense-Thirring effect, which took decades to be observed by Gravity Probe B [3]), to the recent detection of gravitational waves [4], great experimental efforts have been needed in order to obtain reliable results. A fundamental question in physics is how quantum mechanics and general relativity are related, or how general relativity affects quantum systems. A large number of systems of this kind have already been studied. Since the early studies by Parker [5], many others have been carried out, for example quantum oscillators [6], [7], [8], [9], [10], [11], magnetic fields in the Melvin metric [12], cosmic strings [13], [14], [15], the Casimir effect [16], [17] and many other kinds of systems [18], [19], [20], [21], [22]. In this work we will study the effect of the dragging of spacetime due to the rotation of stars on quantum systems, and for this purpose we will take spin-0 bosons as test particles. A description of the spacetime of "slowly" rotating relativistic stars was constructed during the 1960s by James B. Hartle and Kip S. Thorne [23], [24], for both the exterior and interior regions of a star. The limits of the approximation, or in other words the required "slowness" of the stars, are in fact not so narrow when physical stars are considered: the criterion is not restricted to the slow-rotation realm, and the solution turns out to be accurate for many systems, even for some that would not at first be considered slow. So, in this paper, we will use the external Hartle-Thorne metric as given by [24] and also the corrections proposed in [25], and then write a Klein-Gordon equation. We will solve the equations and then show numerical results for some typical systems. This paper is divided into five sections beyond the introduction. In section II we will describe the line elements we will be using in our calculations and in Sec. III we will sketch the process by which we take the line element and expand it up to first order in the star angular momentum \(J\). Then we will write the Klein-Gordon equation and solve it in various ways, in terms of several different special functions. In section IV, we will show an expansion that contains terms up to second order in the angular momentum \(J\) and the resulting power series solution to the Klein-Gordon equation obtained in this spacetime. The resulting solution corresponds to the wave function for a free particle plus additional mass and angular momentum corrections.
We will also compare this solution with the one obtained in section III, thereby showing that they have intrinsic similarities, despite having been obtained in conceptually different ways. In Sec. V the solution will then be studied for some physical systems with different values of the mass and angular momentum and in Sec. VI we will draw the conclusions of this work. ## II The two metrics In this section we will study the spacetime structure by means of metrics that take into account the rotation of stars. For this purpose we will consider the Hartle-Thorne metric [23], [24], and the metric proposed by Berti et al. [25]. As far as we are interested in studying observable effects of bosons near rotating stars, we will only consider the solutions of the equations in a external region. The Hartle-Thorne metric is a well-known spacetime metric that describes the geometry determined by slowly rotating stars. It was first obtained in a sequence of papers in the late 1960s and it has been used ever since in a variety of important works. The metric was initially constructed out of a very general canvas, that of an axially symmetric, stationary system that rotates uniformly and "slowly". This "slow" rotation refers to the fluid's angular velocity \(\Omega\), which must be slow enough for the fractional changes in pressure, energy density, and gravitational field due to rotation to be small. These considerations lead to the criterion presented by the authors \[\Omega^{2}\ll\left(\frac{c}{R}\right)^{2}\frac{G\!M}{R^{2}} \tag{1}\] for a star of mass \(M\) and radius \(R\). We note, however, that this criterion is actually wide-reaching, unlike it may be thought in a first view it and does not necessarily correspond to a slow rotation if compared with the values of real stars [26]. So, a general line element may be proposed in order to study a rotating object by considering it as an axially symmetric stationary system in terms of spherical coordinates \[ds^{2}=-H^{2}dt^{2}+E^{2}dr^{2}+r^{2}K^{2}[d\theta^{2}+\sin^{2}\theta(d\phi-Ldt )^{2}], \tag{2}\] where \(H\), \(E\), \(K\), and \(L\) are functions of \(r\) and \(\theta\). The rotation is introduced as a perturbation and then the line element can be written as [24] \[\begin{split} ds^{2}=&-\left(1-\frac{2M}{r}-\frac{2J^{2 }}{r^{4}}\right)\times\\ &\times\left\{1+2\Bigg{[}\frac{J^{2}}{Mr^{3}}\left(1+\frac{M}{r} \right)+\frac{5}{8}\frac{Q-\frac{J^{2}}{M}}{M^{3}}{Q_{2}}^{2}\left(\frac{r}{M} -1\right)\Bigg{]}P_{2}(\cos\theta)\right\}dt^{2}\\ &+\left(1-\frac{2M}{r}-\frac{2J^{2}}{r^{4}}\right)^{-1}\times\\ &\times\Bigg{\{}1-2\Bigg{[}\frac{J^{2}}{Mr^{3}}\left(1-\frac{5M}{ r}\right)+\frac{5}{8}\frac{Q-\frac{J^{2}}{M}}{M^{3}}{Q_{2}}^{2}\left(\frac{r}{M} -1\right)\Bigg{]}P_{2}(\cos\theta)\Bigg{\}}dr^{2}\\ &+r^{2}\Bigg{\langle}\Bigg{\{}1+2\Bigg{\langle}-\frac{J^{2}}{Mr^{3 }}\left(1+\frac{M}{r}\right)+\frac{5}{8}\frac{Q-\frac{J^{2}}{M}}{M^{3}}\Bigg{\{} \frac{2M}{\sqrt{r(r-2M)}}{Q_{2}}^{1}\left(\frac{r}{M}-1\right)+\] where \(Q\) is the star's mass quadrupole moment, \(J\) its angular momentum, and \[\begin{split}{Q_{2}}^{1}(\zeta)&=\sqrt{\zeta^{2}- 1}\left[\frac{3\zeta^{2}-2}{\zeta^{2}-1}-\frac{3}{2}\zeta\ln\frac{\zeta+1}{ \zeta-1}\right]\\ {Q_{2}}^{2}(\zeta)&=-\frac{3\zeta^{3}-5\zeta}{\zeta^{ 2}-1}+\frac{3}{2}(\zeta^{2}-1)\ln\frac{\zeta+1}{\zeta-1},\end{split} \tag{4}\] are associated Legendre functions of the second kind. In [27] it was pointed that this metric is not consistently truncated, and for this reason it presents some errors. 
Then a corrected metric has been proposed, but Berti et al. [25] studied this metric and found some minor sign errors. They proposed the solution that will be the second metric used in this paper, which may be considered a corrected version of the original Hartle-Thorne metric. From [25], the corresponding line element is given by \[\begin{split} g_{rr}&=\left(1-\frac{2M}{r}\right)^{-1}\left[1+j^{2}G_{1}+qF_{2}\right]+\mathcal{O}(\epsilon^{3})\\ g_{tt}&=-\left(1-\frac{2M}{r}\right)\left[1+j^{2}F_{1}+qF_{2}\right]+\mathcal{O}(\epsilon^{3})\\ g_{\theta\theta}&=r^{2}[1+j^{2}H_{1}-qH_{2}]+\mathcal{O}(\epsilon^{3})\\ g_{\phi\phi}&=g_{\theta\theta}\sin^{2}\theta+\mathcal{O}(\epsilon^{3})\\ g_{t\phi}&=\left(\frac{2jM^{2}}{r}\right)\sin^{2}\theta+\mathcal{O}(\epsilon^{3})\end{split} \tag{5}\] where \[F_{1} =-pW+A_{1} \tag{6}\] \[F_{2} =5r^{3}p(3u^{2}-1)(r-M)(2M^{2}+6Mr-3r^{2})-A_{1}\] \[A_{1} =\frac{15r(r-2M)(1-3u^{2})}{16M^{2}}\ln\frac{r}{r-2M}\] \[A_{2} =\frac{15(r^{2}-2M^{2})(3u^{2}-1)}{16M^{2}}\ln\frac{r}{r-2M}\] \[G_{1} =p[(L-72M^{5}r)-3u^{2}(L-56M^{5}r)]-A_{1}\] \[H_{1} =A_{2}+\frac{(1-3u^{2})(16M^{5}+8M^{4}r-10M^{2}r^{3}+15Mr^{4}+15r^{5})}{8Mr^{4}}\] \[H_{2} =-A_{2}+\frac{5(1-3u^{2})(2M^{2}-3Mr-3r^{2})}{8Mr}\] \[L =80M^{6}+8M^{4}r^{2}+10M^{3}r^{3}+20M^{2}r^{4}-45Mr^{5}+15r^{6}\] \[p =\frac{1}{8Mr^{4}(r-2M)}\] \[W =(r-M)(16M^{5}+8M^{4}r-10M^{2}r^{3}-30Mr^{4}+15r^{5})\] \[+u^{2}(48M^{6}-8M^{5}r-24M^{4}r^{2}-30M^{3}r^{3}-60M^{2}r^{4}+135Mr^{5}-45r^{6})\] \[j =\frac{J}{M^{2}}\] \[q =\frac{Q}{M^{3}}\] \[u =\cos\theta\] and \(\epsilon=\Omega/\Omega^{*}\) with \(\Omega^{*}=\sqrt{M/R^{3}}\). Observing these results, it seems reasonable to consider both metrics in the study of quantum systems. Since the corrections proposed in [25] are very small, we will verify whether they are important for the behavior of spin-0 bosons near some kinds of stars. ## III Klein-Gordon equation in the Hartle-Thorne metric In this section we will study the Klein-Gordon equation in the metrics presented in Sec. II up to the first order in the angular momentum ratio \(j_{1}=J/r^{2}\). We remark that this is a convenient quantity for the analysis that we intend to do in this work, since it takes small numerical values for the stars of interest. For the Sun, taking \(r=R\) for example, \(j_{1}=9.7\times 10^{-13}\); for a typical white dwarf, \(j_{1}\sim 10^{-10}\); and for a typical neutron star, \(j_{1}\sim 10^{-5}\). As we can see, in a first approximation we may neglect terms of the order \(j_{1}^{2}\), since this is a very small number in the region outside the star. So, in this section we will work out the solutions up to the first order in \(j_{1}\), and in the next section we will study the corrections up to \(j_{1}^{2}\).
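As a quick check of the orders of magnitude just quoted, a short sketch of the estimate \(j_{1}=GJ/(c^{3}r^{2})\) at the stellar surface (the factors of \(G\) and \(c\) restore ordinary units); the solar values used here are approximate.

```python
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s

def j1(J, r):
    """Dimensionless ratio j_1 = G J / (c^3 r^2) evaluated at radius r."""
    return G * J / (c**3 * r**2)

# approximate solar values
J_sun = 1.9e41      # kg m^2 / s
R_sun = 6.96e8      # m
print(f"Sun: j_1 ~ {j1(J_sun, R_sun):.1e}")   # about 9.7e-13, matching the value quoted above
```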
In [24] the line element (3) has been expanded in powers of \(M/r\), that for the external region of the Sun for example, is smaller than \(M/R\sim 2\times 10^{-6}\) and when using numerical data relative to the Sun, an approximate form of this line element with an accuracy of the order of 1 part in \(10^{15}\) has been proposed \[\begin{split} ds^{2}=&-\left[1-\frac{2M}{r}+\frac{ 2Q}{r^{3}}P_{2}(\cos\theta)\right]dt^{2}+\left[1-\frac{2M}{r}+\frac{2Q}{r^{3}} P_{2}(\cos\theta)\right]^{-1}dr^{2}\\ &+r^{2}\left[1-\frac{2Q}{r^{3}}P_{2}(\cos\theta)\right]\left[d \theta^{2}+\sin^{2}\theta\left(d\phi-\frac{2J}{r^{3}}dt\right)^{2}\right]\end{split} \tag{7}\] We will study stars with nearly symmetric mass distributions by taking \(Q=0\), and as far as \(J^{2}/r^{6}<<M/r\) for the external line element we may write \[\begin{split} ds^{2}=&-\left(1-\frac{2M}{r}\right) dt^{2}-\frac{4J}{r}\sin^{2}\theta dtd\phi\\ &+\left(1-\frac{2M}{r}\right)^{-1}dr^{2}+r^{2}\left(d\theta^{2}+ \sin^{2}\theta d\phi^{2}\right),\end{split} \tag{8}\] that is the metric that will be considered in the calculations. The Klein-Gordon equation for an arbitrary spacetime may be obtained by replacing the regular derivatives in the usual Klein-Gordon equation by covariant derivatives, and then we get [28], [29] \[\left[-\frac{1}{\sqrt{-g}}\partial_{\mu}\big{(}g^{\mu\nu}\sqrt{-g}\partial_{ \nu}\big{)}+\mu^{2}\right]\psi=0. \tag{9}\] Since the components of our metric are not functions of \(t\) or \(\phi\), we can do the separation of variables in the equation and obtain \[\psi(t,r,\theta,\phi)=e^{im\phi}e^{-i\omega t}\Theta(\theta)R(r), \tag{10}\] where \(m=0,\pm 1,\pm 2...\) and we still have to solve the equations for \(R(r)\) and \(\Theta(\theta)\). The equation for \(\Theta(\theta)\) is \[\left[\frac{d^{2}}{d\theta^{2}}+\frac{\cos\theta}{\sin\theta}\frac{d}{d\theta }+\lambda^{\prime}-\frac{m^{2}}{\sin^{2}\theta}\right]\Theta(\theta)=0, \tag{11}\] which solution is simply the set of associated Legendre functions \(P_{l}^{m}(\theta)\) with \(\lambda^{\prime}=l(l+1)\) where \(l\) is an integer. The radial equation is \[\left(\mu^{2}-\omega^{2}A+\frac{l(l+1)}{r^{2}}+\frac{4m\omega JA}{r^{3}} \right)R-\frac{1}{r^{2}}\frac{d}{dr}\left(\frac{r^{2}}{A}\frac{dR}{dr}\right)=0, \tag{12}\] where \(A=\left(1-\frac{2M}{r}\right)^{-1}\). 
By substituting \(R(r)=\frac{\sqrt{A}}{r}u(r)\), making the change of variables \(x=\frac{r}{2M}\), and rearranging fractions, we arrive at the following equation \[\frac{d^{2}u}{dx^{2}}+\left[a+\frac{b}{x^{2}}+\frac{c}{x}+\frac{d}{(x-1)^{2}}+\frac{e}{x-1}\right]u=0, \tag{13}\] where \[\begin{split} a&=4M^{2}(\omega^{2}-\mu^{2})\\ b&=\frac{1}{4}\\ c&=\frac{1}{2}+l(l+1)-\frac{4m\omega J}{M}\\ d&=\frac{1}{4}+4M^{2}\omega^{2}-\frac{4m\omega J}{M}\\ e&=-\frac{1}{2}-4M^{2}\mu^{2}-l(l+1)+8M^{2}\omega^{2}+\frac{4m\omega J}{M}.\end{split} \tag{14}\] This equation is the normal form of the confluent Heun equation, and its canonical form is given by \[\frac{d^{2}H}{dx^{2}}+\left(\alpha+\frac{\beta+1}{x}+\frac{\gamma+1}{x-1}\right)\frac{dH}{dx}+\left(\frac{\sigma}{x}+\frac{\tau}{x-1}\right)H=0 \tag{15}\] where \(H(x)=HeunC(\alpha,\beta,\gamma,\delta,\eta,x)\) are the so-called confluent Heun functions with \[\begin{split}\sigma&=\frac{1}{2}(\alpha-\beta-\gamma-\alpha\beta-\beta\gamma)-\eta\\ \tau&=\frac{1}{2}(\alpha+\beta+\gamma+\alpha\gamma+\beta\gamma)+\delta+\eta\end{split} \tag{16}\] By means of an integrating factor, it is possible to relate the equations (13) and (15), and thus each of their coefficients, \[\begin{split}\alpha&=\pm 4M\sqrt{\mu^{2}-\omega^{2}}\\ \beta&=0\\ \gamma&=\pm 4\sqrt{\frac{m\omega J}{M}-M^{2}\omega^{2}}\\ \delta&=8M^{2}(2\omega^{2}-\mu^{2})\\ \eta&=\frac{4m\omega J}{M}-l(l+1).\end{split} \tag{17}\] With these expressions we have determined the exact analytical solution of the Klein-Gordon equation in the Hartle-Thorne metric given by eq. (8). Observing that some terms of eq. (12) are still very small, we may obtain a simpler solution, with the purpose of better understanding the physics of this system by relating it to more common special functions, \[\frac{d^{2}u}{dr^{2}}+\left[k^{2}+\frac{a^{\prime}}{r^{2}}+\frac{b^{\prime}}{r}+\frac{c^{\prime}}{(r-2M)^{2}}+\frac{d^{\prime}}{r-2M}\right]u=0, \tag{18}\] where \[a^{\prime} =\frac{1}{4}\] \[b^{\prime} =\frac{1}{4M}+\frac{l(l+1)}{2M}-\frac{Jm\omega}{M^{2}}\] \[c^{\prime} =\frac{1}{4}+4M^{2}\omega^{2}-\frac{2Jm\omega}{M} \tag{19}\] \[d^{\prime} =-\frac{1}{4M}-2M\mu^{2}-\frac{l(l+1)}{2M}+4M\omega^{2}+\frac{Jm\omega}{M^{2}}\] \[k^{2} =\omega^{2}-\mu^{2}.\] Observing that \(a^{\prime}/r^{2}\) and \(b^{\prime}/r\), for fixed parameters and \(r\) values, are, numerically, several orders of magnitude smaller than the other three terms that multiply the function \(u(r)\) directly in eq. (18), a reasonable approximation is to neglect these terms. By performing a change of variables \(z=2ik(r-2M)\), the equation becomes \[u^{\prime\prime}(z)+\left[-\frac{1}{4}+\frac{1/4-\lambda^{2}}{z^{2}}+\frac{\varepsilon}{z}\right]u(z)=0, \tag{20}\] where \(\varepsilon=d^{\prime}/(2ik)\) and \(\lambda=\sqrt{1/4-c^{\prime}}\). The equation above is the so-called Whittaker equation; it is a modified version of the confluent hypergeometric equation and it can be written in the form of Kummer's equation through a transformation of the function.
This equation possesses two solutions, the Whittaker functions \(M_{\varepsilon,\lambda}(z)\) and \(W_{\varepsilon,\lambda}(z)\), which can be written, respectively, in terms of a Kummer function \(M(A,B,z)\) and a Tricomi function \(U(A,B,z)\) that are solutions to the confluent hypergeometric equation [30], \[M_{\varepsilon,\lambda}(z) =e^{-\frac{1}{2}z}z^{\frac{1}{2}+\lambda}M(\frac{1}{2}+\lambda-\varepsilon,1+2\lambda,z), \tag{21}\] \[W_{\varepsilon,\lambda}(z) =e^{-\frac{1}{2}z}z^{\frac{1}{2}+\lambda}U(\frac{1}{2}+\lambda-\varepsilon,1+2\lambda,z). \tag{22}\] Since the term proportional to \(c^{\prime}\) is still much smaller than the remaining terms of eq. (18), and in the region exterior to the stars \(r\gg 2M\) we have \(r\simeq r-2M\), the equation becomes, to a good accuracy, \[\frac{d^{2}u}{dr^{2}}+\left[k^{2}+\frac{d^{\prime}}{r}\right]u=0. \tag{23}\] Supposing a solution of the form \[u(r)=re^{ikr}w(r) \tag{24}\] and a change of variables \(z=2ikr\), we obtain the equation \[zw^{\prime\prime}(z)+(2-z)w^{\prime}(z)-\left(1-\frac{d^{\prime}}{2ik}\right)w(z)=0, \tag{25}\] that has the solution \[w(z)=c_{1}\,M\left(1-\frac{d^{\prime}}{2ik},2,z\right)+c_{2}\,U\left(1-\frac{d^{\prime}}{2ik},2,z\right), \tag{26}\] and for the physical problem that we are studying \(c_{2}=0\). The Kummer function may be defined as \[M\left(1-\frac{d^{\prime}}{2ik},2,z\right)=\sum_{s=0}^{\infty}\frac{\left(1-\frac{d^{\prime}}{2ik}\right)_{s}}{\left(2\right)_{s}s!}z^{s}, \tag{27}\] which asymptotically has the behavior of a Bessel function; then, for large values of \(z\) (which is exactly the region where this calculation applies), the solution may be written in the form \[R(r)=\frac{e^{i\left[kr+\frac{d^{\prime}}{2ik}\ln r\right]}}{r}, \tag{28}\] or \[R(r)=\Psi(r)e^{i\frac{jm\omega}{2k}\ln r}=\Psi(r)e^{i\delta_{R}} \tag{29}\] which shows explicitly the phase originated by the effect of the rotation of the star in the lowest order, \(\delta_{R}\), that may be interpreted as a rotational phase. In the next section we will investigate the solution for higher orders of \(j_{1}\). ## IV Expansion of the Hartle-Thorne metric to second order in \(J\) Since some mistakes have been observed in the Hartle-Thorne metric given by eq. (3), if we are interested in studying terms of order \(j_{1}^{2}\) and higher in the solution we have to use the Berti et al. metric (5) in our calculations. It is interesting to note that if we keep the terms up to first order in \(j_{1}\) and \(Q=0\), eq. (7) is recovered, which means that the results presented in the last section are correct and the differences occur for higher-order terms in \(j_{1}\). So, in this section we will derive an equation taking into account the terms that appear in the corrected metric and write down a solution based upon an infinite series ansatz. In order to observe the behavior of the corrections in the solution, it is easier to solve the equations for fixed values of the angle \(\theta\). Considering these assumptions, the resulting radial equation becomes \[\left[g^{rr}\partial_{r}^{2}+\tilde{g}\partial_{r}+\left(-\omega^{2}g^{tt}-m^{2}g^{\phi\phi}+m\omega g^{t\phi}-\mu^{2}\right)\right]f(r)=0 \tag{30}\] where \[\tilde{g}=\partial_{r}g^{rr}+\frac{1}{2g}g^{rr}\partial_{r}g.
\tag{31}\] After some manipulations the equation may be written in the form \[\left(M_{0}+\frac{M_{1}}{r}+\frac{M_{2}}{r^{2}}+\frac{M_{3}}{r^{3}}+\frac{M_{ 4}}{r^{4}}\right)g^{\prime\prime}(r)+\left(N_{0}+\frac{N_{1}}{r}+\frac{N_{2} }{r^{2}}+\frac{N_{3}}{r^{3}}+\frac{N_{4}}{r^{4}}\right)g(r)=0 \tag{32}\] where \[f(r)=\frac{A}{r-2M}g(r) \tag{33}\] and \[M_{0} =1\] \[M_{1} =-6M\] \[M_{2} =12M^{2}\] \[M_{3} =-8M^{3}\] \[M_{4} =2J^{2}(5-12u^{2})\] \[N_{0} =\omega^{2}-\mu^{2} \tag{34}\] \[N_{1} =4M\mu^{2}-2M\omega^{2}\] \[N_{2} =m^{2}/(u^{2}-1)-4M^{2}\mu^{2}\] \[N_{3} =4m^{2}M/(u^{2}-1)+2Jm\omega+2M\] \[N_{4} =-4M^{2}+4m^{2}M^{2}/(u^{2}-1)-4JmM\omega+2J^{2}\omega^{2}(3u^{2 }-2).\] This equation may be solved by the method presented in [31] where the solution has the form \[g(r) =\alpha\mathrm{e}^{\pm iF(r)} \tag{35}\] \[F(r) =kr+a_{0}\ln\frac{r}{2M}+\sum_{n=1}^{\infty}a_{n}\left(\frac{2M}{ r}\right)^{n}. \tag{36}\] and the exponent \(F(r)\) is a sum of functions and a series of powers of \(M/r\) where the \(a_{n}\) coefficients have to be determined. It is easy to find the first few coefficients by simply substituting this ansatz back into equation (32), however, as \(n\) increases the expressions become larger and larger. The first five coefficients are given by \[\begin{split} k&=\sqrt{\frac{N_{0}}{M_{0}}}\\ a_{0}&=\frac{-k^{2}M_{1}+N_{1}}{2kM_{0}}\\ a_{1}&=\frac{1}{4MkM_{0}}\Bigg{(}a_{0}^{2}+2a_{0}kM _{1}+k^{2}M_{2}-N_{2}\pm ia_{0}M_{0}\Bigg{)}\\ a_{2}&=\frac{1}{4(2M)^{2}kM_{0}}\Bigg{(}a_{0}^{2}M _{1}+2a_{0}kM_{2}+k^{2}M_{3}-N_{3}-4Ma_{0}M_{0}a_{1}\\ &-4MkM_{1}a_{1}\pm ia_{0}M_{1}\mp i4M_{0}a_{1}\Bigg{)}\\ a_{3}&=\frac{1}{6(2M)^{3}kM_{0}}\Bigg{(}a_{0}^{2}M _{2}+2a_{0}kM_{3}+k^{2}M_{4}-N4-4Ma_{0}M_{1}a_{1}\\ &-4MkM^{2}a_{1}+4M^{2}M_{0}a_{1}^{2}-16M^{2}a_{0}M_{0}a_{2}-16M ^{2}kM_{1}a_{2}+\\ &\pm ia_{0}M_{2}\mp i4MM_{1}a_{1}\mp i24M^{2}M_{0}a_{2}\Bigg{)}. \end{split} \tag{37}\] For large values of \(r\) we have \[R(r)\sim\frac{e^{i[kr+a_{0}\ln r]}}{r}. \tag{38}\] that is the same result obtained in Sec. III, in a very different way, shown in eq.(28). ## V Analyzing the solutions In this section we will investigate the solution (35) for several physical configurations and observe how it changes depending on the mass, angular velocity and energy of the star or of the particle whose motion the Klein-Gordon equation purports to describe. It is also interesting to study high energy and low energy test particles. We will start by listing all physical configurations we chose in order to verify the size of the mass and angular momentum corrections. We selected three celestial bodies: the Sun [32], the neutron star PSR B1257+12 [33],[34], and the white dwarf PG 2131+066 [35]. All numerical values used in this section are given in Planck units, where \(c=G=\hbar=1\). 
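A small sketch of the unit conversions behind the numbers below (a consistency check, not part of the original analysis); the physical constants used here are approximate.

```python
# Convert a few of the quantities of Sec. V to Planck units (c = G = hbar = 1)
hbar, c, G = 1.0546e-34, 2.9979e8, 6.674e-11        # SI values
m_planck = (hbar * c / G) ** 0.5                    # Planck mass in kg
GeV_in_kg = 1.7827e-27                              # 1 GeV/c^2 expressed in kg

M_sun_kg = 1.989e30
print("M_sun in Planck units  =", M_sun_kg / m_planck)          # ~9.1e37, as in Eq. (39)
print("mu_Higgs in Planck units =", 125.1 * GeV_in_kg / m_planck)  # ~1.02e-17, as in Eq. (42)
```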
For the sun we have \[\begin{split} M_{\odot}&=9.136\times 10^{37}\\ J_{\odot}&=1.82\times 10^{75}\\ R_{\odot}&=4.30835481\times 10^{43},\end{split} \tag{39}\] for the neutron star, \[\begin{split} M_{ns}&=1.4M_{\odot}\\ J_{ns}&=0.00297J_{\odot}\\ R_{ns}&=0.000015R_{\odot},\end{split} \tag{40}\] and for the white dwarf the corresponding values are \[\begin{split} M_{wd}&=0.608M_{\odot}\\ J_{wd}&=0.1432J_{\odot}\\ R_{wd}&=0.0186R_{\odot}.\end{split} \tag{41}\] We also chose two test particles with very different masses, the pion with mass \(\mu_{\pi}\)=0.139 GeV [32], and the Higgs boson with mass \(\mu_{Higgs}\)= 125.1 GeV, that in Planck units are \[\begin{split}\mu_{\pi}&=1.1056\times 10^{-20}\\ \mu_{Higgs}&=1.0245\times 10^{-17}\end{split} \tag{42}\] We separate eq. (36) in four different parts: A is the real part of \(F\) containing only mass terms; B is the imaginary part of \(F\) containing only mass terms; \(\Gamma\) is the real part of \(F\) containing the angular momentum corrections, i.e., terms that are proportional to the angular momentum \(J\) or to its square \(J^{2}\); \(\Delta\) is the imaginary part of \(F\) containing the angular momentum contributions. \[F(r)=\text{A}(r)+\text{B}(r)+\Gamma(r)+\Delta(r) \tag{43}\] These functions have the same structure of \(F(r)\) given by eq. (36) and may be calculated by the sum of five terms determined by different dependencies on the variable \(r\). In the Tables, each column is related with one kind of dependence on the variable \(r\). We will show the results for the six combinations of stars and particles given by eq. (39)-(42) for particles with energies \(\omega=2\mu\), \(r=R\), that means particles near the surfaces, \(m=1\) and \(\theta=\pi/4\). These results are displayed in Tab. 1-VI. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & \(r\) & \(\ln r\) & \(1/r\) & \(1/r^{2}\) & \(1/r^{3}\) \\ \hline A & \(1.23755\times 10^{19}\) & \(5.2965\times 10^{18}\) & \(-2.23527\times 10^{18}\) & \(-4.43284\times 10^{17}\) & \(-1.16929\times 10^{17}\) \\ \hline B & 0 & 0 & \(2.30902\times 10^{-1}\) & \(3.69947\times 10^{-2}\) & \(1.05281\times 10^{-2}\) \\ \hline \(\Gamma\) & 0 & 0 & 0 & \(-7.48238\times 10^{-6}\) & \(-2.30951\times 10^{8}\) \\ \hline \(\Lambda\) & 0 & 0 & 0 & 0 & \(6.04614\times 10^{-25}\) \\ \hline \end{tabular} \end{table} Table 2: Celestial object: Neutron Star. Particle: Pion. As we can see in the tables, the star to which the corrections are more significant is the neutron star. However, even the largest angular momentum corrections are small if compared with the other terms. For this reason we will continue this analysis considering neutron stars. By varying the physical quantities in question (such as the energy of the particle, and the angular momentum of the star) we will further explore our solution to the radial equation (32). We begin by defining three functions related to the solution which \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & \(r\) & \(\ln r\) & \(1/r\) & \(1/r^{2}\) & \(1/r^{3}\) \\ \hline A & \(1.14677\times 10^{22}\) & \(4.90798\times 10^{21}\) & \(-2.0713\times 10^{21}\) & \(-4.10768\times 10^{20}\) & \(-1.08352\times 10^{20}\) \\ \hline B & 0 & 0 & \(2.30902\times 10^{-1}\) & \(3.69947\times 10^{-2}\) & \(1.05281\times 10^{-2}\) \\ \hline \(\Gamma\) & 0 & 0 & 0 & \(-7.48238\times 10^{-6}\) & \(-2.1401\times 10^{11}\) \\ \hline \(\Delta\) & 0 & 0 & 0 & 0 & \(6.52476\times 10^{-28}\) \\ \hline \end{tabular} \end{table} Table 5: Celestial object: Neutron star. Particle: Higgs boson. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & \(r\) & \(\ln r\) & \(1/r\) & \(1/r^{2}\) & \(1/r^{3}\) \\ \hline A & \(7.64512\times 10^{26}\) & \(4.6795\times 10^{22}\) & \(-1.58518\times 10^{16}\) & \(-3.36818\times 10^{10}\) & \(-9.51912\times 10^{4}\) \\ \hline B & 0 & 0 & \(2.47395\times 10^{-6}\) & \(4.24684\times 10^{-12}\) & \(1.29492\times 10^{-17}\) \\ \hline \(\Gamma\) & 0 & 0 & 0 & \(-5.66093\times 10^{-13}\) & \(-8.16655\times 10\) \\ \hline \(\Delta\) & 0 & 0 & 0 & 0 & \(7.40464\times 10^{-40}\) \\ \hline \end{tabular} \end{table} Table 4: Celestial object: Sun. Particle: Higgs boson. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & \(r\) & \(\ln r\) & \(1/r\) & \(1/r^{2}\) & \(1/r^{3}\) \\ \hline A & \(1.42199\times 10^{25}\) & \(2.04316\times 10^{22}\) & \(-3.15-45\times 10^{17}\) & \(-2.18816\times 10^{13}\) & \(-2.02149\times 10^{9}\) \\ \hline B & 0 & 0 & \(8.0869\times 10^{-5}\) & \(4.53782\times 10^{-9}\) & \(4.52286\times 10^{-13}\) \\ \hline \(\Gamma\) & 0 & 0 & 0 & \(-2.34386\times 10^{-10}\) & \(-2.60399\times 10^{2}\) \\ \hline \(\Delta\) & 0 & 0 & 0 & 0 & \(1.64829\times 10^{-35}\) \\ \hline \end{tabular} \end{table} Table 6: Celestial object: White dwarf. Particle: Higgs boson. we will plot and which might give us some information on its status as a solution. These functions are \[f_{1} = iF \tag{44}\] \[f_{2} = \frac{e^{iF}}{r-2M}\] (45) \[f_{3} = \frac{\Gamma+\Delta}{|\text{A}+\text{B}+\Gamma+\Delta|} \tag{46}\] The function \(f_{1}\) is precisely the exponent in the solution (35). The function \(f_{2}\) is eq. (35), along with the integrating factor defined in (33); the end result is that \(f_{2}\) represents the radial solution. The function \(f_{3}\) is defined as the difference between the exponent (36) and the mass contributions as defined in (43), divided by the absolute value of \(F\). Since all that is left in the numerator of \(f_{3}\) is the sum of angular momentum contributions \(\Gamma+\Delta\) as defined in (43), this function tells us the approximate order of magnitude of the angular momentum contributions when compared to the mass contributions. Fig. 1 and Fig. 2 represent the function \(f_{1}\) in two regions, in Fig. 2 we show the results to the higher values of the energy. As we can see, the major difference between figures 1 and 2 is that the value of the function stabilizes as the energy increases, and even big differences in energy account for little changes in \(f_{1}\). For lower energies, even with small changes, the value of \(f_{1}\) changes considerably. Figure 1: Absolute value of the real part of \(f_{1}\) for some values of the energy. Figure 3: Real part of \(f_{2}\) as function of \(r\). Figure 2: Absolute value of the real part of \(f_{1}\) for higher values of the energy. In Fig. 3 and 4 we represent the function \(f_{2}\), the radial wave function, in two regions according to the values of the energies of the pions. As we can see in Fig. 3, for large distances, far away from the surface of the star, the curves seem to coalesce, and this effect is especially visible for higher energies. The last two graphs, Fig. 5 and 6, represent the function \(f_{3}\), which shows the size of the angular momentum contribution as a variation in its value is done, while maintaining the star's original mass and radius fixed. 
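To make the decomposition of eq. (43) and the diagnostic functions of eqs. (44)-(46) easier to reproduce, the short sketch below evaluates them for the neutron-star/pion case. It assumes that each table row lists the five contributions (the columns \(r\), \(\ln r\), \(1/r\), \(1/r^{2}\), \(1/r^{3}\)) already evaluated at \(r=R\), and that A and \(\Gamma\) enter \(F\) as its real parts while B and \(\Delta\) enter as its imaginary parts; both readings, and the variable names, are our own illustrative assumptions rather than part of the original calculation.

```python
import numpy as np

# Contributions of the five radial dependencies (columns r, ln r, 1/r, 1/r^2, 1/r^3)
# to each component of eq. (43), copied from the neutron-star/pion table (Table 2)
# and assumed here to be already evaluated at r = R.
A     = sum([1.23755e19, 5.2965e18, -2.23527e18, -4.43284e17, -1.16929e17])
B     = sum([0.0, 0.0, 2.30902e-1, 3.69947e-2, 1.05281e-2])
Gamma = sum([0.0, 0.0, 0.0, -7.48238e-6, -2.30951e8])
Delta = sum([0.0, 0.0, 0.0, 0.0, 6.04614e-25])

M = 1.4 * 9.136e37              # neutron-star mass, eqs. (39)-(40), Planck units
R = 0.000015 * 4.30835481e43    # neutron-star radius, eqs. (39)-(40), Planck units

F  = (A + Gamma) + 1j * (B + Delta)    # eq. (43), with B, Delta taken as imaginary parts
f1 = 1j * F                            # eq. (44)
f2 = np.exp(1j * F) / (R - 2.0 * M)    # eq. (45), evaluated at r = R
f3 = (Gamma + 1j * Delta) / abs(F)     # eq. (46)
print(abs(f1.real), abs(f2), f3.imag)
```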
We observe that the largest angular momentum selected, \(J_{6}=10,000J_{ns}\), where \(J_{ns}\) is the angular momentum of the neutron star as given in (40), probably cannot be reached by a real star, as it exceeds the maximum possible value given in the literature [26] for the angular velocity (and therefore the angular momentum) of a neutron star. We used it regardless in order to show what happens to the solution for extreme rotations.

Figure 4: Real part of \(f_{2}\) as function of \(r\).

Fig. 5 and 6 show that the shape of the curves does not change considerably when varying the angular momentum; however, the absolute value of the function does.

Figure 5: Imaginary part of \(f_{3}\) as a function of \(r\) for the actual angular momentum \(J_{ns}\) of the neutron star and for variations of \(J\).

Figure 6: Imaginary part of \(f_{3}\) as a function of \(r\).

This is the expected result: the larger the angular momentum of the neutron star, the larger will be its contribution to the solution. We can also investigate what happens with variations of the energy \(\omega\) in the case where the angular momentum used was the neutron star's own. The results are shown in Fig. 7. As we can see in Fig. 7, the angular momentum contribution is larger for higher energies. Indeed, our solution seems to take the angular momentum into account more significantly when the star is rotating quickly (though still under the "slow" rotation regime as dictated by the original paper) and the particle is a high-energy zero-spin boson. We must remark that in the results presented in this section, the velocity of the particle \(v_{p}\gg v_{R}\), where \(v_{R}\) is the rotation velocity of the star for \(r=R\). If we have small energies, \(v_{p}\sim v_{R}\), more terms will be needed in eq. (36).

## VI Conclusions

In this work we studied the behavior of spin-0 bosons near rotating stars. For this purpose we considered the Klein-Gordon equation in the region external to the star with the spacetime structure determined by the Hartle-Thorne metric (8) up to first order in \(j_{1}\), which is a very good approximation.

Figure 7: Absolute value of the imaginary part of \(f_{3}\) as a function of \(r\) for the different energies \(\omega\) and fixed angular momentum \(J=J_{\text{ns}}\).

The corrections for the metric proposed in [25] appear for terms of order \(j_{1}^{2}\) and are in general very small. By solving the Klein-Gordon equation derived in the metric (8), a general solution has been found, with the radial part given in terms of a confluent Heun function. By analyzing the terms of the radial equation for \(r>R\), it is possible to neglect some of them and still obtain an accurate result, as long as they are very small. With this procedure we obtained a simpler solution that shows explicitly the main effect of the rotation of the star on the wave function, which is given by the rotational phase defined from eq. (29) \[\delta_{R}=\frac{jm\omega}{2k}\ln r. \tag{47}\] Then an analysis for higher powers of \(j_{1}\) has been performed considering the Klein-Gordon equation in the Berti metric, eq. (5), and solving it by the method presented in [31]. We presented the calculations up to \(a_{3}\) in eq. (36), and the numerical calculations have shown that for bosons with relativistic energies the solutions converge easily and the effect of the rotation of the stars is very small. Among the considered physical systems, the contribution of the terms depending on \(j_{1}\) is more important for the neutron stars.
For lower values of the energies (\(v_{p}\sim v_{R}\)) this effect increases and more terms are needed in the solution. So, from the results we can observe that an important aspect of this effect is that it increases as the rotation velocity of the star increases relative to the velocity of the test particle. For these reasons, for stars this effect must always be small, since \(v_{R}\) cannot reach high values, and we expect that near black holes or galaxies it becomes more important. This kind of behavior is similar for the pions and for the Higgs bosons, that is, for both light and heavy bosons. The inclusion of mass asymmetries of the stars is straightforward: we just have to take \(Q\neq 0\) in eq. (7) and follow the procedure presented in this work, and then it is possible to find this effect in the wave function. If we think in terms of experimental results, the best way to study this effect is to look for observables for which the rotational phase (47) is important, as for example in the analysis of the final state interactions of decays or in the study of interacting particles. We must also remark that even if this kind of effect is small, an experimental verification would be a proof of the frame dragging caused by rotating objects and, consequently, of the influence of the general theory of relativity on quantum systems.

## VII Acknowledgments

We would like to thank CNPq for the financial support.
2306.13029
Decentralized Online Federated G-Network Learning for Lightweight Intrusion Detection
Cyberattacks are increasingly threatening networked systems, often with the emergence of new types of unknown (zero-day) attacks and the rise of vulnerable devices. Such attacks can also target multiple components of a Supply Chain, which can be protected via Machine Learning (ML)-based Intrusion Detection Systems (IDSs). However, the need to learn large amounts of labelled data often limits the applicability of ML-based IDSs to cybersystems that only have access to private local data, while distributed systems such as Supply Chains have multiple components, each of which must preserve its private data while being targeted by the same attack. To address this issue, this paper proposes a novel Decentralized and Online Federated Learning Intrusion Detection (DOF-ID) architecture based on the G-Network model with collaborative learning, which allows each IDS used by a specific component to learn from the experience gained in other components, in addition to its own local data, without violating the data privacy of other components. The performance evaluation results using the public Kitsune and Bot-IoT datasets show that DOF-ID significantly improves the intrusion detection performance in all of the collaborating components, with acceptable computation time for online learning.
Mert Nakıp, Baran Can Gül, Erol Gelenbe
2023-06-22T16:46:00Z
http://arxiv.org/abs/2306.13029v2
# Decentralized Online Federated G-Network Learning for Lightweight Intrusion Detection ###### Abstract Cyberatacks are increasingly threatening networked systems, often with the emergence of new types of unknown (zero-day) attacks and the rise of vulnerable devices. While Machine Learning (ML)-based Intrusion Detection Systems (IDSs) have been shown to be extremely promising in detecting these attacks, the need to learn large amounts of labelled data often limits the applicability of ML-based IDSs to cybersystems that only have access to private local data. To address this issue, this paper proposes a novel Decentralized and Online Federated Learning Intrusion Detection (DOF-ID) architecture. DOF-ID is a collaborative learning system that allows each IDS used for a cybersystem to learn from experience gained in other cybersystems in addition to its own local data without violating the data privacy of other systems. As the performance evaluation results using public Kitsune and Bott IoT datasets show, DOF-ID significantly improves the intrusion detection performance in all collaborating nodes simultaneously with acceptable computation time for online learning. Federated Learning, G-Networks, Intrusion Detection, Cybersecurity, Zero-Day Attacks, Machine Learning, Deep Random Neural Network ## I Introduction Intrusion Detection Systems (IDS), are important components of overall cybersystem security, and have often been developed using Machine Learning to detect anomalies and threats in incoming network traffic [1, 2, 3] and multiclass classification techniques have also been studied to detect different types of attacks in a unified manner [4, 5]. Federated Learning (FL), also known as collaborative learning, is a machine learning technique that trains a machine learning algorithm via multiple independent sessions, each using its own dataset, presenting several advantages and novel research issues [6]. This approach contrasts with traditional centralized machine learning techniques where local datasets are merged into one training session, as well as with approaches that assume that local data samples are statistically identical. Thus it is useful in connected but distinct environments when multiple entities such as autonomous vehicles or different types of IoT devices concurrently operate [7, 8], as well as when heterogeneous active objects collaborate in a partially common physical and data driven environment [9]. In the case of IDS, different but related environments may experience partially distinct and partially similar types of attacks, at different traffic rates and different attack frequencies. Also, the types of attacks and frequency of attacks experienced by a given entity (e.g. a particular service system) may be closely connected to its own commercial activities, or even its level of profitability and economic efficiency, which that particular system may not wish to share with other systems for competitive reasons. In such cases a "concurrent" yet "federated" type of learning about the design of an appropriate IDS may be very valuable to all systems, provided they do not directly access each other's data. In recent work G-Networks [10], which are a generalization of the Random Neural Network [11] and of "queueing networks with negative and positive customers", have been successfully used for cyberattack detection and IDS [12] with deep learning. 
Furthermore, in [13] it was shown that appropriately designed auto-associative G-Network models can very accurately detect multiple types of attacks simultaneously with training that is only based on "benign" traffic. Thus, in this paper we extend this approach using G-Networks for attack detection to Federated Learning where the mix of multiple attacks may vary between distinct sites that share their learning experience but do not share their private data. ### _Related Work on Federated Learning and IDS_ FL-based IDS is typically centralized or decentralized FL. The former collects updates of the learning algorithm in a central server so as to build a global model, while in decentralized FL, training is performed locally by each separate concurrent training site and the updated algorithms are then transferred among the separate sites. #### I-A1 Centralized Federated Learning In [14] a centralized federated architecture is developed to detect malware using the Generative Adversarial Network (GAN) for the industrial Internet of Things (IoT). Although this architecture achieves high accuracy in detecting attacks, it assumes that a validation set is available at the server, which may partially violate privacy of the user data. In [15] a multi-step Distributed Denial of Service (DDoS) attack prediction method that uses Hidden Markov Model within a centralized FL architecture using Reinforcement Learning is presented and tested against a global algorithm, while in [16] a multi-class classifier for FL-based IDS considers multiple data distributions, and in [17], a self-learning distributed approach is developed to detect IoT devices compromised by Mirai malware. Similarly, [18] presents an anomaly detection approach based on centralized FL to classify and identify attacks in IoT networks, and has tested it on a dataset consisting of Man-in-the-Middle (MitM) and flood attacks. In [19], an architecture was proposed to mitigate DDoS in industrial IoT networks offering reduced mitigation delay. #### I-A2 Decentralized Federated Learning In order to defend only against gradient attacks, Reference [20] proposed a decentralized FL framework, which is based on a peer-to-peer network for sending, aggregating, and updating local models. Another study [21] used decentralized FL to detect anomalies in network traffic generated by IoT devices, when all federated IDSs are shared with each distinct participant to obtain a weighted average, while in [22] blockchain-based FL was used as a decentralized architecture to specifically detect poisoning attacks. ### _Contributions of This Paper_ In this paper, we propose a novel Decentralized Online Federated Learning Intrusion Detection (DOF-ID) architecture for improved online learning of ML-based IDS, that uses the Deep Random Neural Network [23]. The DOF-ID architecture hosts many IoT or IP node,s each of which utilizes an instance of a common IDS, learns directly from its local data, and collaborates with other nodes to incorporate their up-to-date knowledge into its IDS. This architecture improves the overall security of all collaborating nodes with online learning between nodes, by taking advantage of the experience of each node, while preserving the confidentiality of the local data at each of these nodes. The DOF-ID architecture uses a learning procedure that combines Local Learning, and Decentralized Federated Updates (DFU) with concurrent parameter updates taking place on the collaborating nodes with local data. 
Therefore, the proposed DOF-ID architecture with the DFU algorithm contrasts sharply with recent work on federated learning IDS. We then evaluate the performance of the DOF-ID architecture and compare it with the performance of four benchmark methods, on three different types of cyberattacks obtained from two well-known public datasets: Kitsune [24, 25] and Bot-IoT [26]. The results show that DOF-ID provides significant performance gains compared to learning from local data alone and outperforms other state-of-the-art federated learning methods. The remainder of this paper is organized as follows: Section II presents the novel DOF-ID architecture with the DRNN-based IDS (Section II-A), the local learning algorithm (Section II-B), and the DFU algorithm (Section II-C). Section III evaluates the performance of the proposed DOF-ID architecture on public datasets. Section IV summarizes the paper and presents some insights towards future work.

## II Intrusion Detection with Decentralized and Online Federated Learning

In order to improve the performance of an ML-based IDS, we now present a novel Decentralized and Online Federated Learning Intrusion Detection (DOF-ID) architecture, which is based on the collaboration of \(N\) nodes (denoted by the set \(\mathcal{N}\)) using separate local instances of the same IDS. Figure 1 displays the proposed architecture from the perspective of a particular node \(n\), where the nodes are represented as computer networks (e.g. Internet of Things (IoT) networks). From the application perspective, one may consider DOF-ID as a subscription-based service, where each subscriber (i.e. node) receives the updates of other subscribers to improve its local security level. As seen in Figure 1, each node \(n\) directly communicates with other nodes in \(\mathcal{N}\) (i.e. peers) to send the locally learned parameters of its IDS and to receive those learned by other nodes. That is, locally learned IDS parameters are shared Peer-to-Peer (P2P) between all nodes in the DOF-ID architecture, distributing the knowledge over the collaborating nodes in \(\mathcal{N}\) to improve their security (and consequently the global security) while the confidentiality of the local data at every node is assured. The DOF-ID architecture operates over time windows, each with a length of \(T\) seconds, where the time windows are considered to be synchronized among collaborating nodes. We also assume that the first time window (denoted by \(l=0\)) starts with the use of the DOF-ID architecture. Accordingly, at the beginning of each time window \(l\), each node \(n\) updates its local IDS if no intrusion is detected in the previous window \(l-1\). That is, if the intrusion decision of node \(n\) in window \(l-1\), denoted by \(y_{n}^{l-1}\in\{0,1\}\), equals zero, node \(n\) executes the following steps as part of the learning procedure for the current window \(l\): 1. It learns from training data containing local _benign_ network traffic for windows up to the beginning of \(l\), denoted by \(\mathcal{D}_{n}^{l}\), such that \(\mathcal{D}_{n}^{l}=\{\,k:\quad y_{n}^{k}=0,\forall k\in\{1,\ldots,l-1\}\,\}\). When the learning is completed, an up-to-date locally trained IDS of node \(n\), denoted by \(I_{n}^{l}\), is obtained to use for detection in window \(l\). 2. It shares the parameters of \(I_{n}^{l}\) with other collaborating nodes in \(\mathcal{N}\) and receives the local updates of those nodes, i.e. \(\{I_{n^{\prime}}^{l}\}_{n^{\prime}\in\mathcal{N}\setminus n}\).
In this paper, it is assumed that the P2P parameter exchange is instantaneous; however, future work shall analyse the time, bandwidth and energy requirements of the proposed DOF-ID architecture regarding P2P parameter exchange. 3. As the final step, node \(n\) updates the local IDS \(I_{n}^{l}\) by merging its parameters with \(\{I_{n^{\prime}}^{l}\}_{n^{\prime}\in\mathcal{N}\setminus n}\) via the proposed DFU. Following the training procedure in window \(l\), each node \(n\) estimates the intrusion probability \(y_{n}^{l}\) through the following steps: 5. The inputs of the utilized IDS are considered to be statistics calculated from the traffic of node \(n\). Thus, node \(n\) first calculates traffic statistics as a vector of IDS inputs, denoted by \(x_{n}^{l}\), in time window \(l\). 6. Using the up-to-date IDS \(I_{n}^{l}\), the final intrusion decision \(y_{n}^{l}\equiv I_{n}^{l}(x_{n}^{l})\) for traffic statistics \(x_{n}^{l}\) is calculated. In the rest of this section, we respectively present our methodology for the particular ML-based IDS utilized in DOF-ID architecture as well as the local and federated learning algorithms. ### _IDS Utilized in the DOF-ID Architecture_ Within our DOF-ID architecture, we use an IDS which is the modified version of the one presented in [13] and comprised of DRNN and Statistical Whisker-based Benign Classifier (SWBC) as shown in Figure 2. At each window \(l\), this IDS estimates \(y_{n}^{l}\) that indicates whether the traffic of the considered node \(n\) in window \(l\) is malicious based on the input vector of traffic statistics, \(x_{n}^{l}\). #### Iii-A1 Traffic Statistics Let \(p_{n}^{t}\) denote the packet with length \(|p_{n}^{t}|\) generated in node \(n\) at instantaneous time \(t\), and let \(P_{n}^{l}\) be the set of all packets generated in \(n\) within time window \(l\) whose length equals \(T_{n}\): \[P_{n}^{l}=\{\,p_{n}^{t}:\;(l-1)\,T_{n}\leq t<l\,T_{n}\,\}. \tag{1}\] In each time window \(l\), node \(n\) calculates three main statistics that represent the overall density of the network traffic as the average packet length in bytes (\(\mu_{n}^{l}\)), the average number of packets per second (\(\lambda_{n}^{l}\)), and the average traffic in bytes per second (\(\rho_{n}^{l}\)): \[\mu_{n}^{l}=\frac{\sum_{p\in P_{n}^{l}}|p|}{|P_{n}^{l}|},\quad\lambda_{n}^{l}= \frac{|P_{n}^{l}|}{T_{n}},\quad\rho_{n}^{l}=\frac{\sum_{p\in P_{n}^{l}}|p|}{T_ {n}}. \tag{2}\] In order to use these statistics with DRNN, each element \(i\) of \(x_{n}^{l}=[\mu_{n}^{l},\lambda_{n}^{l},\rho_{n}^{l}]\), denoted by \(x_{n,i}^{l}\), is normalized to have values in \([0,1]\). #### Iii-A2 Deep Random Neural Network to Create Auto-Associative Memory In order to create an auto-associative memory, we use the well-known lightweight deep learning model DRNN [23], which is a Random Neural Network [11] model with feed-forward and clustered structure. As a result of its unique architecture presented in [23], each neuron at hidden layers of DRNN utilizes the following activation function, Fig. 1: Schematic system representation of the Decentralized and Online Federated Learning Intrusion Detection (DOF-ID) architecture. Fig. 
2: Structure of the IDS utilized in the DOF-ID architecture which is specific to this model: \[\Psi(\Lambda) = \frac{p\left(r+\lambda^{+}\right)+\lambda^{-}+\Lambda}{2\left[ \lambda^{-}+\Lambda\right]}\] \[- \sqrt{\left(\frac{p\left(r+\lambda^{+}\right)+\lambda^{-}+\Lambda}{ 2\left[\lambda^{-}+\Lambda\right]}\right)^{2}-\frac{\lambda^{+}}{\lambda^{-}+ \Lambda}}\,\] where \(\Lambda\) is the input of the given cluster, \(p\) is the probability that any neuron received trigger transmits a trigger to some other neuron, and \(\lambda^{+}\) and \(\lambda^{-}\) are respectively the rates of external Poisson flows of excitatory and inhibitory input spikes to any neuron. On the other hand, the neurons at the output layer of DRNN utilize linear activation functions. As we consider three different network statistics, the DRNN model that we use in this paper consists of \(H=3\) fully connected layers with three neurons each. Accordingly, from the input vector \(x_{n}^{l}\), DRNN estimates vector \(\hat{x}_{n}^{l}\) of the statistics that are expected to be observed when the network traffic is benign: \[\hat{x}_{(n,1)}^{l}=\Psi([x_{n}^{l},1]\,W_{(n,1)}^{l}) \tag{4}\] \[\hat{x}_{(n,h)}^{l}=\Psi([\hat{x}_{(n,h-1)},1]\,W_{(n,h)}^{l})\ \ \forall h \in\{2,\ldots,H-1\},\] (5) \[\hat{x}_{n}^{l}=[\hat{x}_{(n,h-1)}^{l},1]\,W_{(n,H)}^{l}, \tag{6}\] where \(\hat{x}_{(n,h)}^{l}\) is the output of layer \(h\), and \(\hat{x}_{n}^{l}\) is the final output of DRNN for node \(n\) in window \(l\). In addition, the term \([x_{n}^{l},1]\) for \([\hat{x}_{n}^{l},1]\) indicates that \(1\) is added to the input of each layer as a multiplier of the bias, and \(W_{(n,h)}^{l}\) is the connection weight matrix between layers \(h-1\) and \(h\) of DRNN in \(I_{n}^{l}\). #### Iii-A3 Statistical Whisker-based Benign Classifier As the second operation in \(I_{n}^{l}\) (IDS of node \(n\) in time window \(l\)) that makes a decision on an intrusion, SWBC is used to measure the significance of the difference between the actual statistics measured from the network traffic and the expected statistics estimated by DRNN. SWBC is originally proposed in [13] and calculates the decision \(y_{n}^{l}\) as follows: \[\zeta_{n}^{l}=\sum_{i\in\{1,2,3\}}\mathbf{1}(|x_{n,i}^{l}-\hat{x}_{n,i}^{l}|>w_ {n,i}^{l}), \tag{7}\] \[y_{n}^{l}=\mathbf{1}(\zeta_{n}^{l}>\theta_{n}^{l}), \tag{8}\] where \(x_{n,i}^{l}\) is the \(i\)-th element of vector \(x_{n}^{l}\) corresponding the traffic statistic \(i\), and \(\{w_{n,i}^{l}\}_{i\in\{1,2,3\}}\) and \(\theta_{n}^{l}\) are the only parameters of the decision maker which are computed (learned) during training along with the connection weights of DRNN. ### _Local Learning_ We now present the methodology of the local learning procedure that node \(n\) executes to learn parameters of \(I_{n}^{l}\) only using local data. In this procedure, node \(n\) respectively learns the DRNN weights and SWBC parameters for window \(l\) based on the available data \(\mathcal{D}_{n}^{l}\). #### Iii-B1 Learning DRNN Weights Using the local data of node \(n\), DRNN in \(I_{n}^{l}\) is trained to create an auto-associative memory for the normal - benign - network traffic. 
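Before turning to how these parameters are learned, the following minimal sketch shows how a trained IDS instance \(I_{n}^{l}\) turns one window of packets into a decision: the statistics of eq. (2), a stand-in for the DRNN reconstruction of eqs. (4)-(6), and the SWBC rule of eqs. (7)-(8). The packet lengths, the toy reconstruction callable, the whiskers and the threshold used below are placeholders for illustration, not values from the paper.

```python
import numpy as np

def window_statistics(packet_lengths, T):
    """Eq. (2): mean packet length, packets per second and bytes per second."""
    sizes = np.asarray(packet_lengths, dtype=float)
    return np.array([sizes.mean(), len(sizes) / T, sizes.sum() / T])

def swbc_decide(x, x_hat, whiskers, theta):
    """Eqs. (7)-(8): flag the window if too many statistics deviate from the
    DRNN reconstruction by more than their whiskers."""
    zeta = int(np.sum(np.abs(x - x_hat) > whiskers))
    return int(zeta > theta)

def detect(packet_lengths, T, reconstruct, whiskers, theta, normalize):
    """One detection step of an IDS instance I_n^l: compute and normalize the
    window statistics, reconstruct their expected benign values with the trained
    DRNN (eqs. (4)-(6)), and apply the SWBC decision rule."""
    x = normalize(window_statistics(packet_lengths, T))
    return swbc_decide(x, reconstruct(x), whiskers, theta)

# Toy usage with placeholder components: the "DRNN" below is only a stand-in
# callable for eqs. (4)-(6), not a trained model from the paper.
toy_reconstruct = lambda x: 0.9 * np.clip(x, 0.0, 1.0)
flag = detect(packet_lengths=[60, 1514, 590, 60], T=23.0,
              reconstruct=toy_reconstruct, whiskers=np.full(3, 0.1), theta=2,
              normalize=lambda s: s / (s.max() + 1e-9))
print("intrusion flag:", flag)
```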
To create this auto-associative memory, the connection weights of each hidden layer \(h\in\{1,\ldots,H-1\}\) are first calculated by minimizing a square cost with L1 regularization via the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) with the following objective: \[W_{(n,h)}^{l}= \operatorname*{argmin}_{\{W:\,W\geq 0\}}\Big{(} \tag{9}\] \[\Big{|}\Big{|}[adj(\Psi(\hat{X}_{(n,h-1)}^{l}W_{R}\ )),\mathbf{1}_{|\mathcal{D}_{n}^{l}|}]\,W-\hat{X}_{(n,h-1)}^{l}\Big{|}\Big{|}_{2}^{2}\] \[+||W||_{1}\Big{)}\,\] where \(\hat{X}_{(n,h-1)}^{l}\) is the matrix of \(\hat{x}_{(n,h-1)}^{k}\) collected for \(k\in\mathcal{D}_{n}^{l}\) for \(h\geq 1\), \(\hat{X}_{(n,0)}^{l}=X_{n}^{l}\), which is the matrix of \(x_{n}^{k}\) collected for \(k\in\mathcal{D}_{n}^{l}\), \(\mathbf{1}_{|\mathcal{D}_{n}^{l}|}\) is a column vector of ones with length \(|\mathcal{D}_{n}^{l}|\), and \(W_{R}\) is a randomly generated (\(H\times H\)) matrix with elements in the range \([0,1]\). In addition, \(adj(A)\) linearly maps the elements of matrix \(A\) to the range \([0,1]\), then applies the z-score, and adds a positive constant to remove negativity. For each layer \(h\in\{1,\ldots,H-1\}\), after FISTA is executed, we normalize each resulting weight matrix \(W_{(n,h)}^{l}\): \[W_{(n,h)}^{l}\gets 0.1\frac{W_{(n,h)}^{l}}{\max_{k\in\mathcal{D}_{n}^{l}}\big{(}\hat{X}_{(n,h)}^{k}\big{)}}. \tag{10}\] The connection weights of the output layer \(H\) are calculated via an extreme learning machine as \[W_{(n,H)}^{l}=(\hat{X}_{(n,H-1)}^{l})^{+}X_{n}^{l}, \tag{11}\] where \(A^{+}\) denotes the pseudo-inverse of matrix \(A\). #### Iii-B2 Computing SWBC Parameters Using the training data \(\mathcal{D}_{n}^{l}\), which consists of only benign traffic features, we determine the values of \(\theta_{n}^{l}\) and \(w_{n,i}^{l}\) for each statistic \(i\). To this end, for each \(i\), the value of the absolute difference \(z_{n,i}^{k}=|x_{n,i}^{k}-\hat{x}_{n,i}^{k}|\) is computed for all \(k\in\mathcal{D}_{n}^{l}\). Then, we compute the lower quartile \(Q_{n,i}^{L}\) and the upper quartile \(Q_{n,i}^{U}\) of \(\{z_{n,i}^{k}\}_{k\in\mathcal{D}_{n}^{l}}\). Using \(Q_{n,i}^{L}\) and \(Q_{n,i}^{U}\), the upper whisker \(w_{n,i}^{l}\) is calculated as \[w_{n,i}^{l}=Q_{n,i}^{U}+\frac{3}{2}(Q_{n,i}^{U}-Q_{n,i}^{L})\ \ \ \ \ \ \forall i\in\{1,2,3\} \tag{12}\] Since the training data contains only benign traffic, \(\theta_{n}^{l}\) must be selected to classify the training samples as benign traffic. Meanwhile, we should also consider that the training data may include false negative samples. Therefore, we determine \(\theta_{n}^{l}\) to classify the majority, but not all, of the training samples as benign traffic, and we set the value of \(\theta_{n}^{l}\) to the mean of \(\zeta_{n}^{l}\) (i.e. the average number of abnormal statistics) plus two standard deviations of \(\zeta_{n}^{l}\) in \(\mathcal{D}_{n}^{l}\): \[\theta_{n}^{l}=\operatorname*{mean}_{\mathcal{D}_{n}^{l}}(\zeta_{n}^{l})+2\operatorname*{std}_{\mathcal{D}_{n}^{l}}(\zeta_{n}^{l}) \tag{13}\]
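As a concrete illustration of the SWBC training step above, the following sketch computes the whiskers of eq. (12) and the threshold of eq. (13) from the residuals between measured and reconstructed statistics on benign windows. It is a minimal sketch: the array shapes, the variable names and the toy data are illustrative and not prescribed by the paper.

```python
import numpy as np

def fit_swbc(residuals):
    """Compute the SWBC parameters from benign-window residuals (eqs. (12)-(13)).

    residuals: array of shape (num_windows, 3) with |x - x_hat| per statistic,
    computed over the benign training windows D_n^l.
    Returns the per-statistic upper whiskers and the decision threshold theta.
    """
    residuals = np.asarray(residuals, dtype=float)
    q_low = np.percentile(residuals, 25, axis=0)      # lower quartile Q^L
    q_up = np.percentile(residuals, 75, axis=0)       # upper quartile Q^U
    whiskers = q_up + 1.5 * (q_up - q_low)            # eq. (12)
    zeta = np.sum(residuals > whiskers, axis=1)       # abnormal statistics per window
    theta = zeta.mean() + 2.0 * zeta.std()            # eq. (13)
    return whiskers, theta

# Toy benign residuals: 6 windows x 3 statistics (placeholder values).
rng = np.random.default_rng(1)
whiskers, theta = fit_swbc(rng.uniform(0.0, 0.05, size=(6, 3)))
print(whiskers, theta)
```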
### _Decentralized Federated Update_

We now present the Decentralized Federated Update (DFU) algorithm that is performed as the last step of our DOF-ID architecture. In the DFU algorithm, the parameters of node \(n\) are updated using the parameters of other nodes in DOF-ID, whose data is unknown by node \(n\). To this end, at each window \(l\) in this algorithm, node \(n\) performs three main operations: 1) select the set of concurring nodes, denoted by \(\mathcal{C}_{n}^{l}\), that achieve decisions similar to those of node \(n\); 2) update the value of each parameter segment in \(I_{n}^{l}\) using the corresponding segment with the closest value among all nodes in \(\mathcal{C}_{n}^{l}\); 3) recalculate the output layer weights of DRNN via an extreme learning machine in order to fully adapt the updated parameters to the local network traffic. #### Ii-C1 Selecting a Set of Concurring Nodes In the current window \(l\), node \(n\) first selects a set of nodes that concur with it for most of its decisions regarding the local data. In order to select the concurring nodes, node \(n\) evaluates the performance of each node \(m\in\mathcal{N}\setminus n\) on the local data of node \(n\) over all time windows up to the current window \(l\): \[\mathcal{C}_{n}^{l}=\{m:\ \ \frac{1}{l}\sum_{k=1}^{l}\mathbf{1}\big{(}I_{m}^{l}(x_{n}^{k})=y_{n}^{k}\big{)}\geq\Theta,\ \ \forall m\in\mathcal{N}\setminus n\} \tag{14}\] #### Ii-C2 Updating IDS Parameters Using the IDSs of the concurring nodes, the parameters of \(I_{n}^{l}\) are updated separately for each segment of the IDS (such as each DRNN layer, each SWBC whisker, and the SWBC threshold) by averaging each segment with the closest corresponding segment among all the concurring nodes. To this end, first, for each layer \(h\in\{1,\ldots,H-1\}\) of DRNN in \(I_{n}^{l}\), the node \(m_{h}^{*}\) whose connection weights are closest to \(W_{(n,h)}^{l}\) of node \(n\) in window \(l\) is obtained: \[m_{h}^{*}=\operatorname*{arg\,min}_{m\in\mathcal{C}_{n}^{l}}\Bigg{(}\left|\left|W_{(n,h)}^{l}-W_{(m,h)}^{l}\right|\right|_{1}\Bigg{)}. \tag{15}\] Then, the connection weights of this layer, \(W_{(n,h)}^{l}\), are updated as \[W_{(n,h)}^{l}\gets c\,W_{(n,h)}^{l}+(1-c)\,W_{(m_{h}^{*},h)}^{l}, \tag{16}\] where \(0.5\leq c\leq 1\) is a coefficient of weighted averaging that prioritizes the locally learned weights over the federated weights. Similarly, for each whisker \(w_{n,i}^{l}\) of SWBC in \(I_{n}^{l}\), the node \(m_{i}^{*}\) with the whisker value \(w_{m_{i}^{*},i}^{l}\) closest to \(w_{n,i}^{l}\) is obtained among the concurring nodes in window \(l\): \[m_{i}^{*}=\operatorname*{arg\,min}_{m\in\mathcal{C}_{n}^{l}}\Bigg{(}\left|w_{n,i}^{l}-w_{m,i}^{l}\right|\Bigg{)}, \tag{17}\] and each whisker \(w_{n,i}^{l}\) of SWBC in \(I_{n}^{l}\) is updated: \[w_{n,i}^{l}\gets c\,w_{n,i}^{l}+(1-c)\,w_{m_{i}^{*},i}^{l}. \tag{18}\] The decision threshold \(\theta_{n}^{l}\) is also updated as \[\theta_{n}^{l}\gets c\,\theta_{n}^{l}+(1-c)\,\theta_{m_{\theta}^{*}}^{l} \tag{19}\] for \[m_{\theta}^{*}=\operatorname*{arg\,min}_{m\in\mathcal{C}_{n}^{l}}\Bigg{(}\left|\theta_{n}^{l}-\theta_{m}^{l}\right|\Bigg{)}. \tag{20}\] #### Ii-C3 Adapting the Updated IDS to Local Network Traffic Finally, the output layer weights of DRNN in the IDS \(I_{n}^{l}\) are updated to fully adapt \(I_{n}^{l}\) to the local _benign_ network traffic of node \(n\). To this end, (11) is repeated: \[W_{(n,H)}^{l}=(\tilde{X}_{(n,H-1)}^{l})^{+}X_{n}^{l}. \tag{21}\]
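To make the per-segment merge of eqs. (14)-(20) concrete, here is a minimal sketch of one node's DFU step. The dictionary-based parameter layout, the key names and the helper signature are illustrative assumptions, not the paper's implementation; the default values of \(c\) and \(\Theta\) follow the settings reported later in the performance evaluation.

```python
import numpy as np

def dfu_merge(local, peers, agreements, c=0.75, Theta=0.65):
    """One node's Decentralized Federated Update (sketch of Section II-C).

    local:      dict of this node's parameter segments, e.g. {"W_1": ..., "theta": ...};
                the keys are illustrative, not from the paper.
    peers:      list of dicts with the same keys, one per other collaborating node.
    agreements: for each peer, the fraction of past local windows on which its IDS
                decisions agree with the local ones, as used in eq. (14).
    """
    # Eq. (14): keep only the peers that concur with the local decisions often enough.
    concurring = [p for p, a in zip(peers, agreements) if a >= Theta]
    if not concurring:
        return dict(local)
    merged = {}
    for key, value in local.items():
        value = np.asarray(value, dtype=float)
        # Eqs. (15), (17) and (20): the peer whose segment is closest in L1 distance.
        closest = min(concurring,
                      key=lambda p: np.sum(np.abs(np.asarray(p[key]) - value)))
        # Eqs. (16), (18) and (19): weighted average favouring the local parameters.
        merged[key] = c * value + (1 - c) * np.asarray(closest[key], dtype=float)
    # Eq. (21), re-fitting the DRNN output layer on local benign traffic, is omitted here.
    return merged

# Toy usage with two peers and placeholder parameter values.
local = {"W_1": np.full((4, 3), 0.05), "theta": 2.0}
peers = [{"W_1": np.full((4, 3), 0.07), "theta": 3.0},
         {"W_1": np.full((4, 3), 0.20), "theta": 2.5}]
print(dfu_merge(local, peers, agreements=[0.9, 0.4])["theta"])
```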
## III Experimental Results

We now evaluate the performance of the proposed DOF-ID architecture. To this end, from two publicly available datasets, Kitsune [25] and Bot-IoT [26], we use three attack datasets, each of which corresponds to a single node in the DOF-ID architecture. That is, we consider three collaborating nodes, each of which is an IoT network whose data is obtained from a public dataset. We perform the experiments on a computer with 16 GB of RAM and an M1 Pro 8-core 3.2 GHz processor. The performance of DOF-ID is also compared against four benchmark methods. ### _IoT Traffic Datasets and Their Processing_ As the first node in the DOF-ID architecture, we use the "Mirai Botnet" attack data from the Kitsune dataset [25], which is a collection of \(764,137\) individual traffic packets transmitted by \(107\) unique IP addresses within \(7137\) seconds (approximately \(2\) hours). As the second and third nodes in DOF-ID, we use the "DoS HTTP" and "DDoS HTTP" attacks from the Bot-IoT dataset [26]. The DoS HTTP attack data contains \(29,762\) packets transmitted in \(49\) minutes, and the DDoS HTTP attack data contains \(19,826\) packets transmitted in \(42\) minutes. Since these two datasets start with attack traffic, while the presented system requires a cold start with only benign traffic, we use each of them flipped along the time axis. In our experiments, in order to obtain approximately the same number of time windows from each dataset, we set the values of \(T_{n}\) as follows: \(23\) for Mirai, \(9\) for DoS HTTP, and \(8\) for DDoS HTTP. One should also note that both datasets include a binary ground truth \(a(p_{n}^{t})\) for each packet \(p_{n}^{t}\), which is determined by the providers, stating whether the packet is a normal "benign" packet or a "malicious" packet corresponding to an ongoing attack. Accordingly, based on the individual packet ground truths, we determine an overall ground truth, denoted by \(g_{n}^{l}\), for each node \(n\) in each time window \(l\): \[g_{n}^{l}=\mathbf{1}\Bigg{(}\frac{\sum_{p\in P_{n}^{l}}a(p)}{|P_{n}^{l}|}>0.5\Bigg{)} \tag{22}\] ### _Benchmark Methods_ #### Iii-B1 No Federated In this method, the contributions of the other nodes are not considered in the learning of the IDS parameters. This is the conventional training approach, which is equivalent to the local learning procedure of our DOF-ID architecture. #### Iv-B2 Average over All Collaborating Nodes The remaining benchmark methods are used in place of the DFU algorithm to update the IDS parameters after local learning. In this method, called "Average", the parameters of the IDS in node \(n\) (i.e. \(I_{n}^{l}\)) are updated as the average of all connection weights over all collaborating nodes in the DOF-ID architecture. To this end, the connection weights for each layer \(h\) of DRNN in \(I_{n}^{l}\) are updated as \[W_{(n,h)}^{l}\leftarrow\frac{1}{N}\sum_{m\in\mathcal{N}}W_{(m,h)}^{l}, \tag{23}\] Subsequently, the SWBC parameters are also updated in the same way: \[w_{n,i}^{l}\leftarrow\frac{1}{N}\sum_{m\in\mathcal{N}}w_{m,i}^{l}\ \ \forall i,\ \ \text{ and }\ \ \theta_{n}^{l}\leftarrow\frac{1}{N}\sum_{m\in\mathcal{N}}\theta_{m}^{l} \tag{24}\] One should note that the parameters updated using this method become the same for all nodes. That is, at the end of this method, \(I_{n}^{l}=I_{m}^{l},\ \forall n,m\in\mathcal{N}\). #### Iv-B3 Average with Closest Node In this method, called "Average with Closest Node (ACN)", the parameters of \(I_{n}^{l}\) are updated by taking their average with the closest parameters among all nodes in the DOF-ID architecture.
To this end, first, the node \(m^{*}\) that has the closest parameters with node \(n\) at time window \(l\) is obtained: \[m^{*}=\operatorname*{arg\,min}_{m\in\mathcal{N}\setminus n} \Bigg{(}\sum_{h=1}^{H}\Bigl{|}\left|W_{(n,h)}^{l}-W_{(m,h)}^{l} \right|\Bigr{|}_{1} \tag{25}\] \[+\sum_{i=1}^{3}\Bigl{|}w_{n,i}^{l}-w_{m,i}^{l}\Bigr{|}+\left| \theta_{n}^{l}-\theta_{m}^{l}\right|\Bigg{)}\] Then, in time window \(l\), the parameters \(I_{n}^{l}\) are updated taking their average with the parameters of \(I_{m^{*}}^{l}\) as \[W_{(n,h)}^{l}\leftarrow\frac{W_{(n,h)}^{l}+W_{(m^{*},h)}^{l}}{2 },\ \ \forall h \tag{26}\] \[w_{n,i}^{l}\leftarrow\frac{w_{n,i}^{l}+w_{m^{*},i}^{l}}{2},\ \ \forall i \theta_{n}^{l}\leftarrow\frac{\theta_{n}^{l}+\theta_{m^{*}}^{l}}{2}\] #### Iv-B4 Average with Closest Node per Layer In the last benchmark method called "Average with Closest Node per Layer (ACN-L)", the parameters of \(I_{n}^{l}\) are updated for each parameter segment of the IDS (such as a layer of DRNN, a whisker of SWBC, and the threshold of SWBC) individually taking the average with the same part of the closest node. For each layer \(h\) of DRNN in \(I_{n}^{l}\), the connection weights of this layer are updated using (16) for a value of \(m_{h}^{*}\) calculated using (15). Then, each whisker \(w_{n,i}^{l}\) of SWBC in \(I_{n}^{l}\) is updated (18) for a value of \(m_{i}^{*}\) calculated using (17). The decision threshold \(\theta_{n}^{l}\) is updated by subsequently using (20) and (19). ### _Performance Evaluation_ We now present the performance evaluation results of our DOF-ID architecture, where we set \(c=0.75\) and \(\Theta=0.65\). In addition, we used the DRNN model that has \(10\) neurons in each cluster and has the following parameter settings: \(p=0.05\), \(r=0.001\), and \(\lambda^{+}=\lambda^{-}=0.1\). Figure 3 displays the average performance of DOF-ID with respect to Accuracy, True Positive Rate (TPR), and True Negative Rate (TNR). The results in this figure show that each node (i.e. Mirai, DoS HTTP and DDoS HTTP) achieves above \(0.86\) detection performance with respect to all metrics. One may also see that although the nodes suffer from some false positive alarms (shown by the TNR metric), all nodes detect local intrusions with a considerably high performance (shown by the TPR metric). We also compare the performance of DOF-ID with benchmark methods in Figure 4. The results in Figure 4 (top) show that the proposed method has the best accuracy among all methods compared. Another important observation of this figure is the poor performance of the averaging over all collaborating nodes. This is an expected result as network traffic across nodes varies considerably. The evaluation results further show that FL-based methods (i.e. DOF-ID, Average, ACN, and ACN-L) significantly improve the detection performance measured by TPR in Figure 4 (middle), while they mostly tend to raise more false positive alarms compared to local learning as shown in Figure 4 (bottom). On the other hand, the proposed DOF-ID method appears to have a small decrease in TNR (i.e. a slight increase in false alarms) but a significant improvement in the detection rate, TPR. Finally, we measure the training time of the proposed and compared methods. We especially measure the time required for federated update and present it in Table I since the local learning time is the same (with negligible random deviations) for all models, which equals \(19.2\ ms\) on average. 
The federated update time measurements in Table I show that the time spent by DOF-ID in addition to local learning is about \(30\ ms\) for each node. This time is significantly larger than other methods as its operations are more advanced and detailed. On the other hand, a method is considered to be acceptable for a real-time application as long as the total operation time spent on local and federated learning and detection is shorter than the window length \(T_{n}\). For the DOF-ID architecture, the total operation time is \(48.91\ ms\) on average as the sum of local Fig. 3: Performance of the DOF-ID architecture for each node among Mirai, DoS HTTP, and DDoS HTTP with respect to Accuracy, TPR, and TNR learning time of \(19.2\ ms\), federated learning time of \(29.6\ ms\), and detection time of \(0.11\ ms\). ## IV Conclusions This paper proposed a novel Decentralized and Online Federated Learning Intrusion Detection (DOF-ID) architecture to improve the detection performance of anomaly-based IDS using a DRNN model and SWBC decision maker, both of which learn using only normal "benign" network traffic. The presented DOF-ID architecture provides a collaborative learning system that enables each node to learn from the experiences of other collaborating nodes without violating data confidentiality. In this way, DOF-ID improves both local and global security levels of all collaborating nodes simultaneously, quickly and effectively eliminating the requirement for a large learning data. This paper also evaluates the performance of DOF-ID and compares it against the benchmark methods using two public well-known datasets, Kitsune and Bot-IoT. During the performance evaluation, the impacts of FL on intrusion detection performance are also investigated. Our experimental results revealed that the proposed DOF-ID method significantly improves the detection performance with a small increase in false positive alarms compared to the same IDS structure learning only from local traffic. In addition, the proposed method has significantly superior performance (at least \(15\%\) accuracy difference) over benchmark methods with higher computation time. Future work shall primarily expand the experimental setup and evaluate the performance of the proposed DOF-ID architecture for large networked systems such as smart grids or large IoT networks. It would also be interesting to address the performance and security issues regarding the parameter exchange within the proposed DOF-ID architecture. Accordingly, we shall analyse the time, bandwidth and energy requirements of this architecture due to P2P parameter exchange and investigate the security breaches that may aim to leak or corrupt IDS parameters during their transfer. Another important issue that will have to be considered is that FL itself may come under attack in a distributed system of systems [27], so that this aspect will also require further research and attention.
2302.08840
Learnable Topological Features for Phylogenetic Inference via Graph Neural Networks
Structural information of phylogenetic tree topologies plays an important role in phylogenetic inference. However, finding appropriate topological structures for specific phylogenetic inference tasks often requires significant design effort and domain expertise. In this paper, we propose a novel structural representation method for phylogenetic inference based on learnable topological features. By combining the raw node features that minimize the Dirichlet energy with modern graph representation learning techniques, our learnable topological features can provide efficient structural information of phylogenetic trees that automatically adapts to different downstream tasks without requiring domain expertise. We demonstrate the effectiveness and efficiency of our method on a simulated data tree probability estimation task and a benchmark of challenging real data variational Bayesian phylogenetic inference problems.
Cheng Zhang
2023-02-17T12:26:03Z
http://arxiv.org/abs/2302.08840v1
# Learnable Topological Features for Phylogenetic Inference via Graph Neural Networks ###### Abstract Structural information of phylogenetic tree topologies plays an important role in phylogenetic inference. However, finding appropriate topological structures for specific phylogenetic inference tasks often requires significant design effort and domain expertise. In this paper, we propose a novel structural representation method for phylogenetic inference based on learnable topological features. By combining the raw node features that minimize the Dirichlet energy with modern graph representation learning techniques, our learnable topological features can provide efficient structural information of phylogenetic trees that automatically adapts to different downstream tasks without requiring domain expertise. We demonstrate the effectiveness and efficiency of our method on a simulated data tree probability estimation task and a benchmark of challenging real data variational Bayesian phylogenetic inference problems. ## 1 Introduction Phylogenetics is an important discipline of computational biology where the goal is to identify the evolutionary history and relationships among individuals or groups of biological entities. In statistical approaches to phylogenetics, this has been formulated as an inference problem on hypotheses of shared history, i.e., _phylogenetic trees_, based on observed sequence data (e.g., DNA, RNA, or protein sequences) under a model of evolution. The phylogenetic tree defines a probabilistic graphical model, based on which the likelihood of the observed sequences can be efficiently computed (Felsenstein, 2003). Many statistical inference procedures therefore can be applied, including maximum likelihood and Bayesian approaches (Felsenstein, 1981; Yang & Rannala, 1997; Mau et al., 1999; Huelsenbeck et al., 2001). Phylogenetic inference, however, has been challenging due to the composite parameter space of both continuous and discrete components (i.e., branch lengths and the tree topology) and the combinatorial explosion in the number of tree topologies with the number of sequences. Harnessing the topological information of trees hence becomes crucial in the development of efficient phylogenetic inference algorithms. For example, by assuming conditional independence of separated subtrees, Larget (2013) showed that conditional clade distributions (CCDs) can provide more reliable tree probability estimation that generalizes beyond observed samples. A similar approach was proposed to design more efficient proposals for tree movement when implementing Markov chain Monte Carlo (MCMC) algorithms for Bayesian phylogenetics (Hohna & Drummond, 2012). Utilizing more sophisticated local topological structures, CCDs were later generalized to subsplit Bayesian networks (SBNs) that provide more flexible distributions over tree topologies (Zhang & Matsen IV, 2018). Besides MCMC, variational Bayesian phylogenetics inference (VBPI) was recently proposed that leveraged SBNs and a structured amortization of branch lengths to deliver competitive posterior estimates in a more timely manner (Zhang & Matsen IV, 2019; Zhang, 2020; Zhang & Matsen IV, 2022). Azouri et al. (2021) used a machine learning approach to accelerate maximum likelihood tree-search algorithms by providing more informative topology moves. Topological features have also been found useful for comparison and interpretation of the reconstructed phylogenies (Matsen IV, 2007; Hayati et al., 2022). 
While these approaches prove effective in practice, they all rely on heuristic features (e.g., clades and subsplits) of phylogenetic trees that often require significant design effort and domain expertise, and may be insufficient for capturing complicated topological information. Graph Neural Networks (GNNs) are an effective framework for learning representations of graph-structured data. To encode the structural information about graphs, GNNs follow a neighborhood aggregation procedure that computes the representation vector of a node by recursively aggregating and transforming representation vectors of its neighboring nodes. After the final iteration of aggregation, the representation of the entire graph can also be obtained by pooling all the node embeddings together via some permutation invariant operators (Ying et al., 2018). Many GNN variants have been proposed and have achieved superior performance on both node-level and graph-level representation learning tasks (Kipf and Welling, 2017; Hamilton et al., 2017; Li et al., 2016; Zhang et al., 2018; Ying et al., 2018). A natural idea, therefore, is to adapt GNNs to phylogenetic models for automatic topological feature learning. However, the lack of node features for phylogenetic trees makes it challenging, as most GNN variants assume fully observed node features at initialization. In this paper, we propose a novel structural representation method for phylogenetic inference that automatically learns efficient topological features based on GNNs. To obtain the initial node features for phylogenetic trees, we follow previous studies (Zhu and Ghahramani, 2002; Rossi et al., 2021) to minimize the Dirichlet energy, with one hot encoding for the tip nodes. Unlike these previous studies, we present a fast linear time algorithm for Dirichlet energy minimization by taking advantage of the hierarchical structure of phylogenetic trees. Moreover, we prove that these features are sufficient for identifying the corresponding tree topology, i.e., there is no information loss in our raw feature representations of phylogenetic trees. These raw node features are then passed to GNNs for more sophisticated structure representation learning required by downstream tasks. Experiments on a synthetic data tree probability estimation problem and a benchmark of challenging real data variational Bayesian phylogenetic inference problems demonstrate the effectiveness and efficiency of our method.

## 2 Background

**Notation.** A phylogenetic tree is denoted as \((\tau,\mathbf{q})\) where \(\tau\) is a bifurcating tree that represents the evolutionary relationship of the species and \(\mathbf{q}\) is a non-negative branch length vector that characterizes the amount of evolution along the edges of \(\tau\). The tip nodes of \(\tau\) correspond to the observed species and the internal nodes of \(\tau\) represent the unobserved characters (e.g., DNA bases) of the ancestral species. The transition probability \(P_{ij}(t)\) from character \(i\) to character \(j\) along an edge of length \(t\) is often defined by a continuous-time substitution model (e.g., Jukes and Cantor (1969)), whose stationary distribution is denoted as \(\eta\). Let \(E(\tau)\) be the set of edges of \(\tau\), \(r\) be the root node (or any internal node if the tree is unrooted and the substitution model is reversible). Let \(\mathbf{Y}=\{Y_{1},Y_{2},\ldots,Y_{M}\}\in\Omega^{N\times M}\) be the observed sequences (with characters in \(\Omega\)) of length \(M\) over \(N\) species.
Phylogenetic posteriorAssuming different sites \(Y_{i},i=1,\ldots,M\) are independent and identically distributed, the likelihood of observing \(\mathbf{Y}\) given the phylogenetic tree \((\tau,\mathbf{q})\) takes the form \[p(\mathbf{Y}|\tau,\mathbf{q})=\prod_{i=1}^{M}p(Y_{i}|\tau,\mathbf{q})=\prod_{i=1}^{M}\sum _{a^{i}}\eta(a^{i}_{r})\prod_{(u,v)E(\tau)}P_{a^{i}_{u}a^{i}_{v}}(q_{uv}), \tag{1}\] where \(a^{i}\) ranges over all extensions of \(Y_{i}\) to the internal nodes with \(a^{i}_{u}\) being the assigned character of node \(u\). The above phylogenetic likelihood function can be computed efficiently through the pruning algorithm (Felsenstein, 2003). Given a prior distribution \(p(\tau,\mathbf{q})\) of the tree topology and the branch lengths, Bayesian phylogenetics then amounts to properly estimating the phylogenetic posterior \(p(\tau,\mathbf{q}|\mathbf{Y})\propto p(\mathbf{Y}|\tau,\mathbf{q})p(\tau,\mathbf{q})\). Variational Bayesian phylogenetic inferenceLet \(Q_{\mathbf{\phi}}(\tau)\) be an SBN-based distribution over the tree topologies and \(Q_{\mathbf{\psi}}(\mathbf{q}|\tau)\) be a non-negative distribution over the branch lengths. VBPI finds the best approximation to \(p(\tau,\mathbf{q}|\mathbf{Y})\) from the family of products of \(Q_{\mathbf{\phi}}(\tau)\) and \(Q_{\mathbf{\psi}}(\mathbf{q}|\tau)\) by maximizing the following multi-sample lower bound \[L^{K}(\mathbf{\phi},\mathbf{\psi})=\mathbb{E}_{Q_{\mathbf{\phi},\mathbf{\psi}}(\tau^{1:K}, \mathbf{q}^{1:K})}\log\left(\frac{1}{K}\sum_{i=1}^{K}\frac{p(\mathbf{Y}|\tau^{i},\mathbf{ q}^{i})p(\tau^{i},\mathbf{q}^{i})}{Q_{\mathbf{\phi}}(\tau^{i})Q_{\mathbf{\psi}}(\mathbf{q}^{i}| \tau^{i})}\right)\leq\log p(\mathbf{Y}) \tag{2}\] where \(Q_{\mathbf{\phi},\mathbf{\psi}}(\tau^{1:K},\mathbf{q}^{1:K})=\prod_{i=1}^{K}Q_{\mathbf{\phi}}( \tau^{i})Q_{\mathbf{\psi}}(\mathbf{q}^{i}|\tau^{i})\). To properly parameterize the variational distributions, a support of the conditional probability tables (CPTs) is often acquired from a sample of tree topologies via fast heuristic bootstrap methods (Minh et al., 2013; Zhang & Matsen IV, 2019). The branch length approximation \(Q_{\mathbf{\psi}}(\mathbf{q}|\tau)\) is taken to be the diagonal Lognormal distribution \[Q_{\mathbf{\psi}}(\mathbf{q}|\tau)=\prod\nolimits_{e\in E(\tau)}p^{\mathrm{Lognormal}} \left(q_{e}\mid\mu(e,\tau),\sigma(e,\tau)\right)\] where \(\mu(e,\tau),\sigma(e,\tau)\) are amortized over the tree topology space via shared local structures (i.e., split and primary subsplit pairs (PSPs)), which are available from the support of CPTs. More details about structured amortization, VBPI and SBNs can be found in section 3.2.2 and Appendix A. Graph neural networksLet \(G=(V,E)\) denote a graph with node feature vectors \(\mathbf{X}_{v}\) for node \(v\in V\), and \(\mathcal{N}(v)\) denote the set of nodes adjacent to \(v\). GNNs iteratively update the representation of a node by running a message passing (MP) scheme for \(T\) time steps. 
During each MP time step, the representation vectors of each node are updated based on the aggregated messages from its neighbors as follows \[\mathbf{h}_{v}^{(t+1)}=\mathrm{UPDATE}^{(t)}\left(\mathbf{h}_{v}^{(t)},\mathbf{m}_{v}^{(t +1)}\right),\quad\mathbf{m}_{v}^{(t+1)}=\mathrm{AGG}^{(t)}\left(\left\{\mathbf{h}_{u} ^{(t)}:u\in\mathcal{N}(v)\right\}\right)\] where \(\mathbf{h}_{v}^{(t)}\) is the feature vector of node \(v\) at time step \(t\), with initialization \(\mathbf{h}_{v}^{(0)}=\mathbf{X}_{v}\), \(\mathrm{UPDATE}^{(t)}\) is the update function, and \(\mathrm{AGG}^{(t)}\) is the aggregation function. A number of powerful GNNs with different implementations of the update and aggregation functions have been proposed (Kipf & Welling, 2017; Hamilton et al., 2017; Li et al., 2016; Velickovic et al., 2018; Xu et al., 2019; Wang et al., 2019). In additional to the local node-level features, GNNs can also provide features for the entire graph. To learn these global features, an additional \(\mathrm{READOUT}\) function is often introduced to aggregate node features from the final iteration \[\mathbf{h}_{G}=\mathrm{READOUT}\left(\left\{\mathbf{h}_{v}^{(T)}:v\in V\right\} \right).\] \(\mathrm{READOUT}\) can be any function that is permutation invariant to the node features. ## 3 Proposed Method In this section, we propose a general approach that automatically learns topological features directly from phylogenetic trees. We first introduce a simple embedding method that provides raw features for the nodes of phylogenetic trees, together with an efficient linear time algorithm for obtaining these raw features and a discussion on some of their theoretical properties regarding tree topology representation. We then describe how these raw features can be adapted to learn efficient representations of certain structures of trees (e.g., edges) for downstream tasks. ### Interior Node Embedding Learning tree structure features directly from tree topologies often requires raw node/edge features, as typically assumed in most GNN models. Unfortunately, this is not the case for phylogenetic models. Figure 1: An overview of the proposed topological feature learning framework for phylogenetic inference. **Left**: A phylogenetic tree topology with one hot encoding for the tip nodes and missing features for the interior nodes. **Middle**: Interior node embedding via Dirichlet energy minimization. **Right**: Subsequently, the tree topology with embedded node features are fed into a GNN model for more sophisticated tree structure representation learning required by downstream tasks. Although we can use one hot encoding for the tip nodes according to their corresponding species (taxa names only, not the sequences), the interior nodes still lack original features. The first step of tree structure representation learning for phylogenetic models, therefore, is to properly input those missing features for the interior nodes. Following previous studies (Zhu & Ghahramani, 2002; Rossi et al., 2021), we make a common assumption that the node features change smoothly across the tree topologies (i.e., the features of every node are similar to those of the neighbors). A widely used criterion of smoothness for functions defined on nodes of a graph is the _Dirichlet energy_. 
Given a tree topology \(\tau=(V,E)\) and a function \(f:V\mapsto\mathbb{R}^{d}\), the Dirichlet energy is defined as \[\ell(f,\tau)=\sum_{(u,v)\in E}\|f(u)-f(v)\|^{2}.\] Let \(V=V^{b}\cup V^{o}\), where \(V^{b}\) denotes the set of leaf nodes and \(V^{o}\) denotes the set of interior nodes. Let \(\mathbf{X}^{b}=\{\mathbf{x}_{v}|v\in V^{b}\}\) be the set of one hot embeddings for the leaf nodes. The interior node features \(\mathbf{X}^{o}=\{\mathbf{x}_{v}|v\in V^{o}\}\) then can be obtained by minimizing the Dirichlet energy \[\widehat{\mathbf{X}^{o}}=\mathop{\arg\min}_{\mathbf{X}^{o}}\ell(\mathbf{X}^{o},\mathbf{X}^{b},\tau)=\mathop{\arg\min}_{\mathbf{X}^{o}}\sum_{(u,v)\in E}\|\mathbf{x}_{u}-\mathbf{x}_{v}\|^{2}.\]

#### 3.1.1 A Linear Time Two-pass Algorithm

Note that the above Dirichlet energy function is convex; its minimizer therefore can be obtained by solving the following optimality condition \[\frac{\partial\ell(\mathbf{X}^{o},\mathbf{X}^{b},\tau)}{\partial\mathbf{X}^{o}}(\widehat{\mathbf{X}^{o}})=\mathbf{0}. \tag{3}\] It turns out that equation 3 has a closed-form solution based on matrix inversion. However, as matrix inversion scales cubically in general, it is infeasible for graphs with many nodes. Fortunately, by leveraging the hierarchical structure of phylogenetic trees, we can design a more efficient linear time algorithm for the solution of equation 3 as follows. We first rewrite equation 3 as a system of linear equations \[\sum\nolimits_{v\in\mathcal{N}(u)}(\widehat{\mathbf{x}}_{u}-\widehat{\mathbf{x}}_{v})=\mathbf{0},\quad\forall u\in V^{o},\qquad\widehat{\mathbf{x}}_{v}=\mathbf{x}_{v},\quad\forall v\in V^{b}, \tag{4}\] where \(\mathcal{N}(u)\) is the set of neighbors of node \(u\). Given a topological ordering induced by the tree1, we can obtain the solution within a two-pass sweep through the tree topology, similar to the Thomas algorithm for solving tridiagonal systems of linear equations (Thomas, 1949). In the first pass, we traverse the tree in a postorder fashion and express the node features as a linear function of those of their parents, Footnote 1: This is trivial for rooted trees since they are directed. For unrooted trees, we can choose an interior node as the root node and use the topological ordering of the corresponding rooted trees. \[\widehat{\mathbf{x}}_{u}=c_{u}\widehat{\mathbf{x}}_{\pi_{u}}+\mathbf{d}_{u}, \tag{5}\] for all the nodes except the root node, where \(\pi_{u}\) denotes the parent node of \(u\). More specifically, we first initialize \(c_{u}=0,\mathbf{d}_{u}=\mathbf{x}_{u}\) for all leaf nodes \(u\in V^{b}\). For all the interior nodes except the root node, we compute \(c_{u},\mathbf{d}_{u}\) recursively as follows (see a detailed derivation in Appendix B) \[c_{u}=\frac{1}{|\mathcal{N}(u)|-\sum_{v\in\mathrm{ch}(u)}c_{v}},\quad\mathbf{d}_{u}=\frac{\sum_{v\in\mathrm{ch}(u)}\mathbf{d}_{v}}{|\mathcal{N}(u)|-\sum_{v\in\mathrm{ch}(u)}c_{v}}, \tag{6}\] where \(\mathrm{ch}(u)\) denotes the set of child nodes of \(u\). In the second pass, we traverse the tree in a preorder fashion and compute the solution by back substitution. Concretely, at the root node \(r\), given equation 5 for all the child nodes from the first pass, we can compute the node feature directly from equation 4 as below \[\widehat{\mathbf{x}}_{r}=\frac{\sum_{v\in\mathrm{ch}(r)}\mathbf{d}_{v}}{|\mathcal{N}(r)|-\sum_{v\in\mathrm{ch}(r)}c_{v}}. \tag{7}\]
For all the other interior nodes, the node features can be obtained via equation 5 by substituting the learned features for the parent nodes. We summarize our two-pass algorithm in Algorithm 1. Moreover, the algorithm is numerically stable due to the following lemma (proof in Appendix C).

**Lemma 1**.: _Let \(\lambda=\min_{u\in V^{o}\setminus\{r\}}|\mathcal{N}(u)|\). For all interior node \(u\in V^{o}\setminus\{r\}\), \(0\leq c_{u}\leq\frac{1}{\lambda-1}\)._

Besides bifurcating phylogenetic trees, the above two-pass algorithm can be easily adapted to interior node embedding for general tree-shaped graphs with given tip node features.

#### 3.1.2 Tree Topology Representation Power

In this section, we discuss some theoretical properties regarding the tree topology representation power of the node features introduced above. We start with a useful lemma that elucidates an important behavior of the solution to the linear system 4, which is similar to the solutions to elliptic equations.

**Lemma 2** (Extremum Principle).: _Let \(\{\widehat{\mathbf{x}}_{u}\in\mathbb{R}^{d}|u\in V\}\) be a set of \(d\)-dimensional node features that satisfies equations 4. \(\forall 1\leq n\leq d\), let \(\widehat{\mathbf{X}}[n]=\{\widehat{\mathbf{x}}_{u}[n]|u\in V\}\) be the set of the \(n\)-th components of node features. Then, \(\forall 1\leq n\leq d\), we have: (i) the extremum values (i.e., maximum and minimum) of \(\widehat{\mathbf{X}}[n]\) can be achieved at some tip nodes; (ii) if the extremum values are achieved at some interior nodes, then \(\widehat{\mathbf{X}}[n]\) has only one member, or in other words, \(\widehat{\mathbf{x}}_{u}[n]\) is the same \(\forall u\in V\)._

**Theorem 1**.: _Let \(N\) be the number of tip nodes. Let \(\{\widehat{\mathbf{x}}_{u}\in\mathbb{R}^{N}|u\in V\}\) be the solution to the linear system 4 with one hot encoding for the tip nodes. Then, \(\forall u\in V^{o}\), we have_ \[(i)\ 0<\widehat{\mathbf{x}}_{u}[n]<1,\quad\forall 1\leq n\leq N,\quad\text{and}\quad(ii)\ \sum\nolimits_{n=1}^{N}\widehat{\mathbf{x}}_{u}[n]=1.\]

The complete proofs of Lemma 2 and Theorem 1 are provided in Appendix C. When the tip node features are linearly independent, a similar proposition holds when we consider the coefficients of the linear combination of the tip node features for the interior node features instead.

**Corollary 1**.: _Suppose that the tip node features are linearly independent, the interior node features obtained from the solution to the linear system 4 all lie in the interior of the convex hull of all tip node features._

The proof is provided in Appendix C. The following lemma reveals a key property of the nodes that are adjacent to the boundary of the tree topology in the embedded feature space.

**Lemma 3**.: _Let \(\{\widehat{\mathbf{x}}_{u}|u\in V\}\) be the solution to the linear system 4, with linearly independent tip node features. Let \(\{\widehat{\mathbf{x}}_{u}=\sum_{v\in V^{b}}a^{u}_{v}\mathbf{x}_{v}|u\in V^{o}\}\) be the convex combination representations of the interior node features.
For any tip node \(v\in V^{b}\), we have_ \[u^{*}=\arg\max_{u\in V^{o}}a^{u}_{v}\quad\Leftrightarrow\quad u^{*}\in\mathcal{N}(v).\]

**Theorem 2** (Identifiability).: _Let \(\mathbf{X}^{o}=\{\widehat{\mathbf{x}}_{u}|u\in V^{o}\}\) and \(\mathbf{Z}^{o}=\{\widehat{\mathbf{z}}_{u}|u\in V^{o}\}\) be the sets of interior node features that minimizes the Dirichlet energy for phylogenetic tree topologies \(\tau_{x}\) and \(\tau_{z}\) respectively, given the same linearly independent tip node features. If \(\mathbf{X}^{o}=\mathbf{Z}^{o}\), then \(\tau_{x}=\tau_{z}\)._

The proofs of Lemma 3 and Theorem 2 are provided in Appendix C. By Theorem 2, we see that the proposed node embeddings are complete representations of phylogenetic tree topologies with no information loss.

### 3.2 Structural Representation Learning via Graph Neural Networks

Using node embeddings introduced in section 3.1 as raw features, we now show how to learn more sophisticated representations of tree structures for different phylogenetic inference tasks via GNNs. Given a tree topology \(\tau\), let \(\{\mathbf{h}^{(0)}_{v}:v\in V\}\) be the raw features and \(\{\mathbf{h}^{(T)}_{v}:v\in V\}\) be the output features after the final iteration of GNNs. We feed these output features of GNNs into a multi-layer perceptron (MLP) to get a set of learnable features for each node \[\mathbf{h}_{v}=\operatorname{MLP}^{(0)}\left(\mathbf{h}^{(T)}_{v}\right),\quad\forall\ v\in V,\] before adapting to different downstream tasks, as demonstrated in the following examples.

#### 3.2.1 Energy Based Models for Tree Probability Estimation

Our first example is on graph-level representation learning of phylogenetic tree topologies. Let \(\mathcal{T}\) denote the entire tree topology space. Given learnable node features of tree topologies, one can use a permutation invariant function \(g\) to obtain graph-level features and hence create an energy function \(F_{\mathbf{\phi}}:\mathcal{T}\mapsto\mathbb{R}\) that assigns each tree topology a scalar value as follows \[F_{\mathbf{\phi}}(\tau)=\mathrm{MLP}^{(1)}(\mathbf{h}_{G}),\quad\mathbf{h}_{G}=g\left(\{\mathbf{h}_{v}:v\in V\}\right),\] where \(g\circ\mathrm{MLP}^{(0)}\) can be viewed as a \(\mathrm{READOUT}\) function in section 2. This allows us to construct energy based models (EBMs) for tree probability estimation \[q_{\mathbf{\phi}}(\tau)=\frac{\exp\left(-F_{\mathbf{\phi}}(\tau)\right)}{Z(\mathbf{\phi})},\quad Z(\mathbf{\phi})=\sum\nolimits_{\tau\in\mathcal{T}}\exp\left(-F_{\mathbf{\phi}}(\tau)\right).\] As \(Z(\mathbf{\phi})\) is usually intractable, we can employ noise contrastive estimation (NCE) (Gutmann and Hyvarinen, 2010) to train these energy based models. Let \(p_{n}\) be some noise distribution that has tractable density and allows efficient sampling procedures. Let \(D_{\mathbf{\phi}}(\tau)=\log q_{\mathbf{\phi}}(\tau)-\log p_{n}(\tau).\) We can train \(D_{\mathbf{\phi}}\)2 to minimize the following objective function (NCE loss) Footnote 2: Here \(Z(\mathbf{\phi})\) is taken as a free parameter and is included into \(\mathbf{\phi}\). \[J(\mathbf{\phi})=-\left(\mathbb{E}_{\tau\sim p_{\mathrm{data}}(\tau)}\log\left(S\left(D_{\mathbf{\phi}}(\tau)\right)\right)+\mathbb{E}_{\tau\sim p_{n}(\tau)}\log\left(1-S\left(D_{\mathbf{\phi}}(\tau)\right)\right)\right),\] where \(S(x)=\frac{1}{1+\exp(-x)}\) is the sigmoid function.
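As a concrete reference, here is a minimal PyTorch sketch of this NCE objective, written in terms of the energies \(F_{\mathbf{\phi}}(\tau)\) and treating \(\log Z(\mathbf{\phi})\) as a free parameter as in footnote 2. The function signature and batching are illustrative assumptions, not the authors' implementation.

```python
import torch.nn.functional as F

def nce_loss(energy_data, energy_noise, log_pn_data, log_pn_noise, log_Z):
    """Monte Carlo estimate of the NCE objective J(phi).

    energy_data / energy_noise: F_phi(tau) for trees sampled from p_data and from p_n;
    log_pn_data / log_pn_noise: log p_n(tau) for the same trees; log_Z: learnable scalar.
    """
    d_data = -energy_data - log_Z - log_pn_data      # D_phi(tau) = log q_phi(tau) - log p_n(tau)
    d_noise = -energy_noise - log_Z - log_pn_noise
    # J(phi) = -E_data[log S(D_phi)] - E_noise[log(1 - S(D_phi))], using log(1 - S(x)) = logsigmoid(-x)
    return -(F.logsigmoid(d_data).mean() + F.logsigmoid(-d_noise).mean())
```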
It is easy to verify that the minimum of \(J(\mathbf{\phi})\) is achieved at \(D_{\mathbf{\phi}^{*}}(\tau)=\log p_{\mathrm{data}}(\tau)-\log p_{n}(\tau).\) Therefore, \(q_{\mathbf{\phi}^{*}}(\tau)=p_{\mathrm{data}}(\tau)=p_{n}(\tau)\exp\left(D_{\mathbf{\phi}^{*}}(\tau)\right)\).

#### 3.2.2 Branch Length Parameterization for VBPI

The branch length parameterization in VBPI so far has relied on hand-engineered features (i.e., splits and PSPs) for the edges on tree topologies. Let \(\mathbb{S}_{\mathrm{r}}\) denote the set of splits and \(\mathbb{S}_{\mathrm{psp}}\) denote the set of PSPs. The simple split-based parameterization assigns parameters \(\mathbf{\psi}^{\mu},\mathbf{\psi}^{\sigma}\) for splits in \(\mathbb{S}_{\mathrm{r}}\). The mean and standard deviation for each edge \(e\) on \(\tau\) are then given by the associated parameters of the corresponding split \(e/\tau\) as follows \[\mu(e,\tau)=\psi_{e/\tau}^{\mu},\quad\sigma(e,\tau)=\psi_{e/\tau}^{\sigma}. \tag{8}\] The more flexible PSP parameterization assigns additional parameters for PSPs in \(\mathbb{S}_{\mathrm{psp}}\) and adds the associated parameters of the corresponding PSPs \(e/\tau\) to equation 8 to refine the mean and standard deviation parameterization \[\mu(e,\tau)=\psi_{e/\tau}^{\mu}+\sum\nolimits_{s\in e/\tau}\psi_{s}^{\mu},\ \ \sigma(e,\tau)=\psi_{e/\tau}^{\sigma}+\sum\nolimits_{s\in e/\tau}\psi_{s}^{\sigma}. \tag{9}\] Although these heuristic features prove effective, they often require substantial design effort and a sample of tree topologies for feature collection, and cannot adapt themselves during training, which makes it difficult for amortized inference over different tree topologies. Based on the learnable node features, we can design a more flexible branch length parameterization that is capable of distilling more effective structural information of tree topologies for variational approximations. For each edge \(e=(u,v)\) on \(\tau\), similarly as in section 3.2.1, one can use a permutation invariant function \(f\) to obtain edge-level features and transform them into the mean and standard deviation parameters as follows \[\mu(e,\tau)=\mathrm{MLP}^{\mu}\left(\mathbf{h}_{e}\right),\quad\sigma(e,\tau)=\mathrm{MLP}^{\sigma}\left(\mathbf{h}_{e}\right),\quad\mathbf{h}_{e}=f\left(\{\mathbf{h}_{u},\mathbf{h}_{v}\}\right). \tag{10}\] Compared to heuristic feature based parameterizations in equations 8 and 9, learnable topological feature based parameterizations in equation 10 allow much richer designs for the branch length distributions across different tree topologies and do not require pre-sampled tree topologies for feature collection.

## 4 Experiments

In this section, we test the effectiveness and efficiency of learnable topological features for phylogenetic inference on the two aforementioned benchmark tasks: tree probability estimation via energy based models and branch length parameterization for VBPI. Following Zhang and Matsen (2019), in VBPI we used the simplest SBN for the tree topology variational distribution, and the CPT supports were estimated from ultrafast maximum likelihood phylogenetic bootstrap trees using UFBoot (Minh et al., 2013). The code is available at [https://github.com/zcrabbit/vbpi-gnn](https://github.com/zcrabbit/vbpi-gnn).

**Experimental setup.** We evaluate five commonly used GNN variants with the following convolution operators: graph convolution networks (GCN), graph isomorphism operator (GIN), GraphSAGE operator (SAGE), gated graph convolution operator (GGNN) and edge convolution operator (EDGE).
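Before the remaining experimental details, here is a minimal PyTorch sketch of the edge-based parameterization in equation 10. It uses an elementwise maximum as the permutation invariant function \(f\) (the aggregation reported for edge-level features in the setup below); the hidden layer size is an illustrative assumption rather than the configuration used in the paper.

```python
import torch
import torch.nn as nn

class BranchLengthParam(nn.Module):
    """Sketch of equation 10: edge-level features from the two endpoint nodes,
    followed by separate MLP heads for the Lognormal mean and standard deviation."""

    def __init__(self, dim, hidden=100):
        super().__init__()
        self.mlp_mu = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.mlp_sigma = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, h, edges):
        # h: (num_nodes, dim) learnable node features h_v; edges: (num_edges, 2) endpoint indices (u, v)
        h_e = torch.max(h[edges[:, 0]], h[edges[:, 1]])   # permutation invariant f: elementwise max
        return self.mlp_mu(h_e).squeeze(-1), self.mlp_sigma(h_e).squeeze(-1)
```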
See more details about these convolution operators in Appendix F. In addition to the above GNN variants, we also considered a simpler model that skips all GNN iterations (i.e., \(T=0\)) and referred to it as MLP in the sequel. All GNN variants have 2 GNN layers (including the input layer), and all involved MLPs have 2 layers. We used summation as our permutation invariant aggregation function for graph-level features and maximization for edge-level features. All models were implemented in Pytorch (Paszke et al., 2019) with the Adam optimizer (Kingma & Ba, 2015). We designed our experiments with the goals of (i) verifying the effectiveness of GNN-based EBMs for tree topology estimation and (ii) verifying the improvement of GNN-based branch length parameterization for VBPI over the baseline approaches (i.e., split and PSP based parameterizations) and investigating how helpful the learnable topological features are for reducing the amortization gaps.

### 4.1 Simulated Data: Tree Probability Estimation

We first investigated the representative power of learnable topological features for approximating distributions on phylogenetic trees using energy based models (EBMs), and conducted experiments on a simulated data set. We used the space of unrooted phylogenetic trees with 8 leaves, which contains 10395 unique trees in total. Similarly as in Zhang & Matsen IV (2019), we generated a target distribution \(p_{0}(\tau)\) by drawing a sample from the symmetric Dirichlet distribution \(\mathrm{Dir}(\beta 1)\) of order 10395 with a pre-selected arbitrary order of trees. The concentration parameter \(\beta\) is used to control the diffuseness of the target distribution and was set to 0.008 to provide enough information for inference while allowing for adequate diffusion in the target. As mentioned earlier in section 3.2.1, we used noise contrastive estimation (NCE) to train our EBMs where we set the noise distribution \(p_{n}(\tau)\) to be the uniform distribution. Results were collected after 200,000 parameter updates. Note that the minimum NCE loss in this case is \[J^{*}=-2\mathrm{JSD}\left(p_{0}(\tau)\|p_{n}(\tau)\right)+2\log 2,\] where \(\mathrm{JSD}(\cdot\|\cdot)\) is the Jensen-Shannon divergence.

Figure 2: Comparison of learnable topological feature based EBMs for probability mass estimation of unrooted phylogenetic trees with 8 leaves using NCE. **Left:** NCE loss. **Middle:** KL divergence. **Right:** EBM approximations vs ground truth probabilities. The NCE loss and KL divergence results were obtained from 10 independent runs and the error bars represent one standard deviation.

Figure 2 shows the empirical performance of different methods. From the left plot, we see that the NCE losses converge rapidly and the gaps between NCE losses for the GNN variants and the best NCE loss \(J^{*}\) (dashed red line) are close to zero, demonstrating the representative power of learnable topological features on phylogenetic tree probability estimations. The evolution of KL divergences (middle plot) is consistent with the NCE losses. Compared to MLP, all GNN variants perform better, indicating that the extra flexibility provided by GNN iterations is crucial for tree probability estimation that would benefit from more informative graph-level features. Although the raw features from interior node embedding contain all information of phylogenetic tree topologies, we see that distilling effective structural information from them is still challenging.
This makes GNN models that are by design more capable of learning geometric representations a favorable choice. The right plot compares the probability mass approximations provided by EBMs using MLP and GGNN (which performs the best among all GNN variants), to the ground truth \(p_{0}(\tau)\). We see that EBMs using GGNN consistently provide accurate approximations for trees across a wide range of probabilities. On the other hand, estimates provided by those using MLP are often of large bias, except for a few trees with high probabilities.

### 4.2 Real Data: Variational Bayesian Phylogenetic Inference

The second task we considered is VBPI, where we compared learnable topological feature based branch length parameterizations to heuristic feature based parameterizations (denoted as Split and PSP respectively) proposed in the original VBPI approach (Zhang & Matsen IV, 2019). All methods were evaluated on 8 real datasets that are commonly used to benchmark Bayesian phylogenetic inference methods (Hedges et al., 1990; Garey et al., 1996; Yang & Yoder, 2003; Henk et al., 2003; Lakner et al., 2008; Zhang & Blackwell, 2001; Yoder & Yang, 2004; Rossman et al., 2001; Hohna & Drummond, 2012; Larget, 2013; Whidden & Matsen IV, 2015). These datasets, which we call DS1-8, consist of sequences from 27 to 64 eukaryote species with 378 to 2520 site observations. We concentrate on the most challenging part of Bayesian phylogenetics: joint learning of the tree topologies and the branch lengths, and assume a uniform prior on the tree topology, an i.i.d. exponential prior (\(\mathrm{Exp}(10)\)) for the branch lengths and the simple Jukes & Cantor (1969) substitution model. We gathered the support of CPTs from 10 replicates of 1000 ultrafast maximum likelihood bootstrap trees (Minh et al., 2013). We set \(K=10\) for the multi-sample lower bound, with a schedule \(\lambda_{n}=\min(1,0.001+n/100000)\), going from 0.001 to 1 after 100000 iterations. The Monte Carlo gradient estimates for the tree topology parameters and branch length parameters were obtained via VIMCO (Mnih & Rezende, 2016) and the reparameterization trick (Kingma & Welling, 2014) respectively. Results were collected after 400,000 parameter updates.
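As a small illustration of this training setup, the sketch below evaluates a \(K\)-sample lower bound from per-sample log terms, with the likelihood annealed by \(\lambda_{n}\); the exact way \(\lambda_{n}\) enters the bound is an assumption made for this sketch, as the text only specifies the schedule.

```python
import math
import torch

def annealed_multi_sample_bound(log_lik, log_prior, log_q, lam):
    """log_lik, log_prior, log_q: tensors of shape (K,) for K sampled (tau_i, q_i) pairs."""
    log_w = lam * log_lik + log_prior - log_q              # annealed importance weights
    return torch.logsumexp(log_w, dim=0) - math.log(log_w.shape[0])

def schedule(n):
    """Annealing schedule from the text: lambda_n = min(1, 0.001 + n / 100000)."""
    return min(1.0, 0.001 + n / 100000)
```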
Table 1 shows the estimates of the evidence lower bound (ELBO) and the marginal likelihood using different branch length parameterizations on the 8 benchmark datasets, including the results for the stepping-stone (SS) method (Xie et al., 2011), which is one of the state-of-the-art sampling based methods for marginal likelihood estimation. For each data set, a better approximation would lead to a smaller variance of the marginal likelihood estimates. We see that solely using the raw features, MLP-based parameterization already outperformed the Split and PSP baselines by providing tighter lower bounds. With more expressive representations of local structures enabled by GNN iterations, GNN-based parameterization further improved upon MLP-based methods, indicating the importance of harnessing local topological information for flexible branch length distributions. Moreover, when used as importance distributions for marginal likelihood estimation via importance sampling, MLP and GNN variants provide more steady estimates (less variance) than Split and PSP respectively. All variational approaches compare favorably to SS and require much fewer samples.

\begin{table}
\begin{tabular}{l l l l l l l l l}
\hline \hline
Data set & DS1 & DS2 & DS3 & DS4 & DS5 & DS6 & DS7 & DS8 \\
\# Taxa & 27 & 29 & 36 & 41 & 50 & 50 & 59 & 64 \\
\# Sites & 1949 & 2520 & 1812 & 1137 & 378 & 1133 & 1824 & 1008 \\
\hline
\multicolumn{9}{l}{ELBO rows: Split, PSP, MLP, GCN, GIN, SAGE, GGNN, EDGE} \\
\multicolumn{9}{l}{ML rows: Split, PSP, MLP, GCN, GIN, SAGE, GGNN, EDGE, SS} \\
\hline \hline
\end{tabular}
\end{table} Table 1: Evidence lower bound (ELBO) and marginal likelihood (ML) estimates of different methods across 8 benchmark datasets for Bayesian phylogenetic inference. The marginal likelihood estimates of all variational methods are obtained via importance sampling using 1000 samples, and the results (in units of nats) are averaged over 100 independent runs with standard deviation in brackets. Results for stepping-stone (SS) are from Zhang & Matsen IV (2019), using 10 independent MrBayes (Ronquist et al., 2012) runs, each with 4 chains for 10,000,000 iterations.

The left plot in Figure 3 shows the evidence lower bounds as a function of the number of parameter updates on DS1. Although neural networks based parameterization adds to the complexity of training in VI, we see that by the time Split and PSP converge, MLP and EDGE3 achieve comparable (if not better) lower bounds and quickly surpass these baselines as the number of iterations increases. Footnote 3: We use EDGE as an example here for branch length parameterization since it can learn edge features (see Appendix F). All GNN variants (except the simple GCN) performed similarly in this example (see Table 1). As diagonal Lognormal branch length distributions were used for all parameterization methods, how these variational distributions were amortized over tree topologies under different parameterizations therefore is crucial for the overall approximation performance. To better understand this effect of amortized inference, we further investigated the amortization gaps4 of different methods on individual trees in the 95% credible set of DS1 as in Zhang (2020). The middle and right plots in Figure 3 show the amortization gaps of different parameterization methods on each tree topology \(\tau\). We see the amortization gaps of MLP and EDGE are considerably smaller than those of Split and PSP respectively, showing the efficiency of learnable topological features for amortized branch length distributions. Again, incorporating more local topological information is beneficial, as evidenced by the significant improvement of EDGE over MLP. More results about the amortization gaps can be found in Table 2 in the appendix. Footnote 4: The amortization gap on a tree topology \(\tau\) is defined as \(L(Q^{*}|\tau)-L(Q_{\psi}|\tau)\), where \(L(Q_{\psi}|\tau)\) is the ELBO of the approximating distribution \(Q_{\psi}(q|\tau)\) and \(L(Q^{*}|\tau)\) is the maximum lower bound that can be achieved with the same variational family. See more details in Zhang (2020); Cremer et al. (2018).

## 5 Conclusion

We presented a novel approach for phylogenetic inference based on learnable topological features.
By combining the raw node features that minimize the Dirichlet energy with modern GNN variants, our learnable topological features can provide efficient structural information without requiring domain expertise. In experiments, we demonstrated the effectiveness of our approach for tree probability estimation on simulated data and showed that our method consistently outperforms the baseline approaches for VBPI on a benchmark of real data sets. Future work would investigate more sophisticated GNNs for phylogenetic trees, and applications to other phylogenetic inference tasks where efficiently leveraging structural information of tree topologies is of great importance.

Figure 3: Performance on DS1. **Left:** Lower bounds. **Middle & Right:** Amortization gaps on trees in the \(95\%\) credible sets.

#### Acknowledgments

This work was supported by National Natural Science Foundation of China (grant no. 12201014), as well as National Institutes of Health grant AI162611. The research of the author was supported in part by the Key Laboratory of Mathematics and Its Applications (LMAM) and the Key Laboratory of Mathematical Economics and Quantitative Finance (LMEQF) of Peking University. The author is grateful for the computational resources provided by the High-performance Computing Platform of Peking University. The author thanks the anonymous ICLR reviewers for their constructive feedback.
2310.07962
Clustering of Spell Variations for Proper Nouns Transliterated from the other languages
One of the prominent problems with processing and operating on text data is the non uniformity of it. Due to the change in the dialects and languages, the caliber of translation is low. This creates a unique problem while using NLP in text data; which is the spell variation arising from the inconsistent translations and transliterations. This problem can also be further aggravated by the human error arising from the various ways to write a Proper Noun from an Indian language into its English equivalent. Translating proper nouns originating from Indian languages can be complicated as some proper nouns are also used as common nouns which might be taken literally. Applications of NLP that require addresses, names and other proper nouns face this problem frequently. We propose a method to cluster these spell variations for proper nouns using ML techniques and mathematical similarity equations. We aimed to use Affinity Propagation to determine relative similarity between the tokens. The results are augmented by filtering the token-variation pair by a similarity threshold. We were able to reduce the spell variations by a considerable amount. This application can significantly reduce the amount of human annotation efforts needed for data cleansing and formatting.
Prathamesh Pawar
2023-10-12T00:57:32Z
http://arxiv.org/abs/2310.07962v1
# Clustering of Spell Variations for Proper Nouns Transliterated from the other languages

###### Abstract

One of the prominent problems with processing and operating on text data is the non uniformity of it. Due to the change in the dialects and languages, the caliber of translation is low. This creates a unique problem while using NLP in text data; which is the spell variation arising from the inconsistent translations and transliterations. This problem can also be further aggravated by the human error arising from the various ways to write a Proper Noun from an Indian language into its English equivalent. Translating proper nouns originating from Indian languages can be complicated as some proper nouns are also used as common nouns which might be taken literally. Applications of NLP that require addresses, names and other proper nouns face this problem frequently. We propose a method to cluster these spell variations for proper nouns using ML techniques and mathematical similarity equations. We aimed to use Affinity Propagation to determine relative similarity between the tokens. The results are augmented by filtering the token-variation pair by a similarity threshold. We were able to reduce the spell variations by a considerable amount. This application can significantly reduce the amount of human annotation efforts needed for data cleansing and formatting.

clustering, spell-errors, affinity propagation, text similarity

## I Introduction

Proper Nouns translated from languages belonging to the Indian subcontinent have been quite difficult to spell check due to the nature of these words. This poses a challenge while working with any system that deals with names of geographical locations such as localities, villages or districts. A significant number of databases in India rely on manually transliterated English versions of these words. Such transliterations are prone to errors or variations due to the subjectivity of the task. Two distinct words can be more similar to each other than a word is to its own misspelled version, and the existence of common suffixes and prefixes further aggravates the problem. We propose a method to cluster the misspelled versions of words using the Affinity Propagation algorithm and Jaro-Winkler similarity. Affinity Propagation is a graph based clustering algorithm that does not require a predetermined number of clusters. It uses a series of similarities and matrices that we use in conjunction with the Jaro-Winkler distance as a thresholding system to improve the results.

## II Related Works

Amorim et al. [1] used clustering algorithms for spell checking. They experimented with 36,133 words and 6,136 target words. They implemented it using PAM (Partition Around Medoids) and Anomalous Pattern Initialization. The PAM method proposed by Van der Laan et al. [2] is used to create clusters at random and assign a certain medoid for each of them. Then an optimum solution is derived by minimizing the net distance between the clusters and their respective medoids. This method was then improved by using Anomalous Pattern Initialization to initialize the medoids with specific values. This increases the accuracy as the medoids don't depend on random initializations. The words were also divided into 26 initial clusters based on the first letter in order to reduce the processing load on the machine. This method achieved an impressive accuracy of 88.42%.
However, there is no equivalent implementation for words transliterated from Indian languages.

## III Proposed Method

We propose a method based on Affinity Propagation and Jaro-Winkler similarity.

### _Affinity Propagation_

Affinity Propagation is a clustering algorithm proposed by Frey et al. [3] that uses a message based system which informs the tokens about the relative attractiveness of each other. The goal of the algorithm is to find an exemplar for a particular token which is the best representation of that token. To calculate the exemplar we first generate an \(n\times n\) similarity matrix, where \(n\) is the number of tokens in the dataset. The matrix is created by calculating the similarity of every token with every other token; Levenshtein distance is used to calculate the distance [4]. To make sure that the tokens do not assign themselves as their own exemplar, we set all the diagonal values to _min(sim(i,j))_. \[S(i,j)=-\|x_{i}-x_{j}\|\] Calculating similarity

We further calculate the Responsibility Matrix, which determines how suitable token \(k\) is as an exemplar for token \(i\) with respect to the closest alternative token \(k^{\prime}\). \[r(i,k)\gets s(i,k)-\max_{k^{\prime}\neq k}\left\{a(i,k^{\prime})+s(i,k^{\prime})\right\}\] The Responsibility Matrix is initialized as an \(n\times n\) matrix of zeros, and each position is populated using the equation above. Having determined whether token \(k\) would be an appropriate exemplar for \(i\), we now determine whether token \(i\) would be a suitable cluster member for exemplar \(k\) with respect to other tokens. This is calculated using the Availability Matrix. \[a(i,k)\leftarrow\min\left(0,r(k,k)+\sum_{i^{\prime}\notin\{i,k\}}\max(0,r(i^{\prime},k))\right)\text{ for }i\neq k\] The availability depends on the positive responsibilities and the self availability of each token. \[a(k,k)\leftarrow\sum_{i^{\prime}\neq k}\max(0,r(i^{\prime},k)).\] Self Availability

The Criterion Matrix is the final stage of the algorithm. It is the sum of the responsibility matrix and the availability matrix. \[c(i,k)\gets r(i,k)+a(i,k).\] Criterion Matrix

The exemplar is determined by the highest value in the criterion matrix.

### _Jaro-Winkler Distance_

In order to cluster as many tokens as possible, no checks are imposed on the tokens assigned to a cluster. This hampers the accuracy of the clusters, since they accommodate a wide range of tokens. Therefore, to improve the accuracy, we introduce a threshold to filter the tokens that are not similar to the exemplar of that cluster. In order to introduce this threshold in the clusters we use Jaro-Winkler similarity. \[sim_{j}=\left\{\begin{array}{ll}0&\text{if }m=0\\ \frac{1}{3}\left(\frac{m}{|s_{1}|}+\frac{m}{|s_{2}|}+\frac{m-t}{m}\right)&\text{otherwise}\end{array}\right.\] Jaro similarity, where \(m\) = no. of common letters and \(t\) = no. of transpositions required. \[sim_{w}=sim_{j}+\ell p(1-sim_{j})\] Jaro-Winkler similarity, where \(sim_{j}\) = Jaro similarity, \(\ell\) = length of common prefix and \(p\) = prefix scaling factor.

Jaro-Winkler similarity [5] is a modified version of Jaro similarity that gives additional weight to a common prefix. We use this similarity as a threshold to maintain the sanctity of the cluster.

## IV Data and Preprocessing

### _Data Extraction_

The goal was to make this process as practical as possible.
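Before turning to the data, here is a minimal sketch of the Affinity Propagation step described in Section III. It assumes scikit-learn for the clustering and uses a plain dynamic-programming Levenshtein distance; the toy token list and the negative-distance similarity are illustrative choices, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def levenshtein(a, b):
    """Standard dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

tokens = ["mehruli", "mehroli", "mehrali", "mehrouli", "saket", "saketh"]   # toy example
n = len(tokens)
S = np.array([[-float(levenshtein(a, b)) for b in tokens] for a in tokens])  # similarity = -distance
S[np.diag_indices(n)] = S.min()   # discourage tokens from choosing themselves as exemplars

ap = AffinityPropagation(affinity="precomputed", damping=0.65, random_state=0).fit(S)
for label in set(ap.labels_):
    exemplar = tokens[ap.cluster_centers_indices_[label]]
    members = [t for t, l in zip(tokens, ap.labels_) if l == label]
    print(exemplar, "->", members)
```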
To keep the process practical, a dataset was obtained from government deed records of Delhi real estate property addresses, and localities were extracted from them. Since these records are manually entered, we found a large number of spelling variations of the extracted localities. For example, _Mehruli_ was represented as _Mehroli, Mehrali, Mehrouli_, etc. We manually annotate the dataset to form the ground-truth clusters; the performance of the algorithm is tested against this baseline. We also replicate the experiment with two more datasets. We use Mumbai apartment names, which have been obtained from public feed records of Mumbai. We use these datasets as they are a perfect representation of the problem that we face while translating the nouns: a literal translation cannot be done, as it might take the literal meaning of the word instead of treating it as a name. While these datasets have been transliterated from the Devanagari script, which may not be the case for other Indian languages, they are representative of the problems that arise from transliteration [6]. An important aspect of these datasets is that they are generated through public records which are manually filled out and thus have a significant number of spelling errors in their original text, which in turn results in spell errors in the transliterated text.

\begin{tabular}{|l|l|} \hline Dataset & Size \\ \hline Spell (F1) & 406 \\ \hline Mumbai Apartments (D1) & 16295 \\ \hline Delhi Localities (D2) & 13120 \\ \hline \end{tabular}

### _Preprocessing_

In order to make sure that the clustering algorithm can function efficiently, we clean the phrases and tokens. We take out the common parts of a phrase that do not contribute to its uniqueness. For example, in _Anand Nagar_, the word _Nagar_ is common to many other tokens and might skew the algorithm. All the abbreviations and numbers are also removed. It is assumed that the probability of a spelling variation occurring in the first letter is low. So, to increase the efficiency of the algorithm, we divide all the tokens into 26 (A-Z) groups according to the first letter of the tokens.

## V Procedure

1. We first create the similarity matrix by calculating pairwise similarities using the Levenshtein distance.
2. We implement Affinity Propagation on the matrix to generate clusters containing tokens with their respective spelling variations.
3. The damping factor is a very important parameter that prevents the availability matrix and responsibility matrix from overshooting, which may lead to oscillations instead of a straight convergence. For this particular application we set it to 0.65.
4. After implementing Affinity Propagation, a dictionary is generated with clusters and their respective exemplars.
5. In order to filter the false positives from the clusters, the Jaro-Winkler distance threshold is implemented.
6. Every element of the cluster is compared with the exemplar of that cluster to calculate the Jaro-Winkler distance.
7. A threshold of 95% similarity is applied to keep the tokens that are very close to the exemplar and remove as many false positives as we can.
8. While the majority of false positives are eliminated, a significant portion of the tokens are still not clustered. In order to utilize these tokens, the algorithm is run again on the remaining unclustered tokens.
9. The cluster dictionaries are combined to form a final cluster dictionary.

## VI Evaluation Metrics

Evaluating the performance of a clustering algorithm is fairly convoluted.
The number of clusters created can vary based on the given data and parameter values. In order to evaluate, we first create a set of true clusters by annotation. These clusters are used to create token-variation pairs, which are essentially pairs of each cluster element and the corresponding exemplar. In order to evaluate the performance of the clustering algorithm we use a mathematical method known as the Adjusted Rand Index [8]. The ARI is used because it gives us the most realistic evaluation of the resultant clusters: it takes into account the expected value of the clustering algorithm, i.e., the likelihood of token-variation pairs being correct at random. In order to calculate the ARI, we first calculate the Rand Index (RI) [7], which is given by \[RI=\frac{TP+TN}{TP+FP+FN+TN}\]

* **TP** is the number of token-variation pairs which are the same in the predicted and true pairs (True Positives).
* **TN** is the number of token-variation pairs that are neither in the predicted pairs nor in the true pairs.
* **FP** is the number of token-variation pairs which are in the predicted pairs but not in the true pairs.
* **FN** is the number of token-variation pairs which are not in the predicted pairs but are in the true pairs.

After calculating the Rand Index, the ARI is calculated using the equation \[ARI=\frac{\text{RI - Expected RI}}{\text{Max(RI) - Expected RI}}\]

## VII Conclusion

The subjectivity in translating proper nouns from various languages into English poses a big challenge in processing data and performing NLP techniques on them. Based on the results of this application, clustering the tokens using Affinity Propagation and Jaro-Winkler similarity has proven to be an effective way to rectify spell errors and spell variations. This could be used as a viable solution for many NLP problems as a way to cleanse and simplify the data. One of the shortcomings of this approach is scalability. However, this can be mitigated by implementing parallel processing to measure similarity between tokens simultaneously and by developing certain criteria to divide the tokens into smaller sets.
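To make the thresholding and evaluation concrete, the sketch below filters the clusters produced by the earlier Affinity Propagation sketch with a Jaro-Winkler cut-off and scores the result with scikit-learn's `adjusted_rand_score`, which computes the ARI from label assignments (equivalent to the pair-counting definition above). The `jellyfish` dependency, the 0.95 threshold and the toy ground-truth labels are assumptions made for illustration.

```python
import jellyfish                                  # assumed dependency for Jaro-Winkler similarity
from sklearn.metrics import adjusted_rand_score

def filter_clusters(tokens, labels, exemplar_idx, threshold=0.95):
    """Keep a token in its cluster only if it is close enough to the exemplar;
    rejected tokens become singletons (they can be re-clustered in a second pass)."""
    filtered = []
    next_singleton = max(labels) + 1
    for token, label in zip(tokens, labels):
        exemplar = tokens[exemplar_idx[label]]
        if jellyfish.jaro_winkler_similarity(token, exemplar) >= threshold:
            filtered.append(label)
        else:
            filtered.append(next_singleton)
            next_singleton += 1
    return filtered

# `ap` and `tokens` come from the Affinity Propagation sketch shown earlier
pred = filter_clusters(tokens, ap.labels_, ap.cluster_centers_indices_)
true = [0, 0, 0, 0, 1, 1]                         # hypothetical manual annotation of the toy tokens
print("ARI:", adjusted_rand_score(true, pred))
```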
2301.00794
STEPs: Self-Supervised Key Step Extraction and Localization from Unlabeled Procedural Videos
We address the problem of extracting key steps from unlabeled procedural videos, motivated by the potential of Augmented Reality (AR) headsets to revolutionize job training and performance. We decompose the problem into two steps: representation learning and key steps extraction. We propose a training objective, Bootstrapped Multi-Cue Contrastive (BMC2) loss to learn discriminative representations for various steps without any labels. Different from prior works, we develop techniques to train a light-weight temporal module which uses off-the-shelf features for self supervision. Our approach can seamlessly leverage information from multiple cues like optical flow, depth or gaze to learn discriminative features for key-steps, making it amenable for AR applications. We finally extract key steps via a tunable algorithm that clusters the representations and samples. We show significant improvements over prior works for the task of key step localization and phase classification. Qualitative results demonstrate that the extracted key steps are meaningful and succinctly represent various steps of the procedural tasks.
Anshul Shah, Benjamin Lundell, Harpreet Sawhney, Rama Chellappa
2023-01-02T18:32:45Z
http://arxiv.org/abs/2301.00794v3
# STEPs: Self-Supervised Key Step Extraction from Unlabeled Procedural Videos

###### Abstract

We address the problem of extracting key steps from unlabeled procedural videos, motivated by the potential of Augmented Reality (AR) headsets to revolutionize job training and performance. We decompose the problem into two steps: representation learning and key steps extraction. We propose a training objective, Bootstrapped Multi-Cue Contrastive (BMC2) loss to learn discriminative representations for various steps without any labels. Different from prior works, we develop techniques to train a light-weight temporal module which uses off-the-shelf features for self supervision. Our approach can seamlessly leverage information from multiple cues like optical flow, depth or gaze to learn discriminative features for key-steps, making it amenable for AR applications. We finally extract key steps via a tunable algorithm that clusters the representations and samples. We show significant improvements over prior works for the task of key step localization and phase classification. Qualitative results demonstrate that the extracted key steps are meaningful and succinctly represent various steps of the procedural tasks.

## 1 Introduction

Rapid shifts in technology and business models have led to a mismatch between the skills needed by employers and the skills possessed by the labor force. It has been estimated that this mismatch will reduce manufacturing output by $2.4 trillion over ten years in the US alone [27, 29]. Increased attention has been placed on effective methods of "reskilling" workers [2]. Unfortunately, reskilling will not be easy: human expertise in performing a complex task takes years of training and mastery of domain-specific knowledge [9, 23]. Augmented Reality (AR) headsets can play an important role in collective reskilling efforts. AR headsets are known to improve the efficiency of front line workers during training and on the job, across industries as diverse as food service, manufacturing, medicine, and warehousing [1, 65, 67, 13]. AR plays a strikingly similar role across these diverse use cases: to assist the user in completing a complex task, the headset renders a sequence of visual cues on real-world objects. Our approach focuses on extracting key-steps of a complex task, which is the most crucial component needed for automatic AR content creation. We employ a "learning-from-observation"-style framework [55], where an instructor is recorded while performing a complex task. The goal is to automatically parse the recording into _key steps_ (KSs) that succinctly represent the complete task. This greatly streamlines the content creation process, as the trainer no longer has to manually edit the recording to find the key steps. Consider the task of changing the cartridge in a printer. Using tools from the public repository [17], we captured data on a Microsoft HoloLens 2 of an "expert" undertaking this task. Using multiple cues - hand pose, head pose, eye gaze and first person video - we automatically generate the key-steps shown in Fig. 1. Using multiple cues when observing experts perform procedural tasks is important when generating training materials for novices [24, 28, 11].
The key step extraction problem for complex procedural tasks is challenging: (1) Recordings of tasks performed by experts are limited in number; (2) Supervision for key steps is hard due to the subjective nature of what constitutes a key step; (3) There are no large-scale datasets for real world procedural tasks. Unlike typical web-crawled videos used in video representation learning, procedures are often minutes

Figure 1: An illustrative example of automatically generated key steps using our approach STEPs for the task of changing the cartridge in a printer. Data was captured using a Microsoft HoloLens 2 and extracted using a publicly available repository. Our approach leads to plausible key steps for sub-tasks of 'opening cartridge lid', 'taking cartridge', 'placing cartridge' and 'closing cartridge lid' and 'end recording'. Note that the associated labels and arrows are for visualization purposes only and were not used for training.
2310.19584
Integrable maps in 4D and modified Volterra lattices
In recent work, we presented the construction of a family of difference equations associated with the Stieltjes continued fraction expansion of a certain function on a hyperelliptic curve of genus $g$. As well as proving that each such discrete system is an integrable map in the Liouville sense, we also showed it to be an algebraic completely integrable system. In the discrete setting, the latter means that the generic level set of the invariants is an affine part of an abelian variety, in this case the Jacobian of the hyperelliptic curve, and each iteration of the map corresponds to a translation by a fixed vector on the Jacobian. In addition, we demonstrated that, by combining the discrete integrable dynamics with the flow of one of the commuting Hamiltonian vector fields, these maps provide genus $g$ algebro-geometric solutions of the infinite Volterra lattice, which justified naming them Volterra maps, denoted ${\cal V}_g$. The original motivation behind our work was the fact that, in the particular case $g=2$, we could recover an example of an integrable symplectic map in four dimensions found by Gubbiotti, Joshi, Tran and Viallet, who classified birational maps in 4D admitting two invariants (first integrals) with a particular degree structure, by considering recurrences of fourth order with a certain symmetry. Hence, in this particular case, the map ${\cal V}_2$ yields genus two solutions of the Volterra lattice. The purpose of this note is to point out how two of the other 4D integrable maps obtained in the classification of Gubbiotti et al. correspond to genus two solutions of two different forms of the modified Volterra lattice, being related via a Miura-type transformation to the $g=2$ Volterra map ${\cal V}_2$. We dedicate this work to a dear friend and colleague, Decio Levi.
A. N. W. Hone, J. A. G. Roberts, P. Vanhaecke, F. Zullo
2023-10-30T14:42:02Z
http://arxiv.org/abs/2310.19584v2
# Integrable maps in 4D and modified Volterra lattices ###### Abstract In recent work, we presented the construction of a family of difference equations associated with the Stieltjes continued fraction expansion of a certain function on a hyperelliptic curve of genus \(g\). As well as proving that each such discrete system is an integrable map in the Liouville sense, we also showed it to be an algebraic completely integrable system. In the discrete setting, the latter means that the generic level set of the invariants is an affine part of an abelian variety, in this case the Jacobian of the hyperelliptic curve, and each iteration of the map corresponds to a translation by a fixed vector on the Jacobian. In addition, we demonstrated that, by combining the discrete integrable dynamics with the flow of one of the commuting Hamiltonian vector fields, these maps provide genus \(g\) algebro-geometric solutions of the infinite Volterra lattice, which justified naming them _Volterra maps_, denoted \(\mathcal{V}_{g}\). The original motivation behind our work was the fact that, in the particular case \(g=2\), we could recover an example of an integrable symplectic map in four dimensions found by Gubbiotti, Joshi, Tran and Viallet, who classified birational maps in 4D admitting two invariants (first integrals) with a particular degree structure, by considering recurrences of fourth order with a certain symmetry. Hence, in this particular case, the map \(\mathcal{V}_{2}\) yields genus two solutions of the Volterra lattice. The purpose of this note is to point out how two of the other 4D integrable maps obtained in the classification of Gubbiotti et al. correspond to genus two solutions of two different forms of the modified Volterra lattice, being related via a Miura-type transformation to the \(g=2\) Volterra map \(\mathcal{V}_{2}\). We dedicate this work to a dear friend and colleague, Decio Levi. Introduction This short article consists of some recollections of our colleague Decio Levi (in section 2 below), followed by a brief update on our recent results about integrable maps in four (and higher) dimensions, which provide algebro-geometric solutions of differential-difference equations of Volterra type [10]. Decio was one of the pioneers in the theory of integrability for differential-difference equations, especially in the construction of integrable lattices from Backlund transformations for continuous systems [13, 15], and the programme of applying the symmetry approach to the classification of such lattices, which he initiated with Yamilov [14]. Thus we like to think that Decio would have appreciated the results being presented here. After presenting a few memories of Decio, in section 3 we begin by giving a brief overview of the 4D integrable maps which were classified by Gubbiotti et al. [7]. We then proceed to review our construction of integrable maps obtained from the Stieltjes fraction expansion of certain functions on hyperelliptic curves [10], and explain how it reproduces one of the examples from [7], denoted (P.iv), in the particular case of genus two curves. Sections 4 and 5 are devoted to the maps (P.v) and (P.vi), respectively: we show how each of these maps is related to a different form of the modified Volterra lattice, and present explicit formulae which relate their solutions to the solutions of (P.iv) via a transformation of Miura type. We end with some very short conclusions in section 6. 
## 2 Memories of Decio Levi _Andrew Hone writes:_ I first met Decio in Warsaw in September 1995, when I was a PhD student participating in the 1st Non-Orthodox School on Nonlinearity and Geometry [23]. Decio was one of the lecturers, along with Orlando Ragnisco, and it was thanks to extended conversations with Orlando that I resolved to apply for postdoctoral funding to work with him when I finished my PhD. After receiving a grant from the Leverhulme Trust two years later, I finally got to be a researcher at Roma Tre, where Orlando and Decio were both professors in the Dipartimento di Fisica. For approximately the first six months of my time in Rome, there was no available office space for postdocs, which meant that I had to share an office with Orlando. Far from being a negative aspect of my experience, this situation had many positive benefits for me, and not just scientific ones. By working in close proximity with Orlando, it meant that I was privy to the regular visits from the neighbour in the office next door, namely Decio, his long-time friend and collaborator. Apart from the pleasure of getting to know Decio, and learning many wonderful ideas about integrable systems from him, there was the fact that, by default, he would chat to Orlando in Italian, which helped me to rapidly improve my grasp of the language in those first few months. The strong bond of friendship between Orlando and Decio created a very happy atmosphere, and I have extremely fond memories of those times. In subsequent years, I would see Decio fairly often at various international conferences, or during return visits to Rome. He had an amiable manner and a warm, cheerful smile. It was always enjoyable to talk to him, whether about technical problems, sharing family news, or just musing about life in general. Talking with Decio would leave me feeling reassured, that all was right with the world, and I liked his gentle way of concluding a long conversation with "Vabbe" in somma". It is an honour to be able to remember Decio here, both for his contributions as a scientist, and as a wonderful human being. _Federico Zullo writes:_ The first time I met Decio was in 2003: I was a student at the Dipartimento di Fisica of Roma Tre University and needed an advisor for my last examination for my laurea triennale (bachelor's degree). I asked Orlando Ragnisco who, at that time, was very busy. He accompanied me to the office next door, where Decio was, and I asked him for a theme for my short dissertation. He very heartily introduced me to the subject of solitons, that I never heard about before, giving me books and kind advice. Later, during my laurea magistrale (master's degree), and during my PhD studies, I followed different classes taught by Decio, some with very few students. The familiar atmosphere and natural mildness of Decio's classes fostered my learning, and I'm greatly indebted to him for having taught me many topics used in mathematical physics, like group theory, symmetries of differential equations, physics of nonlinear systems, qualitative and quantitative analysis of solutions of differential equations and others. For my own teaching, I still use some of the material that I collected from his courses. For a period just before 2014, I was hosted by Decio in his office as a researcher. I remember the talks on disparate subjects, like religion, literature, politics, society and, obviously, our research. 
The talks would then continue during the lunch break, usually in Via Marconi, with Orlando and the other members of the very stimulating group of young researchers that was gathered at Roma Tre in that period, including Fabio Musso, Matteo Petrera, Christian Scimiterna, Danilo Riglioni, Riccardo Droghei, and later Giorgio Gubbiotti and Danilo Latini, all led by Decio and Orlando. I'll always keep these beautiful memories with me. ## 3 The map (P.iv) and the geometry of its solutions Discrete integrable systems can be constructed by applying an appropriate discretization procedure to continuous ones, and historically this is how many examples of discrete integrability were first discovered [13, 20]. However, from both a theoretical point of view and a practical one, it is important to have a notion of integrability for discrete systems that does not require making reference to some underlying continuous system, whether this be for lattice equations [14], or for integrable maps [2, 16, 22]. While integrable maps in two and three dimensions lead to families of invariant curves (as the level sets of first integrals), the case of four dimensions can lead to new features, namely invariant tori of dimension two. In [7], Gubbiotti et al. presented a classification of four-dimensional birational maps of recurrence type, that is \[\varphi:\qquad(w_{0},w_{1},w_{2},w_{3})\mapsto\Big{(}w_{1},w_{2},w_{3},F(w_{0 },w_{1},w_{2},w_{3})\Big{)}, \tag{3.1}\] for a suitable rational function \(F\) of the affine coordinates \((w_{0},w_{1},w_{2},w_{3})\in\mathbb{C}^{4}\), where the map \(\varphi\) is required to be invariant under the involution \(\iota:\,(w_{0},w_{1},w_{2},w_{3})\mapsto(w_{3},w_{2},w_{1},w_{0})\), and to possess two independent polynomial invariants, \(H_{1}\), \(H_{2}\) say, with specific degree patterns \((\deg_{w_{0}}H_{j},\deg_{w_{1}}H_{j},\deg_{w_{2}}H_{j},\deg_{w_{3}}H_{j})=(1,3,3,1)\) and \((2,4,4,2)\) for \(j=1,2\), respectively. The result of this classification was six maps with parameters, labelled (P.i-vi), together with six associated maps, denoted (Q.i-vi) respectively. Each of the "Q" maps arises from a corresponding "P" map, as a discrete integrating factor for linear combinations of the first integrals, so they are dual to one another in the sense of [17]. As described previously, first in [11] and then [6], the original motivation for classifying such maps was to understand autonomous versions of the fourth-order members of hierarchies of discrete Painleve I/II equations from [5]; but, aside from the latter connection, the "P" in this nomenclature has nothing to do with the usual labelling of continuous Painleve equations. From our point of view, the most interesting cases are the maps labelled (P.iv), (P.v) and (P.vi), since (from Table 1 in [7]) these are the only ones arising from a discrete variational principle (Lagrangian), leading to a non-degenerate Poisson bracket in four dimensions, such that the two first integrals \(H_{1}\), \(H_{2}\) are in involution; this means that in the real case the Liouville tori are two-dimensional. Subsequently, Gubbiotti obtained these 4D integrable maps via an alternative method, by classifying fourth-order difference equations with a discrete Lagrangian structure [8]. 
Here we begin with the case of (P.iv), which is the birational map given in affine coordinates by the recurrence \[\begin{array}{l}w_{n+4}w_{n+3}w_{n+2}+w_{n+2}w_{n+1}w_{n}+2w_{n+2}^{2}(w_{n+ 3}+w_{n+1})\\ +w_{n+2}(w_{n+3}^{2}+w_{n+3}w_{n+1}+w_{n+1}^{2})+w_{n+2}^{3}+\nu w_{n+2}(w_{n+ 3}+w_{n+2}+w_{n+1})+bw_{n+2}+a=0.\end{array} \tag{3.2}\] This map has three essential parameters \(a,b,\nu\) (in the formulae from [7] we have set the parameter \(d=1\), which can be achieved by a simple rescaling), and it is of the form (3.1), with \[F=-\frac{w_{0}w_{1}w_{2}+w_{1}w_{2}w_{3}+w_{1}^{2}w_{2}+w_{2}w_{3}^{2}+2w_{1} w_{2}^{2}+2w_{2}^{2}w_{3}+w_{2}^{3}+\nu(w_{1}w_{2}+w_{2}w_{3}+w_{2}^{2})+bw_{2}+a}{w_{ 2}w_{3}};\] this \(F\) is the rational function of \(w_{0},w_{1},w_{2},w_{3}\) obtained by solving for \(w_{4}\) in (3.2) with \(n=0\). The first integral denoted \(I_{\rm low}^{\rm P.iv}\) in [7] is given in affine coordinates by \[H_{1}=w_{1}w_{2}\Big{(}w_{2}w_{3}+w_{0}w_{1}-w_{0}w_{3}+(w_{1}+w_{2})^{2}+\nu(w _{1}+w_{2})+b\Big{)}+a(w_{1}+w_{2}). \tag{3.3}\] The latter has the degree pattern \((1,3,3,1)\). In particular, it is linear in \(w_{3}\), which implies that, on each three-dimensional level set \(H_{1}=h_{1}=\) const, the map (3.2) reduces to a birational map in three dimensions, given by the recurrence \[\begin{array}{rcl}w_{n+3}w_{n+2}w_{n+1}(w_{n+2}-w_{n})+w_{n+2}w_{n+1}^{2}w_{n }+w_{n+2}w_{n+1}(w_{n+1}+w_{n+2})^{2}\\ \qquad\qquad+\nu\,w_{n+2}w_{n+1}(w_{n+1}+w_{n+2})+b\,w_{n+2}w_{n+1}+a\,(w_{n+1 }+w_{n+2})\ \ =\ \ h_{1}.\end{array}\] A second independent invariant for (3.2), with degree pattern \((2,4,4,2)\), is given by \[\begin{array}{rcl}H_{2}&=&w_{1}w_{2}\left(\begin{array}{c}w_{0}^{2}w_{1}+w _{3}^{2}w_{2}+w_{0}w_{3}(w_{1}+w_{2})+w_{0}(w_{2}^{2}+2w_{1}^{2})+w_{3}(w_{1}^{ 2}+2w_{2}^{2})\\ \qquad\qquad\qquad\qquad\qquad+3(w_{0}+w_{3})w_{1}w_{2}+(w_{1}+w_{2})^{3}\\ +\nu\,\left(w_{0}w_{3}+(w_{0}+w_{3})(w_{1}+w_{2})+(w_{1}+w_{2})^{2}\right)+b \,(w_{0}+w_{1}+w_{2}+w_{3})\end{array}\right)\\ &&+a\,\Big{(}w_{0}w_{1}+w_{3}w_{2}+(w_{1}+w_{2})^{2}\Big{)}.\end{array} \tag{3.4}\] This differs slightly from the second invariant presented in [7], which is \(I_{\rm high}^{\rm P.iv}=H_{2}-\nu H_{1}\). The nondegenerate Poisson bracket between the coordinates, which was obtained in [7] by making use of a discrete Lagrangian for (3.2), is given by \[\{\,w_{n},w_{n+1}\,\}=0,\,\{\,w_{n},w_{n+2}\,\}=\frac{1}{w_{n+1}},\,\{\,w_{n},w_{n+3}\,\}=-\frac{w_{n}+2w_{n+1}+2w_{n+2}+w_{n+3}+\nu}{w_{n+1}w_{n+2}}, \tag{3.5}\] for all \(n\). So (3.1) is a Poisson map, in the sense that \(\{\,\varphi^{*}G,\varphi^{*}H\,\}=\varphi^{*}\{\,G,H\,\}\) for all functions \(G,H\) on \(\mathbb{C}^{4}\). The two independent invariants given in [7] are in involution with respect to this bracket, which is equivalent to the involutivity of functions (3.3) and (3.4), that is to say \[\{\,H_{1},H_{2}\,\}=0.\] Hence the four-dimensional map defined by (3.2) is integrable in the Liouville sense. Computing the Hamiltonian vector field for the first flow, generated by \(H_{1}\), we find that this takes the form \[\frac{{\rm d}w_{n}}{{\rm d}t}=w_{n}(w_{n+1}-w_{n-1}) \tag{3.6}\] for \(n=1,2\). However, since (3.2) is a Poisson map that commutes with this flow, it follows that the relation (3.6) extends to all \(n\in\mathbb{Z}\). 
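Since (3.2), (3.3) and (3.4) are all given by explicit rational expressions, the claimed conservation of \(H_{1}\) and \(H_{2}\) is easy to check numerically in exact rational arithmetic. The short Python sketch below is purely illustrative (it is not part of the original construction, and the parameter values and initial data are arbitrary choices): it iterates the map through the function \(F\) above and prints the two first integrals along the orbit, which should remain constant.

```python
from fractions import Fraction as Fr

# Arbitrary rational parameters and initial data (hypothetical example values)
a, b, nu = Fr(1), Fr(2), Fr(1, 3)
w = [Fr(1), Fr(2), Fr(1, 2), Fr(3)]   # (w_0, w_1, w_2, w_3)

def step(w0, w1, w2, w3):
    # w_4 = F(w_0, w_1, w_2, w_3), obtained by solving (3.2) for w_{n+4}
    num = (w0*w1*w2 + w1*w2*w3 + w1**2*w2 + w2*w3**2 + 2*w1*w2**2
           + 2*w2**2*w3 + w2**3 + nu*(w1*w2 + w2*w3 + w2**2) + b*w2 + a)
    return -num / (w2*w3)

def H1(w0, w1, w2, w3):
    # first integral (3.3)
    return (w1*w2*(w2*w3 + w0*w1 - w0*w3 + (w1 + w2)**2 + nu*(w1 + w2) + b)
            + a*(w1 + w2))

def H2(w0, w1, w2, w3):
    # first integral (3.4)
    return (w1*w2*(w0**2*w1 + w3**2*w2 + w0*w3*(w1 + w2)
                   + w0*(w2**2 + 2*w1**2) + w3*(w1**2 + 2*w2**2)
                   + 3*(w0 + w3)*w1*w2 + (w1 + w2)**3
                   + nu*(w0*w3 + (w0 + w3)*(w1 + w2) + (w1 + w2)**2)
                   + b*(w0 + w1 + w2 + w3))
            + a*(w0*w1 + w3*w2 + (w1 + w2)**2))

for n in range(6):
    print(n, H1(*w), H2(*w))     # printed values stay constant if (3.3), (3.4) are invariants
    w = w[1:] + [step(*w)]       # one step of the map (3.2)
```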
Thus the combined solutions of the map and the flow, which are compatible with one another, generate a sequence of functions \(\big{(}w_{n}(t)\big{)}_{n\in\mathbb{Z}}\) satisfying (3.6), which is the Volterra lattice equation, first considered by Kac and van Moerbeke [12]. Hence, in a certain sense that can be made precise, these will turn out to be genus 2 solutions of this lattice hierarchy. The complex geometry of the solutions of the map defined by (3.2) is related to a family of hyperelliptic curves of genus 2, given by the Weierstrass quintic \[\Gamma:\quad y^{2}=(1+\nu x+bx^{2})^{2}+4a(1+\nu x+bx^{2})x^{3}+4h_{1}x^{4}+4( h_{2}+\nu h_{1})x^{5}. \tag{3.7}\] On any genus 2 curve \(\Gamma\) of the above form, we take the meromorphic function \(F\) given by \[F=\frac{y+{\cal P}(x)}{{\cal Q}(x)}=\frac{{\cal R}(x)}{y-{\cal P}(x)}, \tag{3.8}\] where \({\cal P},{\cal Q},{\cal R}\) are polynomials in the spectral parameter \(x\), given by \[{\cal P}(x)=1+p_{1}x+p_{2}x^{2},\quad{\cal Q}(x)=2+q_{1}x+q_{2}x^{2},\quad{ \cal R}(x)=r_{1}x+r_{2}x^{2}+r_{3}x^{3}, \tag{3.9}\] which are required to satisfy \[{\cal P}(x)^{2}+{\cal Q}(x){\cal R}(x)=f(x), \tag{3.10}\] with \(f(x)=(1+\nu x+bx^{2})^{2}+4a(1+\nu x+bx^{2})x^{3}+4h_{1}x^{4}+4(h_{2}+\nu h_{1})x ^{5}\) being the quintic on the right-hand side of (3.7). Then the key to the construction in [10] is to expand the function \(F\) as a continued fraction of Stieltjes type (S-fraction), that is \[F=1-\frac{w_{1}x}{ 1-\frac{w_{2}x}{ 1-\frac{w_{3}x}{ 1-\cdots}}}, \tag{3.11}\] and by iterating from one line of the fraction to the next we find that we obtain a recurrence for the coefficients \(w_{j}\). More precisely, the non-trivial coefficients of the polynomials (3.9) are given in terms of \(w_{j}\) and the parameters by \[p_{1}=2w_{0}+\nu,\,p_{2}=2w_{0}(w_{1}+w_{0}+w_{-1})+b,\,\tfrac{1}{2}q_{1}=w_{0 }+w_{1}+\nu,\,\tfrac{1}{2}q_{2}=w_{0}w_{-1}+w_{1}w_{2}+(w_{1}+w_{0})^{2}+\nu(w _{0}+w_{1})+b,\] and \(r_{1}=-2w_{0}\); there are similar (but slightly more unwieldy) expressions for \(r_{2}\) and \(r_{3}\), which are omitted here, but are easily obtained from the relation (3.10). With these identifications, the iteration of the S-fraction (3.11) for \(F\) becomes precisely the map (P.iv) in terms of the affine coordinates \(w_{j}\), as given by (3.2). In [10] it was also shown that each iteration of the continued fraction is equivalent to the discrete Lax equation \[{\bf L}(x){\bf M}(x)={\bf M}(x)\widetilde{\bf L}(x)\;,\] where \[{\bf L}(x):=\left(\begin{array}{cc}{\cal P}(x)&{\cal R}(x)\\ {\cal Q}(x)&-{\cal P}(x)\end{array}\right)\;,\qquad{\bf M}(x):=\left(\begin{array} []{cc}1&-w_{1}x\\ 1&0\end{array}\right).\] Furthermore, we found that each generic common level set of the two invariants \(H_{1},H_{2}\) is isomorphic to an affine part of the Jacobian of the associated spectral curve \(\Gamma\) (or rather, of its completion), and each iteration of the map corresponds to a translation on the Jacobian by the divisor class \([(0,-1)-\infty]\). Thus, in addition to being integrable in the Liouville sense, the map (3.2) is an algebraic completely integrable system, being a discrete analogue of an a.c.i. system (see [1, 21]). 
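The way in which the coefficients \(w_{j}\) are read off "one line at a time" from the S-fraction (3.11) can be made concrete with elementary series manipulations: peeling one level of the fraction determines one coefficient and leaves another fraction of the same shape. The following sympy sketch (an illustration only; the sample coefficients are arbitrary rational numbers, not related to any particular curve \(\Gamma\)) expands a truncated S-fraction as a Taylor series and then recovers its coefficients by this peeling procedure.

```python
import sympy as sp

x = sp.symbols('x')
ws = [sp.Rational(2), sp.Rational(1, 3), sp.Rational(5), sp.Rational(-1, 2), sp.Rational(7)]

# Build a truncation of the S-fraction (3.11): F = 1 - w1*x/(1 - w2*x/(1 - ...))
tail = sp.Integer(1)
for w in reversed(ws):
    tail = 1 - w*x/tail

order = len(ws) + 1
series_F = sp.series(tail, x, 0, order).removeO()

# Peel the fraction back off the series, one coefficient per level:
# if g = 1 - w*x/h with h(0) = 1, then w is the order-x coefficient of (1 - g)
# and h = w*x/(1 - g) is again a series starting at 1.
g, recovered = series_F, []
for _ in ws:
    one_minus_g = sp.expand(1 - g)
    w_j = one_minus_g.coeff(x, 1)
    recovered.append(w_j)
    g = sp.series(w_j*x/one_minus_g, x, 0, order).removeO()

print(recovered)   # reproduces ws, illustrating how the w_j are generated level by level
```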
The map (3.2) can also be rewritten in terms of tau functions \(\tau_{n}\), related to \(w_{n}\) via \[w_{n}=\frac{\tau_{n}\tau_{n+3}}{\tau_{n+1}\tau_{n+2}}.\] These tau functions satisfy a Somos-9 recurrence, that is \[\alpha_{1}\,\tau_{n+9}\tau_{n}+\alpha_{2}\,\tau_{n+8}\tau_{n+1}+\alpha_{3}\,\tau_{n+7}\tau_{n+2}+\alpha_{4}\,\tau_{n+6}\tau_{n+3}+\alpha_{5}\,\tau_{n+5}\tau_{n+4}=0, \tag{3.12}\] with coefficients \(\alpha_{j}\) that depend on \(a,b,\nu\) and the values of \(H_{1},H_{2}\) along each orbit of (3.2); for details see Proposition 2.1 in [10]. Using the S-fraction (3.11), we were also able to write explicit Hankel determinant formulae for these tau functions \(\tau_{n}\), analogous to results for Somos sequences in genus 1 [3], and other Hankel determinant formulae for solutions of the Volterra lattice [4]. Furthermore, we found a Miura map relating the solutions of (P.iv) to one of the maps derived from J-fractions in [9], using the classical method of contraction of continued fractions due to Stieltjes [19] (see also [18]), which in this case turned out to provide solutions of the infinite Toda lattice. In what follows, we will present analogous properties for the maps (P.v) and (P.vi), and point out how they are closely connected to (P.iv). 
## 4 The map (P.v) The map (P.v) is given by the recurrence \[w_{n+4}w_{n+3}^{2}w_{n+2}^{2}+w_{n+2}^{2}w_{n+1}^{2}w_{n}+w_{n+2}^{3}(w_{n+1}+w_{n+3})^{2}+\tilde{\nu}w_{n+2}^{2}(w_{n+1}+w_{n+3})+\tilde{c}w_{n+2}+\tilde{a}=0, \tag{4.1}\] with three essential parameters \(\tilde{a},\tilde{c},\tilde{\nu}\) (compared with [7] we have put tildes here to distinguish them from the parameters in (3.2), and rescaled so that the parameter \(d\to 1\)). The lowest degree first integral of the map defined by (4.1), with degree pattern \((1,3,3,1)\), is given by \[H_{1}=w_{3}w_{2}^{3}w_{1}^{2}+w_{2}^{2}w_{1}^{3}w_{0}-w_{3}w_{2}^{2}w_{1}^{2}w_{0}+w_{2}^{3}w_{1}^{3}+\tilde{\nu}w_{2}^{2}w_{1}^{2}+\tilde{c}w_{2}w_{1}+\tilde{a}(w_{2}+w_{1}), \tag{4.2}\] and this is the same as \(I_{\rm low}^{\rm P.v}\) in [7]. Another first integral, with degree pattern \((2,4,4,2)\), is \[\begin{array}{rcl}H_{2}&=&w_{2}^{2}w_{1}^{2}\Big{(}(w_{3}w_{2}+w_{1}w_{0}+w_{2}w_{1})^{2}+\tilde{\nu}(w_{3}+w_{1})(w_{2}+w_{0})\Big{)}\\ &&+\tilde{c}w_{2}w_{1}(w_{3}w_{2}+w_{1}w_{0}+w_{2}w_{1})+\tilde{a}(w_{3}w_{2}^{2}+w_{1}^{2}w_{0}+w_{2}^{2}w_{1}+w_{2}w_{1}^{2}).\end{array} \tag{4.3}\] The second invariant presented in [7] is \(I_{\rm high}^{\rm P.v}=H_{2}-\tilde{\nu}H_{1}\). The nondegenerate Poisson bracket between the coordinates is given by \[\{\,w_{n},w_{n+1}\,\}=0,\,\{\,w_{n},w_{n+2}\,\}=\frac{1}{w_{n+1}^{2}},\,\{\,w_{n},w_{n+3}\,\}=-\frac{2(w_{n}w_{n+1}+w_{n+1}w_{n+2}+w_{n+2}w_{n+3})+\tilde{\nu}}{w_{n+1}^{2}w_{n+2}^{2}}.\] The independent first integrals (4.2) and (4.3) are in involution with respect to this bracket, which shows that the map (4.1) is Liouville integrable. Computing the Hamiltonian vector field for the first flow, generated by \(H_{1}\), we find that this takes the form \[\frac{{\rm d}w_{n}}{{\rm d}t}=w_{n}^{2}(w_{n+1}-w_{n-1}) \tag{4.4}\] for \(n=1,2\). However, since the map (4.1) is Poisson and commutes with the flow \(\{\cdot,H_{1}\}\), the equation (4.4) holds for all \(n\in\mathbb{Z}\). Thus the compatible solutions of the map and the flow together provide a sequence of functions \(\big{(}w_{n}(t)\big{)}_{n\in\mathbb{Z}}\) which satisfy (4.4), which is a degenerate case of the modified Volterra lattice equation [25]. 
If we make the tau function substitution \[w_{n}=\frac{\tau_{n}\tau_{n+2}}{\tau_{n+1}^{2}} \tag{4.5}\] for (P.v), then we find that the sequence \((\tau_{n})\) satisfies a Somos-8 relation. More precisely, by direct computer algebra calculations we can show the following: **Proposition 4.1**.: _Whenever \(w_{n}\) is a solution of (4.1), the sequence \((\tau_{n})\) satisfies the following Somos-8 recurrence, with coefficients that are functions of the Hamiltonians \(H_{1},H_{2}\) as in (4.2) and (4.3) above (constant along each orbit):_ \[\alpha_{1}\,\tau_{n+8}\tau_{n}+\alpha_{2}\,\tau_{n+7}\tau_{n+1}+\alpha_{3}\,\tau_{n+6}\tau_{n+2}+\alpha_{4}\,\tau_{n+5}\tau_{n+3}+\alpha_{5}\,\tau_{n+4}^{2}=0, \tag{4.6}\] _where the coefficients are given by_ \[\alpha_{1}=H_{1},\qquad\alpha_{2}=\tilde{a}H_{2},\qquad\alpha_{3}=\tilde{a}^{2}H_{2}-H_{1}^{3},\] \[\alpha_{4}=\tilde{a}\Big{(}H_{2}^{2}+\tilde{\nu}H_{1}H_{2}+\tilde{c}H_{1}^{2}+\tilde{a}^{2}H_{1}\Big{)},\qquad\alpha_{5}=-H_{1}\Big{(}H_{2}^{2}+\tilde{\nu}H_{1}H_{2}+\tilde{c}H_{1}^{2}+\tilde{a}^{2}H_{1}\Big{)}.\] Let us denote a solution of the Volterra lattice (3.6) by \(\hat{w}_{n}\). Then the Miura map from the modified Volterra lattice (4.4) takes the form \[\hat{w}_{n}=w_{n+1}w_{n}. \tag{4.7}\] This Miura map remains valid at the level of the maps (3.2) and (4.1), in the following sense. **Theorem 4.2**.: _Let \(w_{n}\) be a solution of (4.1) with parameters \(\tilde{a},\tilde{c},\tilde{\nu}\), lying on the level set \(H_{1}=\tilde{h}_{1}\), \(H_{2}=\tilde{h}_{2}\), of the first integrals (4.2) and (4.3). Then \(\hat{w}_{n}\) given by the Miura map (4.7) is a solution of (3.2) with parameters_ \[\nu=\tilde{\nu},\qquad b=\tilde{c},\qquad a=\tilde{h}_{1}.\] _Furthermore, on this solution \(\hat{w}_{n}\), the values \(h_{1},h_{2}\) of the first integrals (3.3) and (3.4) for the map (3.2) are given by_ \[h_{1}=\tilde{h}_{2},\qquad h_{2}=-\tilde{a}^{2}-\tilde{\nu}\tilde{h}_{2}-\tilde{c}\tilde{h}_{1}.\] **Proof:** The first part of this result is verified by substituting the Miura formula (4.7) directly into (3.2), using (4.1) to eliminate \(w_{n+5}\) followed by \(w_{n+4}\), and then using the formula for \(H_{1}\) in (4.2) to eliminate \(w_{n+3}\) on the level set \(H_{1}=\tilde{h}_{1}\). Analogous calculations, rewriting (3.3) and (3.4) in terms of \(w_{n}\) satisfying (4.1) and comparing with \(\tilde{h}_{2}\), the value of the first integral (4.3) for the latter map, yield the above expressions for \(h_{1},h_{2}\). It is worth commenting on the meaning of the Miura formula (4.7), restricted to this finite-dimensional setting. Given initial data \(w_{0},w_{1},w_{2},w_{3}\) for the map (4.1), we can fix a level set \(H_{1}=\tilde{h}_{1}\) to write \[\hat{w}_{0}=w_{0}w_{1},\,\hat{w}_{1}=w_{1}w_{2},\,\hat{w}_{2}=w_{2}w_{3},\,\hat{w}_{3}=w_{3}\,G(w_{1},w_{2},w_{3},\tilde{h}_{1}),\] for some rational function \(G\), obtained by using the formula (4.2) for \(H_{1}\) to eliminate \(w_{4}\). Similarly, we can use \(H_{1}\) to eliminate \(w_{0}\) above in terms of \(w_{1},w_{2},w_{3}\) and \(\tilde{h}_{1}\), and after taking resultants we can do further elimination to solve for each of \(w_{0},w_{1},w_{2},w_{3}\) as algebraic functions of \(\hat{w}_{0},\hat{w}_{1},\hat{w}_{2},\hat{w}_{3}\) and \(\tilde{h}_{1}\). So this leads to an explicit inverse of (4.7), at least in the form of an algebraic correspondence. 
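Theorem 4.2 can also be tested numerically, in the same spirit as before. The sketch below (again purely illustrative, with arbitrary rational parameters and initial data) iterates (4.1), pushes the orbit through the Miura map (4.7), and evaluates the left-hand side of (3.2) on the image sequence with the parameter identifications of the theorem; the printed residuals are expected to vanish identically.

```python
from fractions import Fraction as Fr

a_t, c_t, nu_t = Fr(1), Fr(-2), Fr(1, 2)   # tilde parameters of (4.1), arbitrary choices
w = [Fr(1), Fr(3), Fr(1, 2), Fr(2)]         # initial data (w_0, ..., w_3)

def step_Pv(w0, w1, w2, w3):
    # solve (4.1) for w_{n+4}
    num = (w2**2*w1**2*w0 + w2**3*(w1 + w3)**2
           + nu_t*w2**2*(w1 + w3) + c_t*w2 + a_t)
    return -num / (w3**2 * w2**2)

def H1_Pv(w0, w1, w2, w3):
    # first integral (4.2); its value supplies the parameter a of (3.2)
    return (w3*w2**3*w1**2 + w2**2*w1**3*w0 - w3*w2**2*w1**2*w0 + w2**3*w1**3
            + nu_t*w2**2*w1**2 + c_t*w2*w1 + a_t*(w2 + w1))

orbit = list(w)
for _ in range(8):
    orbit.append(step_Pv(*orbit[-4:]))
hat = [orbit[n + 1]*orbit[n] for n in range(len(orbit) - 1)]   # Miura map (4.7)

nu, b, a = nu_t, c_t, H1_Pv(*w)   # parameters of (3.2) predicted by Theorem 4.2

def residual_Piv(w0, w1, w2, w3, w4):
    # left-hand side of (3.2), evaluated on five consecutive terms
    return (w4*w3*w2 + w2*w1*w0 + 2*w2**2*(w3 + w1)
            + w2*(w3**2 + w3*w1 + w1**2) + w2**3
            + nu*w2*(w3 + w2 + w1) + b*w2 + a)

print([residual_Piv(*hat[n:n + 5]) for n in range(len(hat) - 4)])
```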
## 5 The map (P.vi) The map (P.vi) is given by \[w_{n+4}(w_{n+3}^{2}-\delta^{2})(w_{n+2}^{2}-\delta^{2})+w_{n}(w_ {n+1}^{2}-\delta^{2})(w_{n+2}^{2}-\delta^{2})\] \[+w_{n+2}\Big{(}(w_{n+2}^{2}-\delta^{2})(w_{n+3}+w_{n+1})^{2}+ \bar{c}-\delta^{4}\Big{)}+\bar{\nu}(w_{n+2}^{2}-\delta^{2})(w_{n+3}+w_{n+1})+ \bar{a}=0. \tag{5.1}\] This depends on only three essential parameters \(\bar{a},\bar{c},\bar{\nu}\); compared with [7] we have replaced \(a\to\bar{a}\), \(c\to\bar{c}\), \(d\to-\bar{\nu}\) and \(\delta\to\delta^{2}\). Note the map P(v) in the previous section arises from P(vi) in the limit \(\delta\to 0\), while for \(\delta\neq 0\) the map can always be rescaled so that \(\delta\to 1\), but it will be convenient to retain this parameter which has the same weight as \(w_{n}\) in (5.1). The lowest degree first integral of the map defined by (5.1), with degree pattern \((1,3,3,1)\), is given by \[H_{1} = \big{(}w_{1}^{2}w_{2}^{2}-\delta^{2}(w_{1}^{2}+w_{2}^{2})\big{)} \Big{(}w_{3}w_{2}+w_{0}w_{1}+w_{1}w_{2}-w_{3}w_{0}+\bar{\nu}\Big{)}\] \[\delta^{4}(w_{3}w_{2}+w_{0}w_{1}-w_{0}w_{3})+\bar{c}w_{2}w_{1}+ \bar{a}(w_{2}+w_{1}).\] A nondegenerate Poisson bracket between the coordinates is given by \[\{\,w_{n},w_{n+1}\,\}=0,\,\{\,w_{n},w_{n+2}\,\}=\frac{1}{w_{n+1}^{2}-\delta^{2} },\,\{\,w_{n},w_{n+3}\,\}=-\frac{2(w_{n}w_{n+1}+w_{n+1}w_{n+2}+w_{n+2}w_{n+3})+ \bar{\nu}}{(w_{n+1}^{2}-\delta^{2})(w_{n+2}^{2}-\delta^{2})},\] and was derived in [7] using a discrete Lagrangian structure for (5.1). A second independent first integral \(H_{2}\) was given in [7], which is in involution with \(H_{1}\) with respect to this bracket. Here we take the second independent quantity as \[H_{2} = (w_{1}^{2}-\delta^{2})(w_{2}^{2}-\delta^{2})^{2}\,w_{3}^{2}+(w_{1 }^{2}-\delta^{2})^{2}(w_{2}^{2}-\delta^{2})\,w_{0}^{2}+(2w_{1}w_{2}+\bar{\nu})( w_{1}^{2}-\delta^{2})(w_{2}^{2}-\delta^{2})\,w_{3}w_{0} \tag{5.3}\] \[+ \big{(}2w_{1}^{3}w_{2}^{2}+\bar{\nu}w_{1}^{2}w_{2}+\bar{c}w_{1}+ \bar{a}-(2w_{1}w_{2}^{2}+\bar{\nu}w_{2})\delta^{2}-w_{1}\delta^{4}\big{)}(w_{2}^ {2}-\delta^{2})\,w_{3}\] \[+ \big{(}2w_{1}^{2}w_{3}^{2}+\bar{\nu}w_{1}w_{2}^{2}+\bar{c}w_{2}+ \bar{a}-(2w_{1}^{2}w_{2}+\bar{\nu}w_{1})\delta^{2}-w_{2}\delta^{4}\big{)}(w_{1}^ {2}-\delta^{2})\,w_{0}\] \[+ w_{1}^{4}w_{2}^{4}+\bar{\nu}w_{1}^{3}w_{2}^{3}+\bar{c}w_{1}^{2}w_ {2}^{2}+\bar{a}w_{1}w_{2}(w_{1}+w_{2})\] \[- \Big{(}\big{(}w_{1}^{2}w_{2}^{2}+\bar{\nu}w_{1}w_{2}\big{)}(w_{1} ^{2}+w_{2}^{2})+\bar{a}(w_{1}+w_{2})\Big{)}\delta^{2}+(w_{1}^{2}w_{2}^{2}+\bar{ \nu}w_{1}w_{2}-\bar{c})\delta^{4}-(w_{1}^{2}+w_{2}^{2})\delta^{6};\] so the map (5.1) is Liouville integrable. The Hamiltonian vector field for the first flow, generated by \(H_{1}\), takes the form \[\frac{{\rm d}w_{n}}{{\rm d}t}=(w_{n}^{2}-\delta^{2})(w_{n+1}-w_{n-1}) \tag{5.4}\] for \(n=1,2\), and once again, since the Poisson map (5.1) is compatible with the flow \(\{\cdot,H_{1}\}\), the equation (5.4) holds for all \(n\in\mathbb{Z}\), and thus the map and the flow together produce a sequence of functions \(\big{(}w_{n}(t)\big{)}_{n\in\mathbb{Z}}\) satisfying (5.4), which (up to rescaling) is the general form of the modified Volterra lattice equation. If we set \(\delta\to 0\) in (5.4), then the equation (4.4) is recovered, corresponding to the same limit that reproduces (4.1) as a degenerate case of (5.1). However, the behaviour of the degenerate map (4.1) is sufficiently different compared with (5.1) e.g. with respect to singularity structure, that it is worth giving it a separate analysis as we have done here. 
Let us denote a solution of the Volterra lattice (3.6) by \(\hat{w}_{n}\). Then the Miura map from the modified Volterra lattice (5.4) takes the form \[\hat{w}_{n}=(w_{n+1}\mp\delta)(w_{n}\pm\delta), \tag{5.5}\] (so there are effectively two maps, with an opposite choice of sign in each factor on the right-hand side above). Moreover, this persists at the level of the maps (3.2) and (5.1), in the following sense. **Theorem 5.1**.: _Let \(w_{n}\) be a solution of (5.1) with parameters \(\bar{a},\bar{c},\bar{\nu}\), lying on the level set \(H_{1}=\bar{h}_{1}\), \(H_{2}=\bar{h}_{2}\) of the first integrals (5.2) and (5.3). Then for either choice of signs, \(\hat{w}_{n}\) given by the Miura map (5.5) is a solution of (3.2) with parameters_ \[\nu=\bar{\nu}+6\delta^{2},\qquad b=\bar{c}+4\bar{\nu}\delta^{2}+7\delta^{4},\qquad a=\bar{h}_{1}+\bar{c}\delta^{2}+\bar{\nu}\delta^{4}-\delta^{6}.\] _Moreover, on either solution \(\hat{w}_{n}\), the values \(h_{1},h_{2}\) of the first integrals (3.3) and (3.4) for the map (3.2) are given by_ \[h_{1}=\bar{h}_{2}+2\delta^{8},\qquad h_{2}=-\bar{a}^{2}-\bar{\nu}\bar{h}_{2}-\bar{c}\bar{h}_{1}-2\bar{h}_{2}\delta^{2}+(\bar{h}_{1}-\bar{\nu}\bar{c})\delta^{4}-\bar{\nu}\delta^{8}-4\delta^{10}.\] **Proof:** The first part of this result is verified by substituting the Miura formula (5.5) directly into (3.2), using (5.1) to eliminate \(w_{n+5}\) followed by \(w_{n+4}\), and then using (5.2) to eliminate \(w_{n+3}\) on the level set \(H_{1}=\bar{h}_{1}\). After the initial substitution of the Miura map and eliminating, all the final results are quadratic in \(\delta\), so do not depend on the choice of sign in (5.5). Similar calculations using the same substitutions in the formulae (3.3) and (3.4), together with the expression (5.3) on the level set \(H_{2}=\bar{h}_{2}\), produce the expressions for \(h_{1},h_{2}\), which are the corresponding values of the first integrals for (3.2). We can also make use of a tau function substitution for (P.vi), which has the more complicated structure \[w_{n}+\delta=\rho_{n}\,\frac{\sigma_{n+2}\tau_{n}}{\sigma_{n+1}\tau_{n+1}}, \tag{5.6}\] \[w_{n}-\delta=\frac{1}{\rho_{n+1}}\,\frac{\sigma_{n}\tau_{n+2}}{\sigma_{n+1}\tau_{n+1}}, \tag{5.7}\] with \[\rho_{n+2}=\rho_{n}.\] This implies that \[\hat{w}_{n}^{(+)}=(w_{n}-\delta)(w_{n+1}+\delta)=\frac{\sigma_{n}\sigma_{n+3}}{\sigma_{n+1}\sigma_{n+2}}, \tag{5.8}\] \[\hat{w}_{n}^{(-)}=(w_{n}+\delta)(w_{n+1}-\delta)=\frac{\tau_{n}\tau_{n+3}}{\tau_{n+1}\tau_{n+2}} \tag{5.9}\] are both solutions of (3.2), and both sequences \((\sigma_{n})\) and \((\tau_{n})\) satisfy the same Somos-9 relation. Thus the two different formulae for the Miura map in (5.5) can be regarded as defining a Backlund transformation for the discrete equation (3.2) with parameter \(\delta\), since given \(\hat{w}_{n}^{(-)}\) and a solution \(w_{n}\) of (5.1), a new solution \(\hat{w}_{n}^{(+)}\) of (3.2) is generated by taking \[\hat{w}_{n}^{(+)}=\hat{w}_{n}^{(-)}+2\delta(w_{n}-w_{n+1}),\] consistently with (5.8) and (5.9). 
## 6 Conclusion We have shown that the integrable maps (P.iv), (P.v) and (P.vi) from [7] are closely related to one another, via Miura-type transformations, and they provide genus two solutions of Volterra and modified Volterra lattices, respectively. So far we do not have a complete understanding of what the relations between these maps mean geometrically, particularly from the Poisson and algebro-geometric points of view. 
However, since the construction of the integrable maps \(\mathcal{V}_{g}\) presented in [10] is valid for any \(g\geq 1\), this strongly suggests that (P.v) and (P.vi) should each be the \(g=2\) members of a family of maps defined for any \(g\). In the elliptic case (\(g=1\)) we have constructed elliptic solutions of the modified Volterra and Volterra lattices, and showed how they are linked by the Miura transformation, essentially recovering the solutions found in [24], which can be interpreted in terms of integrable maps in the plane (QRT type). The complete description of these results, together with the proposed extension to families of maps for all \(g\geq 1\), is planned for future work. **Acknowledgments:** The research of ANWH was supported by Fellowship EP/M004333/1 from the Engineering & Physical Sciences Research Council, UK, extended by EP/V520718/1 COVID 19 Grant Extension Allocation University of Kent, and the grant IEC\(\backslash\)R3\(\backslash\)193024 from the Royal Society; he is also grateful to the School of Mathematics and Statistics, University of New South Wales, for hosting him during 2017-2019 as a Visiting Professorial Fellow with funding from the Distinguished Researcher Visitor Scheme, and to Wolfgang Schief, who provided additional support during his time in Sydney. ANWH also thanks DICATAM for supporting his visit to Brescia in November 2022. FZ acknowledges the support of Universita di Brescia, GNFM-INdAM and INFN, Sezione di Milano-Bicocca, Gr. IV - Mathematical Methods in NonLinear Physics (Milano, Italy).
2305.13579
Enhancing Detail Preservation for Customized Text-to-Image Generation: A Regularization-Free Approach
Recent text-to-image generation models have demonstrated impressive capability of generating text-aligned images with high fidelity. However, generating images of novel concept provided by the user input image is still a challenging task. To address this problem, researchers have been exploring various methods for customizing pre-trained text-to-image generation models. Currently, most existing methods for customizing pre-trained text-to-image generation models involve the use of regularization techniques to prevent over-fitting. While regularization will ease the challenge of customization and leads to successful content creation with respect to text guidance, it may restrict the model capability, resulting in the loss of detailed information and inferior performance. In this work, we propose a novel framework for customized text-to-image generation without the use of regularization. Specifically, our proposed framework consists of an encoder network and a novel sampling method which can tackle the over-fitting problem without the use of regularization. With the proposed framework, we are able to customize a large-scale text-to-image generation model within half a minute on single GPU, with only one image provided by the user. We demonstrate in experiments that our proposed framework outperforms existing methods, and preserves more fine-grained details.
Yufan Zhou, Ruiyi Zhang, Tong Sun, Jinhui Xu
2023-05-23T01:14:53Z
http://arxiv.org/abs/2305.13579v1
# Enhancing Detail Preservation for Customized Text-to-Image Generation: ###### Abstract Recent text-to-image generation models have demonstrated impressive capability of generating text-aligned images with high fidelity. However, generating images of novel concept provided by the user input image is still a challenging task. To address this problem, researchers have been exploring various methods for customizing pre-trained text-to-image generation models. Currently, most existing methods for customizing pre-trained text-to-image generation models involve the use of regularization techniques to prevent over-fitting. While regularization will ease the challenge of customization and leads to successful content creation with respect to text guidance, it may restrict the model capability, resulting in the loss of detailed information and inferior performance. In this work, we propose a novel framework for customized text-to-image generation without the use of regularization. Specifically, our proposed framework consists of an encoder network and a novel sampling method which can tackle the over-fitting problem without the use of regularization. With the proposed framework, we are able to customize a large-scale text-to-image generation model within half a minute on single GPU, with only one image provided by the user. We demonstrate in experiments that our proposed framework outperforms existing methods, and preserves more fine-grained details. ## 1 Introduction Text-to-image generation is a research topic that has been explored for years [33; 36; 38; 39; 41; 42], with remarkable progresses recently. Nowadays, researchers are able to perform zero-shot text-to-image generation with arbitrary text input by training large-scale models on web-scale datasets. Starting from DALL-E [21] and CogView [5], numerous methods have been proposed [3; 6; 7; 20; 22; 24; 37; 40], leading to impressive capability in generating text-aligned images of high resolution with exceptional fidelity. Besides text-to-image generation, these large-scale models also have huge impacts on many other applications including image manipulation [1; 10] and video generation [11; 29]. Although aforementioned large-scale text-to-image generation models are able to perform text-aligned and creative generation, they may face difficulties in generating novel and unique concepts [8] specified by users. Thus, researchers have exploited different methods in customizing pre-trained text-to-image generation models. For instance, [17; 23] propose to fine-tune the pre-trained generative models with few samples, where different regularization methods are applied to prevent over-fitting. [8; 9; 34] propose to encode the novel concept of user input image in a word embedding, which is obtained by an optimization method or from an encoder network. All these methods lead to customized generation for the novel concept, while satisfying additional requirements described in arbitrary user input text. Despite these progresses, recent research also makes us suspect that the use of regularization may potentially restrict the capability of customized generation, leading to the information loss of fine-grained details. In this paper, we propose a novel framework called _ProFusion_, which consists of an encoder called _PromptNet_ and a novel sampling method called _Fusion Sampling_. 
Different from previous methods, our ProFusion does not require any regularization, the potential over-fitting problem can be tackled by the proposed Fusion Sampling method at inference, which saves training time as there is no need to tune the hyper-parameters for regularization method. Our main contributions can be summarized as follows: * We propose ProFusion, a novel framework for customized generation. Given single testing image containing a unique concept, the proposed framework can generate customized output for the unique concept and meets additional requirement specified in arbitrary text. Only about 30 seconds of fine-tuning on single GPU is required; * The proposed framework does not require any regularization method to prevent over-fitting, which significantly reduces training time as there is no need to tune regularization hyperparameters. The absence of regularization also allows the proposed framework to achieve enhanced preservation of fine-grained details; * Extensive results,including qualitative, quantitative and human evaluation results, have demonstrated the effectiveness of the proposed ProFusion. Ablation studies are also conducted to better understand the components in the proposed framework; ## 2 Methodology We now present our proposed ProFusion framework, which consists of a neural network called PromptNet and a novel sampling method called Fusion Sampling. Specifically, PromptNet is an encoder network which can generate word embedding \(S^{*}\) conditioned on input image \(\mathbf{x}\), inside the input embedding space of the text encoder from Stable Diffusion 2. The major benefit of mapping \(\mathbf{x}\) into \(S^{*}\) is that \(S^{*}\) can be readily combined with arbitrary text to construct prompt for creative generation, _e.g._, "\(S^{*}\) from a superhero movie screenshot"; Meanwhile, the Fusion Sampling is a sampling method leads to promising generation which meets the specified text requirements while maintaining fine-grained details of the input image \(\mathbf{x}\). Figure 1: Customized generation with the proposed framework. Given only single testing image, we are able to perform customized generation which satisfies arbitrary specified requirements and preserves fine-grained details. Our core idea is presented in Figure 2. The proposed PromptNet infers \(S^{*}\) from an input image \(\mathbf{x}_{0}\) and current noisy generation \(\mathbf{x}_{t}\). Instead of using \(\mathbf{x}_{0}\), we can use \(\tilde{\mathbf{x}}_{0}\) during the training of PromptNet, which denotes a different view of \(\mathbf{x}_{0}\) and can be obtained by data augmentation, _e.g._, resizing, rotation. The PromptNet is trained with diffusion loss: \[L_{\text{Diffusion}}=\mathbb{E}_{\mathbf{x},\mathbf{y}(S^{*}),t,\boldsymbol{ \epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})}\left[\|\boldsymbol{\epsilon} -\boldsymbol{\epsilon}_{\boldsymbol{\theta}}(\mathbf{x}_{t},\mathbf{y}(S^{*}),t)\|_{2}^{2}\right], \tag{1}\] where \(\mathbf{y}(S^{*})\) denotes the constructed prompt containing \(S^{*}\), e.g. "A photo of \(S^{*}\)". Existing works [8, 9] use similar idea to obtain \(S^{*}\). However, regularization are often applied in these works. For instance, E4T [9] proposes to use an encoder to generate \(S^{*}\), which is optimized with \[L=L_{\text{Diffusion}}+\lambda\|S^{*}\|_{2}^{2}, \tag{2}\] where the \(L_{2}\) norm of \(S^{*}\) is regularized. 
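In code, the objective (1), with or without the penalty of (2), amounts to a few lines on top of a standard latent-diffusion training step. The PyTorch-style sketch below is only schematic: promptnet, unet, encode_prompt and the noise schedule alphas_bar are placeholder names standing in for the corresponding Stable Diffusion 2 components, whose exact interfaces are not specified in the text.

```python
import torch
import torch.nn.functional as F

def promptnet_loss(promptnet, unet, encode_prompt, alphas_bar, x0, lam=0.0):
    """One training step for Eq. (1); lam > 0 adds the L2 penalty of Eq. (2).
    All callables are hypothetical placeholders for the real model components."""
    bsz = x0.shape[0]
    t = torch.randint(0, alphas_bar.shape[0], (bsz,), device=x0.device)
    a_bar = alphas_bar[t].view(bsz, 1, 1, 1)

    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # forward diffusion of x0

    s_star = promptnet(x0, x_t)                  # word embedding S* inferred from the image
    cond = encode_prompt("A photo of {}", s_star)  # constructed prompt y(S*) containing S*

    eps_pred = unet(x_t, t, cond)                # predicted noise
    loss = F.mse_loss(eps_pred, noise)           # L_Diffusion of Eq. (1)
    if lam > 0:
        loss = loss + lam * s_star.pow(2).sum(dim=-1).mean()   # regularizer of Eq. (2)
    return loss
```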
Similarly, Textual Inversion [8] proposes to directly obtain \(S^{*}\) by solving \[S^{*}=\text{argmin}_{S^{\prime}}L_{\text{Diffusion}}+\lambda\|S^{\prime}-S\|_ {2}^{2}\] with optimization method, where \(S\) denotes a coarse embedding*. Footnote *: let \(S^{*}\) be a target embedding for a specific human face image, \(S\) can be set to be the embedding of text ”face”. In this work, we argue that although the use of regularization will ease the challenge and enables successful content creation with respect to testing text. It also leads to the loss of detailed information, resulting in inferior performance. To verify this argument, we conduct a simple experiment on FFHQ dataset [15]. We train several encoders with different levels of regularization by selecting different \(\lambda\) in (2). After training, we test their capability by classifier-free sampling [13] with different prompts containing resulting \(S^{*}\). The results are shown in Figure 3, from which we can find that smaller regularization leads to less information loss, which results in better preservation of details. However, the information could be too strong to prevent creative generation with respect to user input text. Meanwhile, large regularization leads to successful content creation, while fails to capture details of the input image, resulting in unsatisfactory results. A consequent question is, **is it possible to perform successful customized generation using \(S^{*}\) obtained without regularization so that the details from original image can be well-preserved?** To answer this question, we propose a novel sampling method called Fusion Sampling. Figure 3: The performance of customized generation is impacted by the level of regularization. Figure 2: Illustration of the proposed framework. ### Fusion Sampling Given a PromptNet pre-trained without regularization which can map input image \(\mathbf{x}_{0}\) into word embedding \(S^{*}\), our goal is to successfully perform customized generation which preserves details of \(\mathbf{x}_{0}\), and meets the requirements specified in arbitrary prompt containing \(S^{*}\). The task can be formulated as a conditional generation task with conditions \(S^{*}\) and \(C\), where \(C\) denotes arbitrary user input text. We start from the most commonly used classifier-free sampling [13]. To sample \(\mathbf{x}_{t-1}\) given current noisy sample \(\mathbf{x}_{t}\) and conditions \([S^{*},C]\), the diffusion model first outputs the predictions of conditional noise \(\mathbf{\epsilon_{\theta}}(\mathbf{x}_{t},S^{*},C)\) and unconditional noise \(\mathbf{\epsilon_{\theta}}(\mathbf{x}_{t})\). Then an updated prediction (with hyper-parameter \(\omega\)) \[\tilde{\mathbf{\epsilon_{\theta}}}(\mathbf{x}_{t},S^{*},C)=(1+\omega)\mathbf{ \epsilon_{\theta}}(\mathbf{x}_{t},S^{*},C)-\omega\mathbf{\epsilon_{\theta}}( \mathbf{x}_{t}), \tag{3}\] will be used in different sampling strategies [12; 14; 30; 31]. In customized generation, the reason that vanilla classifier-free sampling does not work without regularization is that, information from \(S^{*}\) can become too strong without regularization. As a result, \(\mathbf{\epsilon_{\theta}}(\mathbf{x}_{t},S^{*},C)\) will degenerate to \(\mathbf{\epsilon_{\theta}}(\mathbf{x}_{t},S^{*})\) and information of \(C\) will be lost. Thus, we need to propose a new sampling method, to produce a new prediction for \(\tilde{\mathbf{\epsilon_{\theta}}}(\mathbf{x}_{t},S^{*},C)\) which is enforced to be conditioned on both \(S^{*}\) and \(C\). 
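For reference, the baseline combination (3) is a one-line operation on the two noise predictions. The tensor-level sketch below (the names are illustrative, and the predictions are assumed to be computed elsewhere by the denoising network) fixes the notation used in the rest of this section.

```python
import torch

def classifier_free_eps(eps_cond: torch.Tensor, eps_uncond: torch.Tensor,
                        omega: float) -> torch.Tensor:
    # Eq. (3): (1 + omega) * eps(x_t, S*, C) - omega * eps(x_t)
    return (1.0 + omega) * eps_cond - omega * eps_uncond
```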
Sampling with independent conditionsWe begin by assuming that \(S^{*}\) and \(C\) are independent. According to [13], we know that \[\mathbf{\epsilon_{\theta}}(\mathbf{x}_{t},S^{*},C)=-\sqrt{1-\bar{\alpha}_{t}} \nabla\log p(\mathbf{x}_{t}\,|S^{*},C), \tag{4}\] where \(\bar{\alpha}_{t}\) is a hyper-parameter as defined in [12]. By (4) and Bayes' Rule, we can re-write (3) as \[\tilde{\mathbf{\epsilon_{\theta}}}(\mathbf{x}_{t},S^{*},C)=\mathbf{\epsilon_{\theta}} (\mathbf{x}_{t})-(1+\omega)\sqrt{1-\bar{\alpha}_{t}}\nabla\log p(S^{*},C| \,\mathbf{x}_{t}). \tag{5}\] Since we assume that \(S^{*},C\) are independent, we can further re-write the above as \[\tilde{\mathbf{\epsilon_{\theta}}}(\mathbf{x}_{t},S^{*},C) =\mathbf{\epsilon_{\theta}}(\mathbf{x}_{t})-(1+\omega)\sqrt{1-\bar{ \alpha}_{t}}\nabla\log p(S^{*}|\,\mathbf{x}_{t})-(1+\omega)\sqrt{1-\bar{ \alpha}_{t}}\nabla\log p(C|\,\mathbf{x}_{t})\] \[=\mathbf{\epsilon_{\theta}}(\mathbf{x}_{t})+(1+\omega)\{\mathbf{\epsilon_ {\theta}}(\mathbf{x}_{t},S^{*})-\mathbf{\epsilon_{\theta}}(\mathbf{x}_{t})\}+(1+ \omega)\{\mathbf{\epsilon_{\theta}}(\mathbf{x}_{t},C)-\mathbf{\epsilon_{\theta}}( \mathbf{x}_{t})\}.\] We re-write it as \[\tilde{\mathbf{\epsilon_{\theta}}}(\mathbf{x}_{t},S^{*},C)=\mathbf{\epsilon_{\theta}} (\mathbf{x}_{t})+(1+\omega_{1})\{\mathbf{\epsilon_{\theta}}(\mathbf{x}_{t},S^{*}) -\mathbf{\epsilon_{\theta}}(\mathbf{x}_{t})\}+(1+\omega_{2})\{\mathbf{\epsilon_{ \theta}}(\mathbf{x}_{t},C)-\mathbf{\epsilon_{\theta}}(\mathbf{x}_{t})\} \tag{6}\] for more flexibility. (6) can be readily extended to more complicated scenarios, where a list of conditions \(\{S_{1}^{*},S_{2}^{*},...,S_{k}^{*},C\}\) are provided. The corresponding \(\tilde{\mathbf{\epsilon_{\theta}}}(\mathbf{x}_{t},\{S_{i}^{*}\}_{i=1}^{k},C)\) is \[\tilde{\mathbf{\epsilon_{\theta}}}(\mathbf{x}_{t},\{S_{i}^{*}\}_{i=1}^{k},C)=\mathbf{ \epsilon_{\theta}}(\mathbf{x}_{t})+\sum_{i=1}^{k}(1+\omega_{i})\{\mathbf{\epsilon_ {\theta}}(\mathbf{x}_{t},S_{i}^{*})-\mathbf{\epsilon_{\theta}}(\mathbf{x}_{t})\}+ (1+\omega_{C})\{\mathbf{\epsilon_{\theta}}(\mathbf{x}_{t},C)-\mathbf{\epsilon_{ \theta}}(\mathbf{x}_{t})\}.\] Fusion Sampling with dependent conditionsOne major drawback of (6) is that the independence does not always hold in practice. As we will show in later experiment, assuming \(S^{*}\) and \(C\) to be independent can lead to inferior generation. To solve this problem, we propose Fusion Sampling, which consists of two stages at each timestep \(t\): a **fusion stage** which encodes information from both \(S^{*}\) and \(C\) into \(\mathbf{x}_{t}\) with an updated \(\tilde{\mathbf{x}}_{t}\), and a **refinement stage** which predicts \(\mathbf{x}_{t-1}\) based on Equation (6). The proposed algorithm is presented in Algorithm 1. Sampling with independent conditions can be regarded as a special case of Fusion Sampling with \(m=0\). In practice, \(m=1\) works well, thus we set \(m=1\) in all our experiments. The remaining challenge in Algorithm 1 is to sample \(\tilde{\mathbf{x}}_{t-1}\sim q(\tilde{\mathbf{x}}_{t-1}|\tilde{\mathbf{x}}_{t}, \tilde{\mathbf{x}}_{0})\) and \(\tilde{\mathbf{x}}_{t}\sim q(\tilde{\mathbf{x}}_{t}|\tilde{\mathbf{x}}_{t-1}, \tilde{\mathbf{x}}_{0})\). We take Denoising Diffusion Implicit Models (DDIM) [30] as an example, while the following derivation can be extended to other diffusion models. Let \(\mathbf{I}\) be the identity matrix, \(\sigma_{t}\) denotes a hyper-parameter controlling randomness. 
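Under the independence assumption, the combination (6) and its \(k\)-condition generalization are likewise element-wise operations on the per-condition noise predictions, as in the hedged sketch below (names and guidance weights are illustrative; the individual predictions are assumed to come from the denoising network). In Fusion Sampling the same combination is reused in the refinement stage.

```python
import torch

def independent_guidance(eps_uncond, eps_conds, omegas):
    """Eq. (6) and its k-condition generalization:
    eps = eps(x_t) + sum_i (1 + omega_i) * (eps(x_t, c_i) - eps(x_t))."""
    out = eps_uncond.clone()
    for eps_c, omega in zip(eps_conds, omegas):
        out = out + (1.0 + omega) * (eps_c - eps_uncond)
    return out

# Toy shapes only; in practice these come from the U-Net for conditions S* and C
eps_unc, eps_s, eps_c = torch.randn(3, 1, 4, 64, 64).unbind(0)
eps_tilde = independent_guidance(eps_unc, [eps_s, eps_c], [7.5, 7.5])
```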
In DDIM, we have \[q(\tilde{\mathbf{x}}_{t}|\tilde{\mathbf{x}}_{0})=\mathcal{N}(\tilde{\mathbf{x}}_ {t};\sqrt{\bar{\alpha}_{t}}\tilde{\mathbf{x}}_{0},(1-\bar{\alpha}_{t})\mathbf{ I}) \tag{7}\] and \[q(\tilde{\mathbf{x}}_{t-1}|\tilde{\mathbf{x}}_{t},\tilde{\mathbf{x}}_{0})= \mathcal{N}(\tilde{\mathbf{x}}_{t-1};\sqrt{\bar{\alpha}_{t-1}}\tilde{\mathbf{x}}_ {0}+\sqrt{1-\bar{\alpha}_{t-1}-\sigma_{t}^{2}}\frac{\tilde{\mathbf{x}}_{t}-\sqrt{ \bar{\alpha}_{t}}\tilde{\mathbf{x}}_{0}}{\sqrt{1-\bar{\alpha}_{t}}},\sigma_{t}^{2 }\mathbf{I}). \tag{8}\] By the property of Gaussian distributions [2], we know that \[q(\tilde{\mathbf{x}}_{t}|\tilde{\mathbf{x}}_{t-1},\tilde{\mathbf{x}}_{0})= \mathcal{N}(\tilde{\mathbf{x}}_{t};\boldsymbol{\Sigma}(A^{T}L(\tilde{\mathbf{x} }_{t-1}-b)+B\boldsymbol{\mu}),\boldsymbol{\Sigma}) \tag{9}\] where \[\boldsymbol{\Sigma}=\frac{(1-\bar{\alpha}_{t})\sigma_{t}^{2}}{1-\bar{\alpha}_{t -1}}\mathbf{I},\quad\boldsymbol{\mu}=\sqrt{\bar{\alpha}_{t}}\tilde{\mathbf{x} }_{0},\quad b=\sqrt{\bar{\alpha}_{t-1}}\tilde{\mathbf{x}}_{0}-\frac{\sqrt{ \bar{\alpha}_{t}(1-\bar{\alpha}_{t-1}-\sigma_{t}^{2})}}{\sqrt{1-\bar{\alpha} _{t}}}\tilde{\mathbf{x}}_{0}\] \[A=\frac{\sqrt{1-\bar{\alpha}_{t-1}-\sigma_{t}^{2}}}{\sqrt{1-\bar{\alpha}_{t}}} \mathbf{I},\quad L=\frac{1}{\sigma_{t}^{2}}\mathbf{I},\quad B=\frac{1}{1- \bar{\alpha}_{t}}\mathbf{I}\] which leads to \[\tilde{\mathbf{x}}_{t}= \frac{\sqrt{(1-\bar{\alpha}_{t})(1-\bar{\alpha}_{t-1}-\sigma_{t} ^{2})}}{1-\bar{\alpha}_{t-1}}\tilde{\mathbf{x}}_{t-1}+\frac{(1-\bar{\alpha}_{t })\sigma_{t}^{2}}{1-\bar{\alpha}_{t-1}}\,\mathbf{z}\] \[+\frac{\tilde{\mathbf{x}}_{0}}{1-\bar{\alpha}_{t-1}}\{\sqrt{\bar{ \alpha}_{t}}(1-\bar{\alpha}_{t-1})-\sqrt{\bar{\alpha}_{t-1}(1-\bar{\alpha}_{t })(1-\bar{\alpha}_{t-1}-\sigma_{t}^{2})}\}\},\quad\mathbf{z}\sim\mathcal{N}( \mathbf{0},\mathbf{I}). \tag{10}\] With further derivation, we can summarize a single update in fusion stage as: \[\tilde{\mathbf{x}}_{t}\leftarrow\tilde{\mathbf{x}}_{t}-\frac{\sigma_{t}^{2} \sqrt{1-\bar{\alpha}_{t}}}{1-\bar{\alpha}_{t-1}}\tilde{\boldsymbol{\epsilon}}_ {\boldsymbol{\theta}}(\tilde{\mathbf{x}}_{t},\gamma S^{*},C)+\frac{\sqrt{(1- \bar{\alpha}_{t})(2-2\bar{\alpha}_{t-1}-\sigma_{t}^{2})}}{1-\bar{\alpha}_{t- 1}}\sigma_{t}\,\mathbf{z},\ \ \mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I}). \tag{11}\] **Remark 1**: _Recall \(\tilde{\boldsymbol{\epsilon}}_{\boldsymbol{\theta}}(\tilde{\mathbf{x}}_{t}, \gamma S^{*},C)=-\sqrt{1-\bar{\alpha}_{t}}\nabla\log\tilde{p}_{\omega}(\tilde {\mathbf{x}}_{t}|\gamma S^{*},C)\)[13], we can re-write (11) as_ \[\tilde{\mathbf{x}}_{t}\leftarrow\tilde{\mathbf{x}}_{t}+\frac{\sigma_{t}^{2} (1-\bar{\alpha}_{t})}{1-\bar{\alpha}_{t-1}}\nabla\log\tilde{p}_{\omega}( \tilde{\mathbf{x}}_{t}|\gamma S^{*},C)+\frac{\sqrt{(1-\bar{\alpha}_{t})(2-2 \bar{\alpha}_{t-1}-\sigma_{t}^{2})}}{1-\bar{\alpha}_{t-1}}\sigma_{t}\,\mathbf{ z}\,. \tag{12}\] _From (12), we can conclude that our fusion stage is actually an gradient-based optimization method similar to Langevin dynamics [35]. Compared to Langevin dynamics which is_ \[\tilde{\mathbf{x}}_{t}\leftarrow\tilde{\mathbf{x}}_{t}+\lambda\nabla\log \tilde{p}_{\omega}(\tilde{\mathbf{x}}_{t}|\gamma S^{*},C)+\sqrt{2\lambda}\, \mathbf{z}\,. 
\tag{13}\] _with \(\lambda\) being the step size, (12) has less randomness, because_ \[\frac{(1-\bar{\alpha}_{t})(2-2\bar{\alpha}_{t-1}-\sigma_{t}^{2})\sigma_{t}^{2} }{(1-\bar{\alpha}_{t-1})^{2}}\leq\frac{2\sigma_{t}^{2}(1-\bar{\alpha}_{t})}{1 -\bar{\alpha}_{t-1}}.\] **Remark 2**: _If we set the DDIM hyper-parameter to be \(\sigma_{t}=\sqrt{1-\bar{\alpha}_{t-1}}\), then (11) becomes_ \[\tilde{\mathbf{x}}_{t}\leftarrow\tilde{\mathbf{x}}_{t}-\sqrt{1-\bar{\alpha}_{t }}\tilde{\boldsymbol{\epsilon}}(\tilde{\mathbf{x}}_{t},\gamma S^{*},C)+\sqrt{1 -\bar{\alpha}_{t}}\,\mathbf{z},\quad\mathbf{z}\sim\mathcal{N}(\mathbf{0}, \mathbf{I})\] _which is equivalent to sampling \(\tilde{\mathbf{x}}_{t}\) using (7) without sampling intermediate \(\tilde{\mathbf{x}}_{t-1}\) in our Algorithm 1. Thus directly sampling \(\tilde{\mathbf{x}}_{t}\) using (7) is a special case of our Fusion Sampling algorithm._ ## 3 Experiments We conduct extensive experiments to evaluate the proposed framework. Specifically, we first pre-train a PromptNet on FFHQ dataset [15] on 8 NVIDIA A100 GPUs for 80,000 iterations with a batch size of 64, without any data augmentation. Given a testing image, the PromptNet and all attention layers of the pre-trained Stable Diffusion 2 are fine-tuned for 50 steps with a batch size of 8. Only half a minute and a single GPU is required in fine-tuning such a customized generative model, indicating the efficiency of the proposed method, especially considering the impressive results we could obtain. Some more implementation details are provided in the Appendix. Our code and pre-trained models will be publicly available at [https://github.com/drboog/ProFusion](https://github.com/drboog/ProFusion). Figure 4: Comparison with baseline methods. Our proposed approach exhibits superior capability for preserving fine-grained details. Figure 5: The proposed framework enables generation conditioned on multiple input images and text. Creative interpolation can be performed. ### Qualitative Results Our main results are shown in Figure 1 and Figure 6. From the results, we can see that the proposed framework effectively achieves customized generation which meets the specified text requirements while maintaining fine-grained details of the input image. More results are provided in the Appendix. As mentioned previously, our proposed framework is also able to perform generation conditioned on multiple images. We also provide these generated examples in Figure 5. Following [9], we then compare proposed framework with several baseline methods including Stable Diffusion+[22], Textual Inversion [8], DreamBooth [23], E4T [9]. The qualitative results are presented in Figure 4, where the results of related methods are directly taken from [9]. From the comparison we can see that our framework results in better preservation of fine-grained details. Footnote †: The results of Stable Diffusion is obtained by directly feeding corresponding researcher’s name and text requirements into the pre-trained text-to-image generation model. ### Quantitative Results We also evaluate our methods and baseline methods quantitatively. Specifically, we utilize different pre-trained CLIP models [19] to calculate the image-prompt similarity between the generated image and input text. The results are shown in Table 1, our ProFusion obtains higher image-prompt similarity on all CLIP models, indicating better prompt-adherence and edit-ability.. 
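The image-prompt similarity reported in Table 1 is a standard CLIP score: the cosine similarity between the CLIP embeddings of a generated image and of its text prompt. The sketch below shows one way such a score can be computed with the Hugging Face transformers CLIP implementation; the file name and prompt are placeholders, and how the learned token \(S^{*}\) is handled when scoring is not specified here, so the prompt is a plain-language stand-in rather than the exact evaluation protocol.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

name = "openai/clip-vit-base-patch32"   # one of the backbones listed in Table 1
model = CLIPModel.from_pretrained(name)
proc = CLIPProcessor.from_pretrained(name)

image = Image.open("generated.png")               # hypothetical generated sample
prompt = "a person wearing a superman costume"    # illustrative text requirement

with torch.no_grad():
    batch = proc(text=[prompt], images=image, return_tensors="pt", padding=True)
    out = model(**batch)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)

print((img * txt).sum(-1).item())   # cosine similarity between image and prompt
```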
We then calculate the identity similarity between the generated image and input image, which is cosine similarity computed using features extracted by pre-trained face recognition models. The identity similarity is also evaluated across different pre-trained models [4; 16; 18; 25; 26; 27; 28; 32]. The results are shown in Table 2. In general, our ProFusion obtains higher similarity, indicating better identity preservation. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Pre-trained CLIP Models} \\ & VIT-B/32 & VIT-B/16 & VIT-L/14 & VIT-L/14 & 0336px & RN101 & RN50\(\times\)4 & RN50\(\times\)16 & RN50\(\times\)64 \\ \hline Stable Diffusion 2 & 0.271 & 0.256 & 0.196 & 0.196 & 0.428 & 0.202 & 0.355 & 0.254 & 0.181 \\ Textual Inversion & 0.257 & 0.251 & 0.197 & 0.201 & 0.426 & 0.195 & 0.350 & 0.247 & 0.173 \\ DreamBooth & 0.283 & 0.267 & 0.205 & 0.210 & 0.434 & 0.209 & 0.363 & 0.260 & 0.187 \\ E4T & 0.277 & 0.264 & 0.203 & 0.213 & 0.429 & 0.206 & 0.358 & 0.260 & 0.191 \\ **ProFusion (Ours)** & **0.293** & **0.283** & **0.225** & **0.229** & **0.446** & **0.223** & **0.374** & **0.279** & **0.202** \\ \hline \hline \end{tabular} \end{table} Table 1: Similarity (\(\uparrow\)) between generated example and input text. Figure 6: Some results of customized generation with the proposed framework. ### Human Evaluation We then conduct human evaluation on Amazon Mechanical Turk (MTurk). The workers are presented with two generated images from different methods along with original image and text requirements. They are then tasked with indicating their preferred choice. More details are provided in the Appendix. The results are shown in Figure 7, where we can find that our method obtains a higher preference rate compared to all other methods, indicating the effectiveness of our proposed framework. ### Ablation Study We conduct several ablation studies to further investigate the proposed ProFusion. Fusion SamplingFirst of all, we apply the proposed Fusion Sampling with both pre-trained and fine-tuned PromptNet. As shown in Figure 8, Fusion Sampling obtains better results on both pre-trained and fine-tuned models compared to baseline classifier-free sampling. We then investigate the effects of removing fusion stage or refinement stage in the proposed Fusion Sampling. As we can see from Figure 10, removing refinement stage leads to the loss in detailed information, while removing fusion stage leads to a generated image with disorganized structure. Intuitively, \(S^{*}\), which is the output of PromptNet, tries to generate a human face image following the structural information from the original image, while the text "is wearing superman costume" aims to generate a half-length photo. The conflicting nature of these two conditions results in an undesirable generation with a disorganized structure after we remove the fusion stage. 
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{6}{c}{Pre-trained Face Recognition Models} \\ & VGG-Face & Facenet & Facenet512 & OpenFace & DeepFace & ArcFace & SFace & AdaFace \\ \hline Stable Diffusion 2 & 0.530 & 0.334 & 0.323 & 0.497 & 0.641 & 0.144 & 0.191 & 0.093 \\ Textual Inversion & 0.516 & 0.410 & 0.372 & 0.566 & 0.651 & 0.248 & 0.231 & 0.210 \\ DreamBooth & 0.518 & 0.483 & 0.415 & 0.516 & 0.643 & 0.379 & 0.304 & 0.307 \\ E4T & 0.677 & 0.596 & **0.621** & 0.660 & 0.732 & 0.454 & 0.398 & 0.426 \\ **ProFusion (Ours)** & **0.720** & **0.616** & 0.597 & **0.681** & **0.774** & **0.459** & **0.443** & **0.432** \\ \hline \hline \end{tabular} \end{table} Table 2: Similarity (\(\uparrow\)) between generated example and input image. Figure 8: Examples with prompt "\(S^{*}\) in anime style", Fusion Sampling outperforms baseline. Figure 7: Results of human evaluation. Data AugmentationWe then analyze the effects of data augmentation. In particular, we conduct separate fine-tuning experiments: one with data augmentation and one without, both models are tested with Fusion Sampling after fine-tuning. The results are shown in Figure 9, we observe an improvement in performance as a result of employing data augmentation. Our data augmentation strategy is presented in the Appendix. ## 4 Discussion Although the proposed framework has demonstrated remarkable capability in achieving high-quality customized generation, there are areas that can be improved. For instance, although ProFusion can reduce the training time by only requiring a single training without the need of tuning regularization hyperparameters, the proposed Fusion Sampling actually results in an increased inference time. This is due to the division of each sampling step into two stages. In the future, we would like to explore ways to improve the efficiency of Fusion Sampling. Similar to other related works, our framework utilizing large-scale text-to-image generation models can raise ethical implications, both positive and negative. On the one hand, customized generation can create images with sensitive information and spread misinformation; On the other hand, it also holds the potential to minimize model biases as discussed in [8; 9]. Thus it is crucial to exercise proper supervision when implementing these methods in real-world applications. ## 5 Conclusion In this paper, we present ProFusion, a novel framework for customized generation. Different from related methods which employs regularization, ProFusion successfully performs customized generation without any regularization, thus exhibits superior capability for preserving fine-grained details with less training time. Extensive experiments have demonstrated the effectiveness of the proposed ProFusion. Figure 10: Generated examples of ablation study, with prompt "\(S^{*}\) is wearing superman costume". Figure 9: Data augmentation in fine-tuning stage leads to performance improvement.
2308.02488
Recovering non-Maxwellian particle velocity distribution functions from collective Thomson-scattered spectra
Collective optical Thomson scattering (TS) is a diagnostic commonly used to characterize plasma parameters. These parameters are typically extracted by a fitting algorithm that minimizes the difference between a measured scattered spectrum and an analytic spectrum calculated from the velocity distribution function (VDF) of the plasma. However, most existing TS analysis algorithms assume the VDFs are Maxwellian, and applying an algorithm which makes this assumption does not accurately extract the plasma parameters of a non-Maxwellian plasma due to the effect of non-Maxwellian deviations on the TS spectra. We present new open-source numerical tools for forward modeling analytic spectra from arbitrary VDFs, and show that these tools are able to more accurately extract plasma parameters from synthetic TS spectra generated by non-Maxwellian VDFs compared to standard TS algorithms. Estimated posterior probability distributions of fits to synthetic spectra for a variety of example non-Maxwellian VDFs are used to determine uncertainties in the extracted plasma parameters, and show that correlations between parameters can significantly affect the accuracy of fits in plasmas with non-Maxwellian VDFs.
Bryan C. Foo, Derek B. Schaeffer, Peter V. Heuer
2023-08-04T17:59:23Z
http://arxiv.org/abs/2308.02488v1
Recovering non-Maxwellian particle velocity distribution functions from collective Thomson-scattered spectra ###### Abstract Collective optical Thomson scattering (TS) is a diagnostic commonly used to characterize plasma parameters. These parameters are typically extracted by a fitting algorithm that minimizes the difference between a measured scattered spectrum and an analytic spectrum calculated from the velocity distribution function (VDF) of the plasma. However, most existing TS analysis algorithms assume the VDFs are Maxwellian, and applying an algorithm which makes this assumption does not accurately extract the plasma parameters of a non-Maxwellian plasma due to the effect of non-Maxwellian deviations on the TS spectra. We present new open-source numerical tools for forward modeling analytic spectra from arbitrary VDFs, and show that these tools are able to more accurately extract plasma parameters from synthetic TS spectra generated by non-Maxwellian VDFs compared to standard TS algorithms. Estimated posterior probability distributions of fits to synthetic spectra for a variety of example non-Maxwellian VDFs are used to determine uncertainties in the extracted plasma parameters, and show that correlations between parameters can significantly affect the accuracy of fits in plasmas with non-Maxwellian VDFs. Thomson scattering, non-Maxwellian distribution functions, numerical methods ## I Introduction Thomson scattering (TS) refers to the scattering of electromagnetic radiation by a collection of many charged particles, such as a plasma [1; 2]. Optical TS, in which the probing radiation consists of optical wavelengths, is a widely-used _in situ_, non-perturbative diagnostic tool for characterizing plasmas, with applications ranging from laboratory astrophysics [3; 4; 5; 6] to fusion plasmas [7; 8; 9; 10]. In collective TS, the parameters associated with the plasmas of interest cause incident optical radiation to be scattered by electron plasma waves (EPW) and ion acoustic waves (IAW). The fluctuations in the charge density produce TS spectral features [1] which can be measured and analyzed in order to extract information about the particle velocity distribution functions (VDFs). Most TS analysis tools assume that the plasma is thermalized so that the electron and ion VDFs are both Maxwellian with corresponding parameters like the temperatures \(T_{e}\), \(T_{j}\) and the densities \(n_{e}\), \(n_{j}\), of the electron and ion populations. These Maxwellian parameters can then be inferred from the spectrum by an algorithm that minimizes the difference between the measured scattered spectrum and an analytic spectrum generated from these Maxwellian VDFs. Non-Maxwellian VDFs have been directly observed with TS diagnostics in recent experiments on high-energy-density (HED) plasmas [11; 12; 13] and laser-driven collisionless shocks [4; 14]. Indeed, deviations of the VDF from a Maxwellian can be crucial to understanding the plasma dynamics. Previous theoretical studies have discussed the form of the TS spectra generated from non-Maxwellian distributions such as a two-stream distribution [15; 16] and a super-Gaussian distribution [17], as well as how the resulting spectra deviate from their Maxwellian counterparts. Milder _et al._[18] have also shown that fitting the TS spectrum produced by a super-Gaussian electron VDF with a Maxwellian model can give incorrect plasma parameters. 
The inaccurate fitting is due to how the non-Maxwellian deviations affect the strength of Landau damping at the location of the Thomson spectral peaks. Consequently, existing tools designed for Maxwellian VDFs are insufficient to analyze TS spectra from plasmas with non-Maxwellian VDFs. In this paper we investigate how non-Maxwellian VDFs impact TS spectra with the aid of two new numerical tools that we have developed [19]. The first tool is a "forward model" which computes the analytic TS spectrum from a set of arbitrary discretized particle VDFs, and the second is a "fitting algorithm" which extracts non-Maxwellian plasma parameters from a TS spectrum by iteratively applying the non-Maxwellian forward model at different points in parameter space. These open-source tools expand on the work of Milder _et al._ by enabling the study of arbitrary non-Maxwellian VDFs. For the fitting algorithm, we examine the use of two schemes to optimize the parameter space exploration. One scheme is differential evolution (DE), which attempts to continuously optimize a candidate solution by combining previous solutions. This scheme can search a large parameter space and is often computationally more efficient that "brute force" methods, but it is susceptible to uncertainties caused by correlations between parameters in the solution. The second scheme is a Monte Carlo Markov Chain (MCMC). While generally computation ally more expensive than DE, MCMC is well-suited for exploring very large parameter spaces that can be used to estimate the uncertainties in the best-fit parameters from the DE scheme. In Sec. II we review the TS theory that forms the basis for our method. The numerical tools we developed to implement this routine are discussed in Sec. III-IV. We also describe a process for testing our method and comparing it with an open-source method that assumes Maxwellian VDFs, the results of which are presented in Sec. V. Finally, in Sec. V.2 we analyze the uncertainty and confidence associated with our fitting algorithm and discuss possible areas of improvement. Our conclusions are summarized in Sec. VI. ## II Thomson scattering theory In this section we briefly review the theory behind Thomson scattering, following the approach of Froula _et al_[1]. TS occurs when the absorption of an incident photon causes a charged particle to undergo acceleration, which then induces Larmor radiation as the charge emits a photon. In the charge's rest frame (the primed frame), the frequency \(\omega_{s}^{\prime}\) of the scattered photon is equal to the frequency \(\omega_{i}^{\prime}\) of the incident photon. In the (unprimed) lab frame, \(\omega_{s}\) can be solved for by computing the primed frame solution and applying the appropriate Doppler shifts, which are a function of the particle velocity \(\mathbf{v}\) as seen in the lab frame, as well as the incident and scattered wavevectors \(\mathbf{k}_{i}\) and \(\mathbf{k}_{s}\), respectively. If we define \(\omega\equiv\omega_{s}-\omega_{i}\) and \(\mathbf{k}\equiv\mathbf{k}_{s}-\mathbf{k}_{i}\), the final result is shown to be \[\omega=\mathbf{k}\cdot\mathbf{v}. \tag{1}\] In the case of many charges, monochromatic incident light is scattered into a spectrum of frequencies, which is determined by the velocities of all charged particles in the plasma. In that case, the scattered power density \(P\) has the following proportionality: \[P(\mathbf{k}_{s},\omega_{s})\propto\left(1+\frac{2\omega}{\omega_{i}}\right)S (\mathbf{k},\omega). 
\tag{2}\] The factor \(S(\mathbf{k},\omega)\) is the spectral density function, which contains the dependence on the velocities of the charged particles in the plasma. Note that when discussing TS forward models and fitting algorithms, the "TS spectrum" being computed and fitted is either the spectral density function itself, or the scattered power as in Eq. 2, depending on the data being fit. For the purposes of this paper, the TS spectrum is the normalized scattered power unless otherwise specified. In general, the spectral density function can be written in terms of the normalized VDFs of the electrons and ions in the plasma: \[S(\mathbf{k},\omega)=\underbrace{\frac{2\pi}{k}\bigg{|}1-\frac{ \chi_{e}}{\epsilon}\bigg{|}^{2}f_{eo}(\omega/k)}_{\text{Electron component}}\\ +\underbrace{\sum_{j}\frac{2\pi}{k}\frac{Z_{j}^{2}n_{j0}}{N}\bigg{|} \frac{\chi_{j}}{\epsilon}\bigg{|}^{2}f_{jo}(\omega/k)}_{\text{Ion component}}. \tag{3}\] Here \(f_{eo}\) is the one-dimensional electron VDF in the direction of measurement and \(f_{jo}\) are the ion VDFs, with \(j\) indexing the ion species. \(Z_{j}\) and \(n_{j0}\) are the charge and density of ion species respectively, and \(N\) is the combined density of all ions. The electron susceptibility \(\chi_{e}\) and the ion susceptibilities \(\chi_{j}\) are functions of \(\mathbf{k}\) and \(\omega\) given by \[\chi_{e}(\mathbf{k},\omega)=\frac{4\pi e^{2}n_{e0}}{m_{e}k^{2}} \int_{-\infty}^{\infty}d\mathbf{v}\frac{\mathbf{k}\cdot\partial f_{e0}/ \partial\mathbf{v}}{\omega-\mathbf{k}\cdot\mathbf{v}} \tag{4}\] \[\chi_{j}(\mathbf{k},\omega)=\frac{4\pi Z^{2}e^{2}n_{j0}}{m_{j}k^{2 }}\int_{-\infty}^{\infty}d\mathbf{v}\frac{\mathbf{k}\cdot\partial f_{j0}/ \partial\mathbf{v}}{\omega-\mathbf{k}\cdot\mathbf{v}}, \tag{5}\] where \(n_{e0}\) is the electron density, \(e\) is the electric charge, \(m_{e}\) is the electron mass, and the integrals can be performed along a Landau contour which deviates from the real axis just enough to avoid the pole at \(v=\omega/k.\) The longitudinal dielectric function \(\epsilon\) is given by \[\epsilon=1+\chi_{e}+\sum_{j}\chi_{j}. \tag{6}\] In practice, the TS will not greatly perturb the photon frequency, so the TS spectrum will only be non-negligible in a small range of scattered frequencies around the incident frequency, \(\omega_{s}\approx\omega_{i}.\) Therefore, \(\mathbf{k}_{s}\) and also \(\mathbf{k}\) will also not vary significantly. If we make the approximation that the direction of \(\mathbf{k}\) is effectively fixed, then \(\chi_{e}\) and \(\chi_{j}\) will only be sensitive to the 1D projected VDF in the direction of \(\mathbf{k}\) and Eqs. 4-5 can be rewritten as integrals over scalars. TS can either be non-collective, meaning dominated by scattering off of individual, non-correlated electrons in the plasma, or collective, meaning, in unmagnetized plasmas, dominated by electron plasma waves (EPWs) and ion acoustic waves (IAWs) which propagate through the plasma. These regimes can be distinguished by the scattering parameter \(\alpha\), defined in terms of the electron Debye length \(\lambda_{De}\) as \[\alpha=\frac{1}{k\lambda_{De}}. \tag{7}\] If \(\alpha\ll 1\), then \(\lambda\ll\lambda_{De}\) and the incident radiation effectively sees the electrons as free (unbound), leading to the non-collective regime. If \(\alpha\gtrsim 1\), then the radiation sees the Debye-shielded charges and therefore the correlations between the motions of the electrons, leading to collective scattering. 
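
As a concrete illustration of Eq. 7, the short numerical sketch below estimates \(\alpha\) for an optical probe; the probe wavelength, scattering angle, and plasma conditions used here are illustrative choices rather than values taken from any particular experiment in this paper.

```python
import numpy as np

eps0 = 8.854e-12   # vacuum permittivity [F/m]
e = 1.602e-19      # elementary charge [C]

def debye_length(n_e, T_e_eV):
    """Electron Debye length [m]; n_e in m^-3, T_e in eV."""
    return np.sqrt(eps0 * T_e_eV / (n_e * e))

def scattering_parameter(lambda_probe, theta, n_e, T_e_eV):
    """alpha = 1/(k * lambda_De) of Eq. 7, with k = 2 k_i sin(theta/2)."""
    k_i = 2.0 * np.pi / lambda_probe
    k = 2.0 * k_i * np.sin(theta / 2.0)
    return 1.0 / (k * debye_length(n_e, T_e_eV))

# Illustrative numbers: 532 nm probe, 60 degree scattering angle,
# n_e = 4e18 cm^-3 (= 4e24 m^-3), T_e = 300 eV.
alpha = scattering_parameter(532e-9, np.deg2rad(60.0), 4e24, 300.0)
print(f"alpha = {alpha:.2f} ->", "collective" if alpha >= 1 else "non-collective")
```
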
The collective TS spectrum can be broken into two regimes. At high \(|\omega|\), the heavier ions are unable to respond, so the electron component as labeled in Eq. 3 dominates. At low \(|\omega|\), both the electrons and ions are able to respond, but the ion component tends to dominate. The electron component is generally still non-trivial, which becomes important when applying a fitting algorithm. We will refer to the high \(|\omega|\) regime as the EPW spectrum, as it is dominated by scattering from EPWs, and the low \(|\omega|\) regime as the IAW spectrum, as it is dominated by scattering from IAWs. Due to the difference in frequency scales between the electron- and ion-dominated components, the EPW and IAW spectra are usually measured with separate spectrometers in experiments [4; 20], so treating them separately is justified. Additionally, the measurement of the EPW spectrum usually involves the placement of a notch filter to block out the low-\(\omega\) portion of the spectrum as it is much brighter than the high-\(\omega\) electron contributions. Our code takes this notch filter into account. Because Eq. 3 can be used to compute the TS spectrum from an arbitrary set of VDFs, in principle it may be possible to invert this process and infer the VDFs from a TS spectrum, assuming that the TS spectrum is a non-degenerate function of the VDFs. In practice this inversion can be unreliable even if the TS spectrum is nearly degenerate or if the VDFs are described by too many parameters. This inversion process is known to work if the VDFs in question are Maxwellian [1], and most tools developed for TS analysis use this Maxwellian assumption. However, these tools cannot accurately reconstruct the VDFs when this assumption is violated, as described in following sections. ### Example VDFs Although in theory we can compute a TS spectrum from arbitrary VDFs, the examples presented in the figures in this paper focus on a representative selection of non-Maxwellian VDF models relevant to plasma physics. These VDF models as well as their plasma parameters are discussed below. Maxwellians are the most common VDFs in plasmas, as they describe plasmas which are in thermal equilibrium and can be derived from statistical mechanics and the Boltzmann distribution. A Maxwellian distribution takes the form \[f(v)=N\exp\bigg{(}-\frac{m(v-v_{D})^{2}}{2k_{B}T}\bigg{)}, \tag{8}\] where \(k_{B}\) is the Boltzmann constant and \(N\) is some normalization factor. A unit-normalized Maxwellian has two parameters: the drift velocity \(v_{D}\) and the temperature \(T\). A VDF could also be composed of a linear combination of two or more Maxwellians which have different temperatures and/or drift velocities. The kappa or generalized Lorentzian distribution is a non-Maxwellian distribution which is expected in plasmas where collisions are insufficient to thermalize the plasma, leaving more particles at high energy and forming suprathermal tails which deviate from Maxwellians. The integral-normalized kappa distribution takes the form \[f(v)=\frac{\kappa^{-\frac{3}{2}}}{2\pi w_{\kappa}^{3}}\frac{\Gamma(\kappa+1)}{ \Gamma(\kappa-\frac{1}{2})\Gamma(\frac{3}{2})}\bigg{(}1+\frac{(v-v_{D})^{2}}{ \kappa w_{\kappa}^{2}}\bigg{)}^{-(\kappa+1)}, \tag{9}\] with \(w_{\kappa}=\sqrt{(2\kappa-3)k_{B}T/\kappa m}\) for particles of mass \(m\). Here \(v_{D}\) is the drift velocity, \(\Gamma()\) is the Gamma function, and the spectral index \(\kappa\) is a measure of the non-Maxwellian deviation [21]. 
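
As a small numerical sanity check of Eq. 9, the sketch below evaluates the integral-normalized kappa distribution and confirms that it integrates to one over three-dimensional velocity space; the temperature, mass, and \(\kappa\) values are illustrative, and scipy is assumed for the Gamma function.

```python
import numpy as np
from scipy.special import gamma as Gamma

kB = 1.381e-23   # Boltzmann constant [J/K]
me = 9.109e-31   # electron mass [kg]

def kappa_vdf(v, T, m, kappa, v_D=0.0):
    """Isotropic kappa distribution of Eq. 9, normalized over 3D velocity space."""
    w2 = (2.0 * kappa - 3.0) * kB * T / (kappa * m)      # w_kappa^2
    norm = (kappa * w2) ** -1.5 / (2.0 * np.pi) * Gamma(kappa + 1.0) / (
        Gamma(kappa - 0.5) * Gamma(1.5)
    )
    return norm * (1.0 + (v - v_D) ** 2 / (kappa * w2)) ** (-(kappa + 1.0))

# Illustrative parameters: 100 eV electrons with kappa = 3.
T = 100.0 * 1.602e-19 / kB             # 100 eV expressed in kelvin
v = np.linspace(0.0, 2.0e8, 200_001)   # speed grid [m/s]
f = kappa_vdf(v, T, me, kappa=3.0)

# For an isotropic VDF, the 3D normalization is the speed integral of 4*pi*v^2*f(v).
print(np.trapz(4.0 * np.pi * v ** 2 * f, v))   # close to 1
```
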
Note that the usual notion of temperature does not apply to non-Maxwellian distributions as they are not thermalized, but we can still define an equivalent temperature \(T\) in terms of the variance of the VDF: \[T\equiv C\int dvf(v)(v-v_{D})^{2}, \tag{10}\] where \(C\) is an appropriate normalization such that \(T\) matches the usual temperature for Maxwellians, and \(v_{D}\) is the mean drift of the VDF, defined as \[v_{D}\equiv\int dvf(v)v. \tag{11}\] In the limit \(\kappa\rightarrow\infty\), the kappa distribution approaches a Maxwellian with the same drift velocity and equivalent temperature, while at \(\kappa=3/2\), the distribution collapses and becomes undefined. Taking a linear combination of a hot kappa distribution and a cooler Maxwellian results in a core-halo distribution, which is commonly observed in the solar wind [22]. The super-Gaussian distribution is another common non-Maxwellian VDF model in HED plasmas and can be created by inverse bremsstrahlung heating [18]. The general form looks similar to a Maxwellian or Gaussian distribution but with the power in the exponent as an additional free parameter. The 3D isotropic super-Gaussian distribution takes the form [18] \[F(\mathbf{v})=\frac{p}{4\pi v_{p}^{3}\Gamma(3/p)}\exp\left(-\left|\frac{\mathbf{v}-\mathbf{v}_{D}}{v_{p}}\right|^{p}\right), \tag{12}\] with drift velocity \(\mathbf{v}_{D}\) and additional parameters \(x\) and \(p\). When \(p=2\) this reduces to a Maxwellian (Eq. 8) and \(x\) becomes the temperature (up to a constant). For \(p\neq 2\), the parameter \(x\) remains linearly related to the temperature via a constant factor which depends on \(p\). The corresponding 1D projection is given by \[f(v_{x})=\int F(\mathbf{v})\;dv_{y}dv_{z}, \tag{13}\] which can be computed in terms of gamma functions. A related VDF which is simpler to express is the 1D super-Gaussian: \[f(v)=\frac{p}{2v_{p}\Gamma(1/p)}\exp\left(-\left|\frac{v-v_{D}}{v_{p}}\right|^{p}\right). \tag{14}\] ### \(\chi\) for different VDF models Recalling the definitions of the \(\chi\) functions in Eqs. 4-5, note that if we perform a change of variables \(\mathbf{u}=\mathbf{v}/\left(v_{th}\sqrt{2}\right)\), \(\xi=\omega/\left(kv_{th}\sqrt{2}\right)\) and operate in the regime where \(k\) is nearly fixed, then we can write \(\chi\) as a function of \(\xi\), up to some constant factors determined by the plasma parameters: \[\chi_{e}(\xi)\propto\int_{-\infty}^{\infty}d\mathbf{u}\frac{\partial f_{e0}/\partial\mathbf{u}}{\xi-\mathbf{u}}, \tag{15}\] \[\chi_{j}(\xi)\propto\int_{-\infty}^{\infty}d\mathbf{u}\frac{\partial f_{j0}/\partial\mathbf{u}}{\xi-\mathbf{u}}. \tag{16}\] These dimensionless integrals have the temperature dependence factored out, so they only depend on the overall form of the VDF and will differ for different VDF models. Fig. 1 shows the dimensionless \(\chi(\xi)\) integrals for three different VDF models for reference. It is helpful to note that the real part of \(\chi\) represents the dispersion of an electrostatic wave at given \(\omega\) through the plasma, while the imaginary part represents the Landau damping on electrostatic waves. The \(\chi\) functions significantly impact the features of the TS spectrum. For instance, a larger imaginary part of \(\chi\), corresponding to stronger Landau damping, is associated with broader spectral peaks [18]. This can be seen in Fig. 2, which shows several analytic TS spectra from Maxwellian and drifting Maxwellian VDFs.
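
To make the equivalent-temperature definition in Eq. 10 concrete, the following sketch evaluates it for the 1D super-Gaussian of Eq. 14 and confirms that \(p=2\) recovers the usual Maxwellian temperature; here \(C\) is taken as \(m/k_{B}\) so that a Maxwellian returns its own temperature, and all numerical values are illustrative.

```python
import numpy as np
from scipy.special import gamma as Gamma

kB = 1.381e-23   # Boltzmann constant [J/K]
me = 9.109e-31   # electron mass [kg]

def super_gaussian_1d(v, v_p, p, v_D=0.0):
    """1D super-Gaussian of Eq. 14; p = 2 reduces to a Maxwellian."""
    return p / (2.0 * v_p * Gamma(1.0 / p)) * np.exp(-np.abs((v - v_D) / v_p) ** p)

def equivalent_temperature(v, f, m):
    """Eq. 10 with C = m/k_B, so a Maxwellian returns its usual temperature."""
    v_D = np.trapz(f * v, v)                       # mean drift, Eq. 11
    return m / kB * np.trapz(f * (v - v_D) ** 2, v)

T0 = 300.0 * 1.602e-19 / kB             # 300 eV in kelvin (illustrative)
v_p = np.sqrt(2.0 * kB * T0 / me)       # with this choice, p = 2 is a Maxwellian at T0
v = np.linspace(-1.5e8, 1.5e8, 400_001)

for p in (2.0, 3.0, 5.0):
    T_eq = equivalent_temperature(v, super_gaussian_1d(v, v_p, p), me)
    print(f"p = {p}: equivalent T = {T_eq * kB / 1.602e-19:.0f} eV")
# p = 2 returns ~300 eV; at fixed v_p, larger p flattens the top and lowers the variance.
```
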
The location of the EPW spectral peaks depends on the dispersion relation of the EPWs [1], which only weakly depends on the temperature at optical wavelengths (assuming the charge density is fixed). Therefore, increasing the temperature leaves the location of the spectral peaks essentially unchanged in velocity space, but increases the thermal velocity and therefore moves the spectral peaks to lower \(\xi\), which corresponds to higher imaginary \(\chi\) (for \(\xi\gtrsim 1\)). Thus, we expect broader spectral peaks at higher temperatures, which is seen in the EPW TS spectra in Fig. 2. Non-Maxwellian deviations in the VDFs impact \(\chi\) even if the temperature and density are held constant, which then affects the TS spectrum. A purely Maxwellian fitting algorithm can only vary the Maxwellian temperature and density, so when fitting a TS spectrum with non-Maxwellian deviations in \(\chi\), the algorithm will attempt to compensate for those non-Maxwellian deviations by changing the temperature and density in order to affect \(\chi\). This leads to incorrect fitting of the plasma parameters. This can be illustrated following the above example with temperature. For a given \(\xi\gtrsim 1\), imaginary \(\chi\) (Landau damping) is smaller for a super-Gaussian compared to a Maxwellian distribution, as shown in Fig. 1. Consequently, the same temperature (as defined in Sec. II.1) will lead to narrower spectral peaks for the super-Gaussian. Figure 1: Comparison of \(\chi(\xi)\) for three different VDF models: Maxwellian (blue), kappa (\(\kappa=3\), red), and 1D super-Gaussian (\(p=3\), yellow). The real part of \(\chi\) is shown in a) and the imaginary part is shown in b). The imaginary part is shown on a semi-log plot to emphasize the distinction between \(\chi\) for the different VDF models at high \(\xi\). ## III Arbitrary forward model In this section we describe a forward model which maps a given set of discretized arbitrary VDFs to a corresponding TS spectrum, assuming that the VDFs are in quasi-equilibrium and that the plasma is unmagnetized [1]. The plasma physics scientific python package PlasmaPy [23] includes a built-in function which computes the TS spectrum given a set of Maxwellian plasma parameters (the "Maxwellian forward model"), which includes ion and electron temperatures, densities, and drift velocities. We expand upon this code and develop a numerical function which computes the TS spectra for arbitrary VDFs, which we refer to as the "arbitrary forward model". For a given VDF, the arbitrary forward model accepts an array of velocity values \(\{v_{j}\}\) and corresponding VDF values \(\{f_{j}\}\) which are meant to represent \(f_{j}=f(v_{j})\), where \(f(v)\) is the true continuous VDF. The Maxwellian forward model in PlasmaPy was used as a benchmark for testing and confirming the validity of our arbitrary forward model. ### Numerical Implementation Evaluating the spectral density in Eq. 3 requires computation of the integrals in Eqs. 4-5, which includes integrating over a Landau contour to avoid the pole at \(v=\omega/k\). We note that the susceptibility integrals can be recast into the form \[\chi(k,\omega)=\int_{-\infty}^{\infty}du\frac{g(u)}{\xi-u} \tag{17}\] with an appropriate change of variable \(v\to u(v).\) This integral can be rewritten using Plemelj's formula [1] as \[\chi(k,\omega)=i\pi g(\xi)+\text{p.v.}\int_{-\infty}^{\infty}du\frac{g(u)}{\xi-u}, \tag{18}\] where p.v.
\(\int\) refers to the Cauchy principal value of the integral, defined as \[\text{p.v.}\int_{a}^{b}\frac{g(u)}{\xi-u}dx=\\ \lim_{e\to 0}\left[\int_{a}^{\xi-e}\frac{g(u)}{\xi-u}dx+\int_{\xi+e}^{ b}\frac{g(u)}{\xi-u}dx\right]. \tag{19}\] If we pick some small but finite standoff value \(\epsilon=\phi,\) we can calculate the principal value by standard numerical integration in the ranges \([a,\xi-\phi]\) and \([\xi+\phi,b]\), plus a \begin{table} \begin{tabular}{c|c c c c c c c} Ex. \# & \(T_{e}\) [eV] & \(v_{e}\) [m/s] & \(T_{i}\) [eV] & \(v_{i}\) [m/s] & \(n_{e}\) [cm\({}^{-3}\)] & \(n_{i}\) [cm\({}^{-3}\)] & Resolution (\(\delta v_{e}\), \(\delta v_{i}\)) [m/s] \\ \hline 1 & 300 & 0 & 100 & 0 & \(4\times 10^{18}\) & \(4\times 10^{18}\) & \((2\times 10^{5},8\times 10^{3})\) \\ 2 & 100 & \(-3\times 10^{6}\) & 150 & 0 & \(4\times 10^{18}\) & \(4\times 10^{18}\) & — \\ 3 & 200 & 0 & 50 & \(2\times 10^{5}\) & \(4\times 10^{18}\) & \(4\times 10^{18}\) & — \\ 4 & 300 & 0 & (100, 200) & (0, 3)\(\times 10^{5}\) & \(4\times 10^{18}\) & \((1.6,2.4)\times 10^{18}\) & \((2\times 10^{5},8\times 10^{3})\) \\ 5 & 300 & 0 & 100 & 0 & \(4\times 10^{18}\) & \(4\times 10^{18}\) & \((8\times 10^{6},4\times 10^{5})\) \\ 6 & 300 & 0 & 100 & 0 & \(4\times 10^{18}\) & \(4\times 10^{18}\) & \((8\times 10^{4},1.2\times 10^{3})\) \\ \end{tabular} \end{table} Table 1: Example Maxwellian plasma parameters used for calculating VDFs and associated TS spectra. In all cases except Example 4, both the electron and ion VDFs are defined by a single Maxwellian. In Example 4, the ion VDF is composed of two Maxwellians, the parameters of which are indicated by ordered pairs. For Examples 1, 4, 5, and 6, the discretized velocity spacing \(\delta v_{e}\) and \(\delta v_{i}\) for the electron and ion VDFs, respectively, are also shown for reference. These parameters affect the input of the arbitrary forward model. For the purposes of these example spectra, 500 velocity points are used for each VDF. The Resolution column is empty for Examples 2 and 3 as only the Maxwellian forward model, which does not take discretized VDFs as input, is applied to those examples. Note that Examples 5 and 6 have the same plasma parameters as Example 1, but different resolutions in the numerical computation of the TS spectra, as shown in Fig. 4. Figure 2: Normalized Maxwellian electron and ion (proton) VDFs, and the resulting normalized EPW and IAW spectra for three different sets of plasma parameters. The different examples are arranged by row. All examples assume an electron-proton plasma with a number density of \(n=4\times 10^{18}\) cm\({}^{-3}\) for each species. The incident laser wavelength is \(532\) nm. The EPW spectra each have the wavelength range \([520,540]\) nm excluded to represent a notch filter. The Maxwellian plasma parameters are indicated in Table 1, with the example numbers in the subtitles of each plot indicating which set of parameters it is associated with. All TS spectra were computed using the Maxwellian forward model from PlasmaPy [23]. Each spectrum is normalized to have unit integral when computed in SI units, after accounting for the notch filter. correction term: \[\text{p.v.}\int_{a}^{b}\frac{g(u)}{\xi-u}dx\approx\\ \left[\int_{a}^{\xi-\phi}\frac{g(u)}{\xi-u}dx+\int_{\xi+\phi}^{b} \frac{g(u)}{\xi-u}dx\right]+2\phi g^{\prime}(\xi). 
\tag{20}\] Using this result, the susceptibility can be approximated by \[\chi(k,\omega)\approx\int_{-\infty}^{\xi-\phi}du\frac{g(u)}{\xi-u }+\int_{\xi+\phi}^{\infty}du\frac{g(u)}{\xi-u}\\ +i\pi g(\xi)+2\phi g^{\prime}(\xi). \tag{21}\] The function \(g(\xi)\) is proportional to \(\partial f/\partial v\), which is numerically computed from the discretized input VDFs using a finite difference scheme to 4th order precision. The two definite integrals in Eq. 21 cross no poles, so they can be evaluated using a standard numerical integration scheme such as a Riemann sum. To minimize the number of integration points while maintaining precision, we implement a scheme in which the points are finely spaced close to the pole, where the integrand varies rapidly, but coarsely spaced away from the pole, where the integrand varies slowly. ### Numerical Errors The estimation of the \(\chi\) integrals as done in Eq. 21 as part of the arbitrary forward model introduces some numerical error into the TS spectrum. Our algorithm uses the range of velocities defined for the input VDFs to determine an integration range. It also obtains the values of the integration points by interpolating from the input VDFs. This means that appropriate resolution of the input VDFs is necessary to minimize the error of the forward model. This constrains both the array spacing \(\delta v\) of the input VDFs as well as the total range over which the input VDFs are defined. For a VDF \(f(v)\), the array spacing must be sufficiently small to resolve the features of the VDF and to precisely compute \(\partial f/\partial v\). In general \(\delta v\) should be less than a characteristic velocity associated with the resolution of \(f(v)\), found by taking the ratio of the VDF with its derivative: \(\delta v\leq f(v)/f^{\prime}(v)\). The total range of the velocity array must be large enough so that the forward model includes all important features of the VDF. This can be expressed as \(f(v)/\max(f)\ll 1\) for all \(v\) outside the array range, where \(\max(f)\) is the maximum value that \(f(v)\) takes over all \(v\). Both of these conditions are necessary for the numerical calculation of the spectral density. If the input VDF spacing is too sparse, then the \(\chi\) integration may not have enough points to be accurate. If the VDF range is too small, then the finite integration is not a good approximation of the integral from \(-\infty\) to \(\infty\). Better quantification of the effects of the input velocity array on numerical integration will be the subject of future studies. We quantify the error in the Maxwellian case by comparing the output spectra generated by the arbitrary forward model using Maxwellian input VDFs to the corresponding output spectra from the Maxwellian forward model. Fig. 3 compares the forward-modeled spectra for examples with VDFs that are Maxwellians or linear combinations of Maxwellians. There are a few small differences in the spectra, but they have good agreement overall, validating the arbitrary forward model. The pointwise relative error of the forward-modeled spectra from the arbitrary forward model was \(\lesssim 5\%\) for the examples shown in Fig. 3. In addition, those differences can be decreased arbitrarily by increasing the resolution of the \(\chi\) integration scheme. To illustrate the numerical effects of velocity resolution and VDF range, Fig. 
4 shows the output spectra when the arbitrary forward model is applied to the same Maxwellian VDFs (Example 1 in Table 1), but where the velocity arrays on which the VDFs are defined have been modified. The ranges of the velocity arrays have been changed while leaving the array length fixed in both cases. In Example 5, the velocity range is much larger and the VDF is effectively defined by very few velocity bins. We refer to this as the "narrow" case. In Example 6, the velocity range is smaller so that the tails of the VDFs are cut off. We refer to this as the "wide" case. In the narrow case, the array spacing \(\delta v\) is several orders of magnitude larger than the characteristic velocity \(f(v)/f^{\prime}(v)\) at some values of \(v\), so the VDF is not well-resolved. In the wide case, the VDFs are wider than the velocity array. The VDFs are cut off at a factor of \(\sim 0.01\) of their maxima, which is much larger compared to \(\lesssim 10^{-10}\) in the well-resolved case shown in Fig. 3. In both cases, the resulting computed spectra deviate significantly from the Maxwellian forward model. The pointwise relative error of the well-resolved, narrow, and wide cases are shown in Fig. 5. Due to the \(\chi\) integration over the entire VDF, every point on the TS spectrum is affected by the entire VDF. Even in the wide case, where a subset of the VDF is very well-resolved, the pointwise error of the arbitrary forward model is worse across the entire TS spectrum. ## IV Fitting Algorithms The current release of the plasma physics Python package PlasmaPy [23] contains a built-in function which can fit Maxwellian plasma parameters to TS spectra using the DE algorithm from the Python package lmfit. This algorithm can also fit to sums of Maxwellians. Because the computation of the susceptibility function for Maxwellians is optimized by tabulation [24], using the Maxwellian forward model makes the fitting significantly faster than it would be with the arbitrary forward model. Therefore the main use case of the arbitrary forward model for fitting would be in fitting of TS spectra from non-Maxwellian VDFs. There are several possible approaches to a non-Maxwellian iterative fitting algorithm, which produce fits to varying degrees of arbitrariness. If we had sufficient computing power, we could in theory treat each point on a discretized VDF array as its own parameter, and then fit to that set of parameters. However, implementing this approach with sufficient resolution of the velocity axis becomes computationally infeasible due to the large number of free parameters. Another potential approach would be to provide a number of pre-defined parametrized non-Maxwellian VDF models with low-dimension parameter spaces that the algorithm can fit to. This has the issue of being too restrictive, and a better method in this case would be to design individual forward models based around each of the VDF models. The approach our algorithm employs is to accept custom user-defined parametrized VDF models and to then fit an input TS spectra assuming that the VDFs are in the form of those user-defined models. This allows the user to fit to different non-Maxwellian models while still keeping the parameter space small. We compared the fits obtained by this approach with fits obtained from assuming Maxwellian VDFs to determine how well the arbitrary method performs. As discussed in Sec. 
II, the EPW spectrum is negligibly affected by the ion VDFs, so the EPW spectrum is used first to fit the electron parameters while holding the ion parameters fixed at some arbitrary values that would be reasonable for the system. The IAW spectrum depends on both electron and ion parameters, so to reduce the time needed for fitting, the electron parameters can be fixed at the values obtained from fitting the EPW spectrum and the remaining ion parameters are fitted using the IAW spectrum. If necessary, this process could be further iterated with additional parameter constraints to refine the fit as needed. Figure 4: Arbitrary forward model applied to the same electron and proton VDFs at different velocity resolutions. (Top row) The velocity arrays that the VDFs are defined on are too sparse to properly resolve the VDF. (Bottom row) The velocity array does not cover a wide enough range of velocities to capture the entire VDF. For each, the corresponding TS spectra from both the Maxwellian (gray) and arbitrary (red) forward models are shown. The plasma parameters are listed in Table 1. Figure 3: VDFs from a single Maxwellian (top row) or sums of Maxwellians (bottom row), and their corresponding TS spectra from both the Maxwellian (gray) and arbitrary (red) forward models. The plasma parameters are indicated in Table 1 and indexed by the example numbers in the figure. ### Procedure for comparing fitting algorithms We apply a general procedure to test both fitting algorithms and compare the results. First, we prepare synthetic electron and ion VDFs which have the same form as physically relevant VDFs. We then compute the EPW and IAW TS spectra of a plasma with these VDFs. Gaussian noise is added to make the resulting spectra more realistic, with a standard deviation given by \(\sigma_{n}=0.1\max(P(\lambda))\), with \(P(\lambda)\) being the TS spectrum. The VDFs are then reconstructed based on the fitted parameters and are compared to the initial input VDFs, while the parameters and their variances are compared to the input parameters. After obtaining the results from the fitting algorithms, an MCMC sampler is used to explore the parameter space and compute the one- and two-dimensional posterior probability distributions of the parameters in order to estimate the confidence in the fitted values. After fitting the synthetic spectra, we analyze and compare the accuracy of the fits by calculating the \(\chi^{2}\) statistic between the best-fit VDFs and the input VDFs, as well as the percent error of quantities such as the density, bulk flow velocity, and the equivalent temperature of the VDFs. We characterize the error bars associated with the fit and use these to search for correlations and degeneracies in the arbitrary forward model using an MCMC sampler [25]. ### Validating the arbitrary fitting algorithm We first fit a TS spectrum derived from Maxwellian VDFs in order to benchmark the arbitrary fitting algorithm against the existing fitting algorithm in PlasmaPy. For this test, the TS spectrum from Maxwellians is gen Figure 5: Pointwise relative error of the arbitrary forward model compared to the Maxwellian forward model for three different velocity resolutions. Both the narrow case, where the VDF is poorly resolved, and the wide case, where the VDF tails are cut off, result in significantly higher errors than the well-resolved case. 
Figure 6: Synthetic TS spectrum derived from a Maxwellian electron VDF and Maxwellian proton VDF and fitted using both the arbitrary and Maxwellian fitting algorithms. The true VDFs which were used to generate the synthetic spectrum are shown in gray in the top row, and the resulting (notched) EPW and IAW spectra are shown in gray in the bottom row. The best-fit spectra and VDFs from the arbitrary fitting algorithm are shown in red and the best-fit spectra and VDFs from the Maxwellian fitting algorithm are shown in blue. The EPW and IAW spectra are scaled to have unit integrals when evaluated in SI units. erated with the Maxwellian forward model and fitted with both the Maxwellian and arbitrary fitting models as shown in Fig. 6. The plasma parameters used to generate the VDFs are given in Table 2. The fitted VDFs agree with each other and with the input VDFs, validating the arbitrary fitting algorithm for fitting TS from Maxwellian VDFs. We can also verify this quantitatively by computing goodness-of-fit metrics for the fitted TS spectra and the VDFs, as shown in Table 3. For the VDFs, we use the usual definition of \(\chi^{2}\) goodness of fit: \[\chi^{2}_{VDF}=\frac{1}{N}\sum_{N}(F-I)^{2}, \tag{22}\] where the sum is over the indices of the discretized arrays for input function \(I\) and fitted function \(F\). In the case of the TS spectra, the Gaussian noise which is artificially added to the input spectra will increase \(\chi^{2}\) on its own, so the usual definition of \(\chi^{2}\) is normalized by \(\sigma_{n}^{2}\) to account for this and allow comparison between the different fitting examples. Therefore for the TS spectra we use \[\chi^{2}_{TS}=\frac{1}{N\sigma_{n}^{2}}\sum_{N}(F-I)^{2}. \tag{23}\] Note that under this scheme it is still possible for a good fit to have \(\chi^{2}<1\), if the randomly generated synthetic noise happens to contribute less than the expected squared-error of \(\sigma_{n}^{2}.\) From Table 3 we see that the Maxwellian and arbitrary fitting algorithms are approximately on par with each other when fitting Maxwellians. ## V Results ### Comparisons between best-fit VDFs After validating the arbitrary fitting algorithm, we test it on synthetic TS spectra from plasmas with non-Maxwellian VDFs which are computed using the arbitrary forward model. The resulting fits are compared to those generated by applying the Maxwellian fitting algorithm to the same data. The plasma parameters for these non-Maxwellian parameters are also given in Table 2. First, we study the combination of a non-thermal 1D super-Gaussian distribution for the electron VDF and a drifting Maxwellian distribution for the ion VDF. As can be seen in Fig. 7, the electron VDFs are not well-fit by a Maxwellian. Additionally, the moments of the VDF, which are the electron density and the temperature, are not accurately recovered. The algorithm attempts to fit a Maxwellian to the flattened top of the super-Gaussian, but greatly over-estimates the electron temperature in doing so. Additionally, while the ions are Maxwellian, the ion parameters are still not fit correctly with the Maxwellian fitting algorithm. This is because the IAW spectrum has non-trivial \begin{table} \begin{tabular}{c|c c c c c c c c} Ex. 
\# & Ion species & \(T_{e}\) [eV] & \(v_{e}\) [m/s] & \(T_{i}\) [eV] & \(v_{i}\) [m/s] & \(n_{e}\) [cm\({}^{-3}\)] & \(n_{i}\) [cm\({}^{-3}\)] & \(\kappa_{C}\) & \(p_{e}\) \\ \hline 7 & p & 200 & \(10^{6}\) & 50 & \(10^{9}\) & \(4\times 10^{18}\) & \(4\times 10^{18}\) & — & — \\ 8 & p & 300 & 0 & 50 & \(10^{5}\) & \(4\times 10^{18}\) & \(4\times 10^{18}\) & — & 3 \\ 9 & (p, C\({}_{12}^{6+}\)) & 200 & 0 & (100, 300) & \((2\times 10^{5},0)\) & \(4\times 10^{18}\) & \((2.4,0.267)\times 10^{18}\) & 2 & — \\ \end{tabular} \end{table} Table 2: Plasma parameters used in the fitting examples in Figs. 6-8. For Fig. 7, the \(p_{e}\) electron parameter follows the form of the 1D super-Gaussian in Eq. 14, and for Fig. 8, the \(\kappa_{C}\) parameter for the carbon ions is as defined in Eq. 9. Figure 7: TS spectrum from non-Maxwellian (super-Gaussian) electrons and Maxwellian protons fitted using the arbitrary and Maxwellian fitting algorithms. The same plot organization and color scheme are used as in Fig. 6. dependence on the electron VDF, so that the errors in the electron VDF effectively propagate into the IAW fitting. We also see this in Table 3, as the \(\chi^{2}_{IAW}\) and \(\chi^{2}_{VDF}\) values are significantly higher for the Maxwellian fitting algorithm than for the arbitrary fitting algorithm. For the arbitrary fitting algorithm, because the correct models are used for both the electrons and the ions, the VDFs are both well-fit. However, if the wrong VDF model were to be used for the electrons, the arbitrary fitting algorithm could suffer from the same error propagation issue. This emphasizes the importance of using the correct VDF models for fitting. Second, we study the synthetic spectrum of a drifting Maxwellian electron VDF, a drifting Maxwellian proton VDF, and a kappa-distributed ion (C\({}^{+6}\)) VDF. The relative densities of the protons and carbon ions are chosen to make the plasma quasi-neutral. In Fig. 8, we see that electron VDF is well-fit by the Maxwellian fitting algorithm, but the proton and carbon VDFs are fitted inaccurately. Similarly to the previous example where the poorly-fit non-Maxwellian electron VDF impacted the proton VDF fitting, here the non-Maxwellian carbon VDF causes a similar effect. Although the best-fit ion VDFs for the Maxwellian fitting algorithm are quite different from the corresponding best-fit VDFs from the arbitrary fitting algorithm, we see that the resulting best-fit IAW spectra look sufficiently similar to be within noise effects of each other. This can also be seen in Table 3, as the \(\chi^{2}_{IAW}\) values are similar for the two fitting algorithms, but the \(\chi^{2}_{VDF}\) values are significantly higher for the Maxwellian fitting algorithm. These results suggest a near-degeneracy in the TS forward model. The existence of near-degeneracies in the TS forward model raises the risk of getting a good fit on the TS spectrum which corresponds to an inaccurate VDF. This can be mitigated by restricting the fitting algorithm to a physically motivated VDF model and using other diagnostic results to put limits on the range of plasma parameters. The following section discusses how to identify degeneracies arising from different VDF models. Figure 8: TS spectrum from Maxwellian electrons, Maxwellian protons, and non-Maxwellian (kappa-distributed) carbon ions fitted using the arbitrary and Maxwellian fitting algorithms. The same plot organization and color scheme are used as in Fig. 
6, except the second row now shows an additional figure for the carbon ion VDF. \begin{table} \begin{tabular}{c|c c c c|c c c c} & \multicolumn{4}{c|}{Maxwellian Fitting Algorithm} & \multicolumn{4}{c}{Arbitrary Fitting Algorithm} \\ Ex. & \(\#\) & \(\chi^{2}_{\rm{EPW}}\) & \(\chi^{2}_{\rm{VDF}}\) & \(\chi^{2}_{\rm{LAW}}\) & \(\chi^{2}_{\rm{VDF}}\) & \(\chi^{2}_{\rm{EPW}}\) & \(\chi^{2}_{\rm{LAW}}\) & \(\chi^{2}_{\rm{VDF}}\) \\ \hline 7 & 1.005 & \(2.97\times 10^{-20}\) & 1.203 & \(4.87\times 10^{-16}\) & 1.006 & \(3.58\times 10^{-20}\) & 1.200 & \(6.19\times 10^{-16}\) \\ 8 & 1.195 & \(6.92\times 10^{-18}\) & 1.765 & \(7.29\times 10^{-15}\) & 0.936 & \(5.34\times 10^{-20}\) & 1.121 & \(2.475\times 10^{-16}\) \\ 9 & 1.037 & \(3.76\times 10^{-19}\) & 0.915 & \((0.29,1.75)\times 10^{-12}\) & 1.036 & \(3.73\times 10^{-19}\) & 0.893 & \((4.57,5.19)\times 10^{-14}\) \\ \end{tabular} \end{table} Table 3: Values of the \(\chi^{2}\) goodness of fit parameter for the different fitted TS spectra and VDFs for both fitting algorithms. The \(\chi^{2}\) is calculated separately for the EPW spectrum, IAW spectrum, electron VDF (eVDF), and ion VDFs (iVDF). The listed values for the VDFs are given by the usual definition of \(\chi^{2}\) (Eqn. 22). The listed values for the TS spectra are normalized by the synthetic noise and also have the noise subtracted out. The arbitrary fitting algorithm performs at least as well as the Maxwellian fitting algorithm in all cases shown here. ### Uncertainty Analysis After finding the best-fit plasma parameters, we analyze the robustness and uncertainty in the fits. This allows us to determine which aspects of the fits we can be confident in, as well as reveal TS spectrum degeneracies which could make the fits inaccurate. The Python module emcee[25] was used to estimate the posterior probability distributions (PPDs) of the fits using an MCMC sampling algorithm, which is given initial values at the best-fit values from the arbitrary fitting algorithm. Using the PPDs we estimate error bars for each fitted parameter and show that when fitting TS spectra from non-Maxwellian distributions, fits using the arbitrary fitting algorithm derived plasma parameters with greater accuracy than the Maxwellian fitting algorithm. Fig. 9 shows the PPD associated with the fitting of the EPW spectrum in Fig. 6. We see that the 1D PPD projections (Figs. 6a,c,f) approximately follow Gaussian distributions. The two-dimensional slices (Figs. 6b,d,e) also look approximately symmetric, indicating that the variables are effectively fitted independently of each other. From this we conclude that the fitted parameters which were output by the arbitrary fitting algorithm are accurate, as there is one clear location in parameter space where the posterior probability density reaches a maximum, and all marginal distributions also reach their maxima. The 95% confidence intervals (dashed lines) for each parameter are shown for reference. Both fitting algorithms produce approximately equal-sized uncertainties for each parameter, and the true values are contained within these uncertainties. This benchmarks the arbitrary fitting algorithm to be approximately on par with the Maxwellian fitting algorithm when fitting TS spectra from Maxwellians. Some of the best-fit values in Fig. 
9 visually appear far away from the true parameters, but this is due to a rescaling of the relevant axes in the figure in order to see the PPD features, and does not represent the actual ranges of parameters over which the algorithm was allowed to explore (this is especially the case with the velocity parameter, which was fit over a large range). For instance, the fitting algorithm used to produce the results of Fig. 9 ranged over electron temperatures of 10-2000 eV, drift velocities of \(-10^{7}\)-\(10^{7}\) m/s, and electron densities of \(10^{17}\)-\(10^{19}\) cm\({}^{-3}\). When taking the full ranges into account, the errors of the best fit parameters are on the order of \(\sim 1\%\) of the total parameter range, which shows that the algorithms find the correct plasma parameters even when allowed to explore over a very large region of parameter space. Fig. 10 shows the PPD corresponding to the fitting of the C\({}^{6+}\) ion parameters from the IAW spectrum in Fig. 7 using the arbitrary fitting algorithm. In order to focus on the kappa-distributed carbon ion VDF parameters, the electron and proton parameters were all held fixed at best-fit values during the MCMC sampling. Unlike the Maxwellian case, the 1D marginal distributions (Figs. 10a,c,f) are no longer Gaussian and the two-dimensional slices (Figs. 10b,d,e) illustrate a strong correlation between the fitted electron temperature and the spectral index \(\kappa\). This is because both the temperature and \(\kappa\) affect the width of the VDF, so they are degenerate. We see that the 95% confidence intervals for \(T\) and \(\kappa\) are large, which in the absence of other information might suggest that the fit is not robust. However, from the full PPD we see that the single-parameter confidence intervals are misleading due to the correlated parameters. It is possible that if the initial guesses for the DE fitting algorithm are chosen poorly, the algorithm would land somewhere along the curve where the value of the PPD is high, but far from the true values, therefore resulting in inaccurate fitted parameters. In practice, physical constraints obtained by analyzing other experimental data must be applied as boundaries on the fitting parameters to break this degeneracy. We also see that the Maxwellian-fitted temperature is significantly lower than the true equivalent temperature. This can be explained by the fact that at the location of the carbon spectral peaks at \(\xi\approx 1.57\) in this example, the imaginary (Landau damping) term of \(\chi\) for a kappa distribution is less than that of a Maxwellian (see Fig. 1), which the fitting algorithm compensates for by lowering the Maxwellian temperature. It is important to note that while the Maxwellian Landau damping term remains above the corresponding term for super-Gaussians at high \(\xi\) (corresponding to wavelengths far from the incident laser wavelength, see Fig. 1), the Maxwellian Landau damping term actually drops below the kappa Landau damping term at high \(\xi\). This indicates that in a parameter regime where the VDF is a kappa distribution but the spectral peaks lie at high \(\xi\), the error in the Maxwellian fitting could be reversed, which further illustrates the problems with applying the Maxwellian fitting algorithm to TS from non-Maxwellian plasmas. 
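
For readers who wish to reproduce this style of uncertainty analysis, the sketch below shows a minimal emcee sampling loop of the kind described above. The Gaussian-peak stand-in for the forward model, the flat-prior bounds, and all parameter values are hypothetical placeholders used only to make the example self-contained; in practice the log-probability would call the arbitrary forward model and compare against the measured spectrum with the noise level \(\sigma_{n}\).

```python
import numpy as np
import emcee

rng = np.random.default_rng(0)

# Toy stand-in for a forward model; in practice this would be the arbitrary
# (non-Maxwellian) TS forward model evaluated on the measured wavelength grid.
def forward_model(theta, x):
    amp, width = theta
    return amp * np.exp(-x ** 2 / (2.0 * width ** 2))

# Synthetic "measurement" with 10% Gaussian noise, mirroring the synthetic spectra above.
x = np.linspace(-5.0, 5.0, 200)
truth = np.array([1.0, 1.3])
sigma_n = 0.1 * forward_model(truth, x).max()
data = forward_model(truth, x) + sigma_n * rng.normal(size=x.size)

bounds = [(0.1, 10.0), (0.1, 10.0)]   # flat prior inside physically motivated limits

def log_probability(theta, x, data, sigma_n):
    if not all(lo < t < hi for t, (lo, hi) in zip(theta, bounds)):
        return -np.inf
    resid = data - forward_model(theta, x)
    return -0.5 * np.sum(resid ** 2 / sigma_n ** 2)

# Walkers start in a small ball around the (here: known) best-fit values,
# analogous to initializing the sampler at the DE best fit.
nwalkers, ndim = 32, 2
p0 = truth * (1.0 + 1e-3 * rng.normal(size=(nwalkers, ndim)))

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability, args=(x, data, sigma_n))
sampler.run_mcmc(p0, 3000)
samples = sampler.get_chain(discard=500, flat=True)   # posterior samples for corner plots
print(samples.mean(axis=0), samples.std(axis=0))
```
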
Although the arbitrary fitting algorithm can perform better than the Maxwellian fitting algorithm in fitting TS spectra from non-Maxwellian VDFs, we have shown that the presence of non-Maxwellian VDF models can make the results of the fitting difficult to interpret without taking into account the details of how the different parameters of the non-Maxwellian VDF model correlate and affect the forward model. The use of the MCMC sampler on a synthetic distribution is one means by which we can study these correlations for given VDF models. ## VI Conclusions We have developed an arbitrary forward model which can compute integral-normalized TS spectra from arbitrary discretized electron and ion VDFs. The arbitrary forward model is benchmarked against a forward model which uses a tabulated plasma dispersion relation to compute the TS spectra for VDFs which are Maxwellian or linear combinations of Maxwellians. We discuss the numerical error associated with the arbitrary forward model Figure 10: 1D and 2D projections of the PPDs associated with applying the arbitrary and Maxwellian fitting algorithms to an IAW spectrum from a kappa-distributed ion VDF. The same color scheme is used as in Fig. 9. Note that the \(\kappa\) parameter is not fitted by the Maxwellian fitting algorithm, so there are no red/magenta elements in the plots associated with the \(\kappa\) parameter. Figure 9: 1D and 2D projections of the PPDs associated with applying the arbitrary and Maxwellian fitting algorithms to a TS spectrum from Maxwellian VDFs. The plots labeled a), c), and f) are the 1D projections. The plots labeled b), d), and e) are the 2D projections. The green dots and solid lines mark the true values of the plasma parameters. The red dots and solid lines indicate the best-fit values obtained from the arbitrary fitting algorithm, and the blue dots and lines are the best-fit parameters from the Maxwellian fitting algorithm. The PPD projections from the arbitrary and Maxwellian fitting algorithms are in gray and light red, respectively, and used to estimate 95% confidence intervals for the fitted parameters in both algorithms, which are the dashed red and blue lines. The 1D PPD projections for the arbitrary fitting algorithm are fitted by Gaussians, shown by the solid black curves. For comparison, the parameters at which the PPDs are maximized are given by the magenta dots and solid lines for the Maxwellian fitting algorithm, and the cyan dots and solid lines for the arbitrary fitting algorithm. and provide examples of how to minimize these effects. The arbitrary forward model is used to implement an iterative fitting algorithm that accepts TS spectra as inputs and recovers plasma parameters defined by arbitrary user-defined VDF models. We show that the arbitrary fitting algorithm performs similarly to a Maxwellian TS fitting algorithm for Maxwellian VDFs, but outperforms the Maxwellian fitting algorithm for non-Maxwellian VDFs. The arbitrary forward model can be used to fit the plasma parameters associated with any parameterizable non-Maxwellian VDF model which satisfies the basic TS assumptions of quasi-neutrality and an unmagnetized plasma (although this assumption could be relaxed with a suitable extension of the TS analytic model). The fitting algorithm can be run with an MCMC sampler to estimate the PPD over the parameter space. 
This enables us to estimate the uncertainties of the fitted parameters and show that the numerical errors associated with the arbitrary forward model do not have a significant impact on the accuracy of the fitting. In the case of kappa VDFs, we also determine from the PPD that several parameters are linearly-dependent (i.e. degenerate). Use of the arbitrary fitting algorithm to fit synthetic spectra from other non-Maxwellian VDF models could also reveal correlated parameters in those models. Further work is needed to gain a better understanding of which VDF models contain these correlations and methods by which we can best mitigate them. Each VDF model could be examined separately using the tools we have developed in order to determine what parameter correlations are relevant in the PPDs. In addition, the computationally-heavy calculation of \(\chi\) and the resulting runtime increase could make the arbitrary forward model inconvenient to use in fitting experimental data. Further work should be done in either optimizing the speed of the forward model or in designing other forward model schemes. For instance, forward models which are tailored for specific non-Maxwellian VDF models could be developed and benchmarked against our arbitrary forward model. Although these types of forward models would only be suited for those particular VDF models and thus inherently less generalized, it is possible that additional speed optimizations can be made in these cases. ###### Acknowledgements. We thank R. Follett for many valuable discussions related to this work. This work was supported by the U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA) under Award Nos. DE-NA0004033, DE-NA0003856, and DE-SC0020431, the University of Rochester, and the New York State Energy Research and Development Authority. This work was also supported by NASA under Grant No. 80NSSC19K0493. This report was prepared as an account of work sponsored by an agency of the U.S. Government. Neither the U.S. Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the U.S. Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the U.S. Government or any agency thereof. This research made use of PlasmaPy version 2023.1.0, a community-developed open source Python package for plasma research and education (PlasmaPy Community et al. 2023).
2304.02985
The Boolean quadratic forms and tangent law
In \cite{EjsmontLehner:2020:tangent} we study the limit sums of free commutators and anticommutators and show that the generalized tangent function $$ \frac{\tan z}{1-x\tan z} $$ describes the limit distribution. This is the generating function of the higher order tangent numbers of Carlitz and Scoville \cite[(1.6)]{CarlitzScoville:1972} which arose in connection with the enumeration of certain permutations. In the present paper we continue to study the limit of weighted sums of Boolean commutators and anticommutators and we show that the shifted generalized tangent function appears in a limit theorem. In order to do this, we shall provide an arbitrary cumulants formula of the quadratic form. We also apply this result to obtain several results in a Boolean probability theory.
Wiktor Ejsmont, Patrycja Hęćka
2023-04-06T10:40:15Z
http://arxiv.org/abs/2304.02985v2
# The Boolean quadratic forms and tangent law ###### Abstract. In [21] we study the limit sums of free commutators and anticommutators and show that the generalized tangent function \[\frac{\tan z}{1-x\tan z}\] describes the limit distribution. This is the generating function of the higher order tangent numbers of Carlitz and Scoville [16, (1.6)] which arose in connection with the enumeration of certain permutations. In the present paper we continue to study the limit of weighted sums of Boolean commutators and anticommutators and we show that the shifted generalized tangent function appears in a limit theorem. In order to do this, we shall provide an arbitrary cumulants formula of the quadratic form. We also apply this result to obtain several results in a Boolean probability theory. Key words and phrases: Boolean infinite divisibility, central limit theorem, tangent numbers, Euler numbers, zigzag numbers, cotangent sums 2010 Mathematics Subject Classification: Primary: 46L54. Secondary: 62E10 Wiktor Ejsmont This research was funded in part by Narodowe Centrum Nauki, Poland WEAVE-UNISON grant 2022/04/Y/ST1/00008. ###### Abstract We consider the \(\chi^{2}\)-conjecture of the \(\chi^ are called _Boolean cumulants_ of the random variable \(X\). Using this, the Boolean convolution can be computed via the identity \[H_{\mu\ast\nu\nu}(z)=H_{\mu}(z)+H_{\nu}(z), \tag{2.4}\] see [49]. ### Boolean infinite divisibility In analogy with classical probability, a probability measure \(\mu\) on \(\mathbb{R}\) is said to be _Boolean infinitely divisible_ (or BID for short) if for each \(n\in\{1,2,3,\dots\}\) there exists a probability measure \(\mu_{n}\) such that \(\mu=\mu_{n}\uplus\mu_{n}\uplus\cdots\uplus\mu_{n}\) (\(n\)-fold Boolean convolution). Boolean infinite divisibility of a measure \(\mu\) is characterized by the property that its _self-energy transform_\(\phi_{\mu}(z)=zH_{\mu}(1/z)\) has a Nevanlinna-Pick representation [49] \[\phi_{\mu}(z)=\gamma+\int_{\mathbb{R}}\frac{1+xz}{z-x}\,d\rho(x) \tag{2.5}\] for some \(\gamma\in\mathbb{R}\) and some nonnegative finite measure \(\rho\). This \(\rho\) is called the _Boolean Levy measure_ of \(\mu\). It is worth to mention that all probability measures on \(\mathbb{R}\) are Boolean infinitely divisible. ### Interval Partitions We recall some facts about interval partitions. Let \(S\) be an ordered set. Then \(\pi=\{V_{1},\dots,V_{p}\}\) is a partition of \(S\), if the \(V_{i}\neq\emptyset\) are ordered and disjoint sets \(V_{i}=(v_{1},\dots,v_{k})\), where \(v_{1}<\dots<v_{k}\), whose union is \(S\). We write \(V\in\pi\) if \(V\) is a class of \(\pi\) and we say that \(V\) is a _block of \(\pi\)_. Any partition \(\pi\) defines an equivalence relation on \(S\), denoted by \(\sim_{\pi}\), such that the equivalence classes are the blocks \(\pi\). That is, \(i\sim_{\pi}j\) if \(i\) and \(j\) belong to the same block of \(\pi\). A block \(V\) of a partition \(\pi\) is called an _interval_ block if \(V\) is of the form \(V=(k,k+1,\dots,k+l)\) for \(k\geq 1\) and \(0\leq l\leq n-k\). A block of \(\pi\) is called a _singleton_ if it consists of one element. Let \(\operatorname{Sing}(\pi)\) denote the set of all singletons of \(\pi\). A partition \(\pi\) is called _interval partition_ if its every block is an interval. The set of interval partitions of \(S\) is denoted by \(\mathcal{I}(S)\), in the case where \(S=[n]:=\{1,\dots,n\}\) we write \(\mathcal{I}(n):=\mathcal{I}([n])\). 
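
Since an interval partition of \([n]\) is determined by the sizes of its consecutive blocks, interval partitions are in bijection with compositions of \(n\); the short sketch below enumerates them, which can be convenient for small numerical experiments with the formulas of this section. It is an illustrative aid only and not part of the mathematical development.

```python
def interval_partitions(n):
    """Enumerate the interval partitions of {1, ..., n}.

    Each interval partition is determined by its block sizes read left to
    right, i.e. by a composition (c_1, ..., c_k) of n; every block is then a
    run of consecutive integers.
    """
    def compositions(m):
        if m == 0:
            yield ()
            return
        for first in range(1, m + 1):
            for rest in compositions(m - first):
                yield (first,) + rest

    for comp in compositions(n):
        blocks, start = [], 1
        for size in comp:
            blocks.append(tuple(range(start, start + size)))
            start += size
        yield blocks

# There are 2^(n-1) interval partitions of [n].
for n in range(1, 7):
    assert len(list(interval_partitions(n))) == 2 ** (n - 1)

print(list(interval_partitions(3)))
# [[(1,), (2,), (3,)], [(1,), (2, 3)], [(1, 2), (3,)], [(1, 2, 3)]]
```
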
\(\mathcal{I}(n)\) is a lattice under refinement order, where we say \(\pi\leq\rho\) if every block of \(\pi\) is contained in a block of \(\rho\). The maximal element of \(\mathcal{I}(n)\) under this order is the partition consisting of only one block and it is denoted by \(\hat{1}_{n}\). On the other hand, the minimal element \(\hat{0}_{n}\) is the unique partition whose every block is a singleton. Sometimes it is convenient to visualize partitions as diagrams, for example \(\hat{1}_{n}=\overline{\mid\dots\mid}\,\hat{0}_{n}=\{\,\,\mid\dots\mid}\). Two specific partitions will play a particularly important role, namely the _standard matching_\(\hat{1}_{2}^{n}=\mathsf{\mathsf{\mathsf{r}}}\cap\dots\mathsf{\mathsf{\mathsf{ \mathsf{r}}}}\in\mathcal{I}(2n)\) and \[\mathsf{\mathsf{\mathsf{\mathsf{r}}}}\cap\dots\mathsf{\mathsf{\mathsf{ \mathsf{\mathsf{r}}}}}\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{ \mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsf{\mathsfmathsf{\mathsfmathsfmathsfmathsfmathsfmathsfmathsf { \ ### Boolean cumulants Given a noncommutative probability space \((\mathcal{A},\varphi)\), the _Boolean cumulants_ are multilinear functionals \(K_{n}:\mathcal{A}^{n}\to\mathbb{C}\) defined implicitly in terms of the mixed moments by the relation \[\varphi(X_{1}X_{2}\ldots X_{n})=\sum_{\pi\in\mathcal{I}(n)}K_{\pi}(X_{1},X_{2}, \ldots,X_{n}), \tag{2.6}\] where \[K_{\pi}(X_{1},X_{2},\ldots,X_{n}):=\Pi_{B\in\pi}K_{|B|}(X_{i}:i\in B). \tag{2.7}\] Sometimes we will abbreviate the univariate cumulants as \(K_{n}(X)=K_{n}(X,\ldots,X)\). Boolean cumulants provide a powerful technical tool to investigate Boolean random variables. This is due to the basic property of _vanishing of mixed cumulants._ By this we mean the property that \[K_{n}(X_{1},X_{2},\ldots,X_{n})=0\] for any family of random variables \(X_{1},X_{2},\ldots,X_{n}\) which can be partitioned into two mutually Boolean nontrivial subsets. For Boolean sequences this can be reformulated as follows. Let \((X_{i})_{i\in\mathbb{N}}\) be a sequence of Boolean random variables and let \(h:[r]\to\mathbb{N}\) be a map. We denote by \(\ker h\) the set partition which is induced by the equivalence relation \[k\sim_{\ker h}l\Longleftrightarrow h(k)=h(l).\] Similarly, for a multiindex \(\underline{i}=i_{1}i_{2}\ldots i_{r}\) we denote its kernel \(\ker\underline{i}\) by the relation \(k\sim l\) if \(i_{k}=i_{l}\). In this notation, vanishing of mixed cumulants implies that \[K_{\pi}(X_{h(1)},X_{h(2)},\ldots,X_{h(r)})=0\text{ unless }\ker h\geq\pi. \tag{2.8}\] Our main technical tool is the Boolean version, due to Lehner [39], of the classical formula of James and Leonov/Shiryaev [34, 40] which expresses the cumulants of products in terms of individual cumulants. **Theorem 2.2**.: _Let \(r,n\in\mathbb{N}\) and \(i_{1}<i_{2}<\cdots<i_{r}=n\) be given and let_ \[\rho=\{(1,2,\ldots,i_{1}),(i_{1}+1,i_{1}+2,\ldots,i_{2}),\ldots,(i_{r-1}+1,i_{ r-1}+2,\ldots,i_{r})\}\in\mathcal{I}(n)\] _be the induced interval partition. Consider now random variables \(X_{1},\ldots,X_{n}\in\mathcal{A}\). Then the Boolean cumulants of the products can be expanded as follows:_ \[K_{r}(X_{1}\ldots X_{i_{1}},\ldots,X_{i_{r-1}+1}\ldots X_{n})=\sum_{\pi\in \mathcal{I}(n)\atop\pi\vee\rho=1_{n}}K_{\pi}(X_{1},\ldots,X_{n}). \tag{2.9}\] Next, [46, Proposition 3.3] investigates the properties of Boolean cumulants with scalars among their entries. This result is unexpected, because the identity operator is not Boolean independent of another operator. 
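
In the single-variable case, grouping the interval partitions in (2.6) by the size of their first block yields the recursion \(m_{n}=\sum_{k=1}^{n}K_{k}\,m_{n-k}\) with \(m_{0}=1\), where \(m_{n}=\varphi(X^{n})\). The sketch below converts between moments and Boolean cumulants using this standard reformulation; it is included only as an illustration.

```python
def moments_from_boolean_cumulants(K):
    """Given [K_1, ..., K_N], return [m_1, ..., m_N] via m_n = sum_k K_k m_{n-k}."""
    m = [1.0]                                     # m_0 = 1
    for n in range(1, len(K) + 1):
        m.append(sum(K[k - 1] * m[n - k] for k in range(1, n + 1)))
    return m[1:]

def boolean_cumulants_from_moments(m):
    """Inverse recursion: K_n = m_n - sum_{k=1}^{n-1} K_k m_{n-k}."""
    full = [1.0] + list(m)
    K = []
    for n in range(1, len(m) + 1):
        K.append(full[n] - sum(K[k - 1] * full[n - k] for k in range(1, n)))
    return K

# The symmetric Bernoulli law (delta_{-1} + delta_{1})/2 has moments 0, 1, 0, 1, ...
# and its Boolean cumulants vanish beyond order two:
print(boolean_cumulants_from_moments([0.0, 1.0, 0.0, 1.0, 0.0, 1.0]))
# -> [0.0, 1.0, 0.0, 0.0, 0.0, 0.0]

# Round trip on arbitrary cumulants:
K = [1.0, 0.5, -0.25, 2.0]
back = boolean_cumulants_from_moments(moments_from_boolean_cumulants(K))
assert all(abs(a - b) < 1e-12 for a, b in zip(back, K))
```
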
**Proposition 2.3**.: _If \(n,m\geq 1\) and \(X\in\mathcal{A}^{n}\), \(Y\in\mathcal{A}^{m}\), then_ 1. \(K_{m+1}(1,Y)=0\)_,_ 2. \(K_{m+1}(X,1)=0\)_,_ 3. \(K_{n+m+1}(X,1,Y)=K_{n+m}(X,Y)\) Figure 1. Examples of Kreweras complementation and closure partitions ### Some probability distributions Let us now recall basic properties of some specific probability distributions which play prominent roles in the present paper. #### 2.7.1. Boolean Gaussian distribution A non-commutative random variable \(X\) is said to be _Boolean Gaussian or Boolean normal_ if \(K_{r}(X)=0\) for \(r>2\). The reason for the latter is the fact that its Boolean cumulants \(K_{r}(X)=0\) for \(r>2\) and it appears in the Boolean version of the central limit theorem. The Boolean Gaussian law with mean zero and variance \(a^{2}\) has distribution \[\frac{1}{2}\delta_{-a}+\frac{1}{2}\delta_{a}.\] Its Cauchy-Stieltjes transform is given by the formula \[G_{\mu}(z)=\frac{1}{1-a^{2}/z}.\] For the purpose of this article we say that a family \(X_{i}\)\(i\in[n]\) is a _Boolean standard normal family_ if \(K_{1}(X_{i})=K_{2}(X_{i})=1\), \(K_{r}(X_{i})=0\) for \(r>2\) and \(X_{i}\) are Boolean independent. **Remark 2.4**.: (1). We assume that the Boolean standard normal family have mean one, because as one can see later, the cumulants of quadratic forms are zero whenever mean is zero. (2). In contrast to the classical convolution, it is not true that, for arbitrary \(a\in\mathbb{R}\), the convolution \(\mu\uplus\delta_{a}\) is equal to the shift of measure \(\mu\) by the amount \(a\). This fact is a consequence of Proposition (2.3). For example, one has \[(\frac{1}{2}\delta_{-a}+\frac{1}{2}\delta_{a})\uplus\delta_{c}=\frac{1}{2} \left(\Big{(}1+\frac{c}{\sqrt{4a^{2}+c^{2}}}\Big{)}\delta_{(c+\sqrt{4a^{2}+c^ {2}})/2}+\Big{(}1-\frac{c}{\sqrt{4a^{2}+c^{2}}}\Big{)}\delta_{(c-\sqrt{4a^{2}+ c^{2}})/2}\right).\] #### 2.7.2. Boolean Poisson distribution A non-commutative random variable \(X\) is said to be Boolean Poisson variable if it has distribution \(\nu=\nu(\lambda,\alpha)\) defined by the formula \[\nu=\frac{1}{\lambda+1}(\delta_{0}+\lambda\delta_{\alpha(\lambda+1)}), \tag{2.10}\] where \(\alpha,\lambda\geq 0\). The parameters \(\lambda\) and \(\alpha\) are called the rate and the jump size, respectively. It is easy to see that if \(X\) is Boolean Poisson, \(\nu(\lambda,\alpha)\), then \(K_{n}(X)=\alpha^{n}\lambda\). Therefore its \(H\)-transform has the form \[H(z)=\frac{\lambda\alpha z}{1-\alpha z}.\] #### 2.7.3. Even Poisson distribution We call an element \(X\in\mathcal{A}\)_even Poisson distribution_ if its even Boolean cumulants are equal \(K_{2n}(X)=1.\) The basic example of such a law is Boolean Poisson distribution with \(\lambda=\alpha=1.\) #### 2.7.4. Identically distributed random variables In this article, by identically distributed random variables we mean random variables with the same cumulant, which are uniformly bounded in the sample size. Precisely, we call that the elements \(X_{1},\ldots,X_{n}\in\mathcal{A}\) are _identically distributed_ if their all cumulants are equal \(K_{r}(X_{1})=\cdots=K_{r}(X_{n})\) for all \(r\in\mathbb{N}\) (they can depend on \(n\)), and \(|K_{r}(X_{i})|<C_{r}\), where \(C_{r}\) is a universal constant independent of \(n\). This definition play an important role in the section about the limit theorems of quadratic forms. ### Special matrices and lemmas Let \(M_{n}(\mathbb{C})\) and \(M_{n}^{sa}(\mathbb{C})\) denote the set of a scalar and self-adjoint matrices. 
For scalars \(a,b,c\in\mathbb{C}\) we denote by \(\left[\begin{smallmatrix}c&a\\ b&c\end{smallmatrix}\right]_{n}\in M_{n}(\mathbb{C})\) the matrix whose diagonal elements are equal to \(c\), whose upper-triangular entries are equal to \(a\) and whose lower-triangular elements are equal to \(b\), respectively. For simplicity of notation, we use the letter \(J_{n}\), \(P_{n}\) and \(B_{n}\) to denote the matrices \[J_{n}=\left[\begin{smallmatrix}1&1\\ 1&1\end{smallmatrix}\right]_{n},\ P_{n}=\frac{1}{n}J_{n}\text{ and }B_{n}=i\frac{1}{n}\left[ \begin{smallmatrix}0&1\\ -1&0\end{smallmatrix}\right]_{n}.\] In mathematics, a stochastic matrix is a square matrix used to describe the transitions of a Markov chain. Each of its entries is a nonnegative real number representing a probability, i.e., each row summing to \(1\). For our purpose, we introduce a _zero sum matrix_ which is a square matrix in \(M_{n}(\mathbb{C})\), with each row summing to \(0\). For our next result we will use the following lemmas about matrices. **Lemma 2.5**.: _Let \(A=[a_{i,j}]_{i,j=1}^{n}\in M_{n}^{sa}(\mathbb{C})\) and \(\Lambda=\operatorname{diag}(A)=\operatorname{diag}(a_{1,1},\ldots,a_{n,n})\) be its diagonal matrix. If_ \[\operatorname{Tr}(J_{n}\Lambda^{k}A\Lambda^{k})=0\text{ for all even }k\in \mathbb{N},\] _then \(A\) is a constant diagonal matrix (= multiple of an identity matrix)._ Proof.: By direct calculation we have \[\operatorname{Tr}(J_{n}\Lambda^{k}A\Lambda^{k})=\sum_{i,j}(a_{i,i}a_{j,j})^{k }a_{i,j}=0.\] The proof is by contradiction, so assume that \(A\) is not a constant diagonal matrix. Let \(C=\max_{i}|a_{i,i}|\) and \(max_{\Lambda}=\{i|i\in[n]\text{ and }|a_{i,i}|=C\}\) be the set of indexes of maximal diagonal elements, then \[\sum_{i,j}(a_{i,i}a_{j,j}/C^{2})^{k}a_{i,j}=0.\] Next, let \(k\) go to infinity, then in limit, we get \[\sum_{i\in max_{\Lambda}}a_{i,i}=\#max_{\Lambda}\times C=0,\] which implies \(C=0\) for all \(i\in[n]\), which contradicts the assumption. **Lemma 2.6**.: _Let \(A=[a_{i,j}]_{i,j=1}^{n}\in M_{n}^{sa}(\mathbb{C})\), then_ \[\operatorname{Tr}(J_{n}A^{k})=0\text{ for all }k\in\mathbb{N}\Longleftrightarrow \operatorname{Tr}(J_{n}A^{2})=0\Longleftrightarrow A\text{ is zero sum matrix}.\] Proof.: We will first prove the equivalence of first two conditions; then we show equivalence of last two conditions. We use the notation * \(\{\lambda_{1},\ldots,\lambda_{n}\}\) is the spectrum of \(A\), the eigenvalues are real; * \(\{U_{1},\ldots,U_{n}\}\) is the orthonormal basis (or unitary basis) of corresponding eigenvectors; * if \(x\) is an \(n\)-vector it will be convenient to denote by \(\sigma(x)\) the sum of the coordinates of \(x\). Let \(\mathbf{1}\) denote the vector \((1,1,\ldots,1)\), then \[\operatorname{Tr}(J_{n}A^{k}) =\mathbf{1}A^{k}\mathbf{1}^{*}\] \[=(\sigma(U_{1}),\ldots,\sigma(U_{n}))\operatorname{diag}(\lambda _{1}^{k},\ldots,\lambda_{n}^{k})(\sigma(U_{1}),\ldots,\sigma(U_{n}))^{*}\] \[=|\sigma(U_{1})|^{2}\,\lambda_{1}^{k}+\cdots+|\sigma(U_{n})|^{2} \,\lambda_{n}^{k}=0.\] The above equality holds for \(k=2\) if and only if \(\sigma(U_{i})\lambda_{i}=0\Longleftrightarrow\sigma(U_{i})=0\vee\lambda_{i}=0\) for all \(i\in[n]\), which means that \(\operatorname{Tr}(J_{n}A^{k})=0\) for all \(k\in\mathbb{N}\). 
Now let us observe that
\[\operatorname{Tr}(J_{n}A^{2})=\sum_{i}\Big(\sum_{j}a_{i,j}\Big)\Big(\sum_{j}a_{j,i}\Big)=\sum_{i}\Big|\sum_{j}a_{i,j}\Big|^{2}=0\Longleftrightarrow\sum_{j}a_{i,j}=0\text{ for all }i\in[n].\]

### Convergence in distribution

In noncommutative probability we say that a sequence \(X_{n}\) of random variables _converges in distribution_ towards \(X\) as \(n\to\infty\), denoted by
\[X_{n}\xrightarrow{d}X,\]
if for all \(m\in\mathbb{N}\) we have
\[\lim_{n\to\infty}\varphi(X_{n}^{m})=\varphi(X^{m})\text{ or equivalently }\lim_{n\to\infty}K_{m}(X_{n})=K_{m}(X).\]

### Convergence in state

In this subsection we introduce convergence with respect to the state \(\omega\) with density matrix \(P_{n}\), that is,
\[\omega(C):=\operatorname{Tr}(P_{n}C)=\frac{1}{n}\sum_{i,j=1}^{n}c_{i,j}=\xi^{T}C\xi,\]
where, as above, by \(\xi\) we denote the unit vector \(\xi=\frac{1}{\sqrt{n}}(1,1,\dots,1)^{T}\) and \(C=[c_{i,j}]_{i,j=1}^{n}\in M_{n}(\mathbb{C})\). We say that a sequence of \(N\times N\) deterministic matrices \(A_{N}\) has limit distribution \(\mu\) with respect to the state \(\omega\) if for every \(m\in\mathbb{N}\) the moments satisfy
\[\lim_{N\to\infty}\operatorname{Tr}(P_{N}A_{N}^{m})=\lim_{N\to\infty}\omega(A_{N}^{m})=\int t^{m}d\mu(t).\]
Note that in this case \(\mu\) is not necessarily a probability measure.

### Combinatorics of tangent numbers

The _tangent numbers_
\[T_{2k-1}=(-1)^{k+1}\frac{4^{k}(4^{k}-1)B_{2k}}{2k} \tag{2.11}\]
for \(k\in\mathbb{N}\) are the Taylor coefficients of the tangent function
\[\tan z=\sum_{n=1}^{\infty}T_{n}\frac{z^{n}}{n!}=z+\frac{2}{3!}z^{3}+\frac{16}{5!}z^{5}+\frac{272}{7!}z^{7}+\cdots,\]
see [30, Page 287]. The tangent numbers are complemented by the _secant numbers_; together they form the sequence of _Euler zigzag numbers_ \(E_{n}\) with generating function
\[\tan z+\sec z=\sum_{n=0}^{\infty}\frac{E_{n}}{n!}z^{n}.\]
These numbers are also called _Springer numbers_, _up-down numbers_ [15] or _snake numbers_ [8, 33] and appear in several different contexts, see for example [26, 7, 50, 51] or André's theorem [2]. The _higher order tangent numbers_ \(T_{n}^{(k)}\) were introduced by Carlitz and Scoville [16] as the coefficients of the Taylor series
\[\tan^{k+1}z=\sum_{n=k+1}^{\infty}T_{n}^{(k+1)}\frac{z^{n}}{n!}.\]
The generating function of the tangent polynomials \(T_{n}(x)=\sum_{k=0}^{n-1}T_{n}^{(k+1)}x^{k}\) was computed by Carlitz and Scoville [16, Equation (1.6)] and we have
\[\begin{split} T(x,z)&=\frac{\tan z}{1-x\tan z}\\ &=\sum_{n=1}^{\infty}\frac{T_{n}(x)}{n!}z^{n}.\end{split} \tag{2.12}\]

## 3. Cumulants of quadratic forms

In this section we express the cumulants of quadratic forms of random variables in terms of conditional expectations of the system matrix onto the diagonal matrices. We start with a lemma which establishes an isomorphism between interval partitions of \(r+1\) elements and a special kind of interval partitions of \(2r\) points (the Boolean analogue of [20, Lemma 2.14]). In some sense the following lemma appears in the literature (see for example [25]), but no one has systematized it in the following form.
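Before stating the lemma, here is a small brute-force sanity check (a sketch of our own; the encoding of an interval partition of \([2r]\) by the set of positions at which its blocks end is only a convenient convention). One can check directly that \(\pi\vee\hat{1}_{2}^{r}=\hat{1}_{2r}\) holds precisely when \(\pi\) has no block boundary at an even position, that is, when \(\pi\) lies above the partition \(\{(1),(2,3),(4,5),\ldots,(2r-2,2r-1),(2r)\}\), and that there are \(2^{r}=\#\mathcal{I}(r+1)\) such partitions, in agreement with the isomorphism stated below.

```python
from itertools import product

def interval_partitions(n):
    # An interval partition of [n] is determined by the set of internal
    # "cut" positions p (a block ends at p, for 1 <= p <= n-1).
    for cuts in product((False, True), repeat=n - 1):
        yield tuple(p + 1 for p, c in enumerate(cuts) if c)

def join_is_full(n, cuts):
    # Union-find over [n] for the join of pi (given by its cuts) with the
    # standard matching (1,2)(3,4)...(n-1,n); returns True iff the join is 1_n.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)
    boundaries = set(cuts)
    for p in range(1, n):            # consecutive points in the same pi-block
        if p not in boundaries:
            union(p - 1, p)
    for p in range(0, n, 2):         # matching pairs {2k+1, 2k+2}, 0-indexed
        union(p, p + 1)
    return len({find(x) for x in range(n)}) == 1

for r in range(1, 7):
    full = [pi for pi in interval_partitions(2 * r) if join_is_full(2 * r, pi)]
    assert all(all(p % 2 == 1 for p in pi) for pi in full)   # no cut at an even position
    assert len(full) == 2 ** r                               # = #I(r+1)
print("Lemma 3.1 checked by brute force for r = 1,...,6")
```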
**Lemma 3.1**.: _Let \(r\in\mathbb{N}\) and \(\pi\in\mathcal{I}(2r)\), then \(\pi\vee\hat{1}_{2}^{r}=\hat{1}_{2r}\) if and only if \(\pi\geq\,\mathfrak{l}\,\,\cap\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \, _is a lattice isomorphic to \(\mathcal{I}(r+1)\)._ **Corollary 3.2**.: _The above lemma implies that the structure of blocks is_ \[\pi=\cap\overline{\cdots}\ \ \ \text{or}\ \ \pi=\cap\overline{\cdots}\ The following lemma connects interval partition with conditional expectation \(E^{\mathcal{D}}\) and is the key to the main result. **Lemma 3.4** ([20], Lemma 4.2).: _For scalar matrices \(A\in M_{n}(\mathbb{C})\) we have_ 1. \[\sum_{i=1}^{n}E_{i}A_{1}E_{i}A_{2}\cdots E_{i}A_{r}E_{i}=E^{\mathcal{D}}(A_{1})E ^{\mathcal{D}}(A_{2})\cdots E^{\mathcal{D}}(A_{r}).\] 2. _Let_ \(\pi\in\mathcal{I}(r)\)_, then_ \[\sum_{\ker\pi\nmid\geqslant\pi}E^{\mathcal{D}}(A_{1}E_{i_{1}}A_{2}E_{i_{2}} \cdots A_{r}E_{i_{r}}A_{r+1})=E^{\mathcal{D}}_{\pi^{c}}(A_{1},A_{2},\ldots,A_{ r+1}).\] For \(\pi^{c}\in\mathcal{IC}(n)\) we define by \(E^{\bigcirc}_{k}:M^{k}_{n}(\mathcal{A})\to M_{n}(\mathbb{C})\) the corresponding (operator-valued) _Hadamard cumulants_ on the closure of \(\pi^{c}\). Let \(B=(i_{1},\ldots,i_{k})\in\overline{\pi^{c}}\), then \[E^{\bigcirc}_{|B|}(A_{i_{1}},\ldots,A_{i_{k}})=\left\{\begin{array}{ll}E^{ \mathcal{D}}(A_{i_{1}}\ldots A_{i_{k}})&\text{if $B=(1,\ldots,i_{k})$ i.e. contains $1$}\\ E^{\mathcal{D}}(A_{i_{1}}\bigcirc\cdots\bigcirc A_{i_{k}})&\text{if $B$ doesn't contain $1$} \end{array}\right.\] where \(\bigcirc\) is the Hadamard product of matrices. **Example 3.5**.: (1). If \(\pi=\{(1,2,3),(4,5,6)\}\), then \(\overline{\pi^{c}}=\{(1,4),(2,3),(5,6)\}\), and \[E^{\bigcirc}_{\overline{\pi^{c}}}(A_{1},\ldots,A_{6})=E^{\mathcal{D}}(A_{1}E^ {\mathcal{D}}(A_{2}\bigcirc A_{3})A_{4})E^{\mathcal{D}}(A_{5}\bigcirc A_{6}).\] (2). For the singleton partition and scalar matrices \(A_{i}\in M_{n}(\mathbb{C})\) we have \[E^{\mathcal{D}}_{\mathfrak{l}^{\cdot}\cdots\mathfrak{l}}(A_{1}, \ldots,A_{r}) =E^{\mathcal{D}}(A_{1})\ldots E^{\mathcal{D}}(A_{r})\] \[=E^{\mathcal{D}}(A_{1}\bigcirc\cdots\bigcirc A_{r}).\] We will use the following result as the main technical tool to express the cumulants of quadratic forms of Boolean random variables in terms of the diagonal map of matrices. **Proposition 3.6**.: _Let \(X_{1},X_{2},\ldots,X_{n}\in\mathcal{A}\) be a family of Boolean independent random variables, \(A=[a_{i,j}]_{i,j=1}^{n}\in M_{n}^{sa}(\mathbb{C})\) and \(T_{n}=\sum_{i,j}a_{i,j}X_{i}X_{j}\) a quadratic form. The cumulants of \(T_{n}\) are given by_ 1. 
\[K_{r}(T_{n})=\sum_{i_{0},i_{1},\ldots,i_{r}\in[n]}\operatorname{Tr}(J_{n}E_{i_ {0}}AE_{i_{1}}\ldots AE_{i_{r}})\sum_{\begin{subarray}{c}\pi\in\mathcal{I}(2r )\\ \pi\vee\hat{1}_{2}^{r}=\hat{1}_{2r}\end{subarray}}K_{\pi}(X_{i_{0}},X_{i_{1}}, X_{i_{1}},\ldots,X_{i_{r-1}},X_{i_{r-1}},X_{i_{r}}).\] 2. _If we assume in addition that_ \(X_{i}\) _are identically distributed, then_ (3.3) \[K_{r}(T_{n})=\sum_{\pi\in\mathcal{I}(r+1)}\sum_{\ker\hat{1}\geqslant\pi} \operatorname{Tr}(J_{n}E_{i_{0}}AE_{i_{1}}\ldots AE_{i_{r}})K_{\hat{\pi}}(X),\] _which can be expressed as a convolution with Hadamard cumulants_ (3.4) \[=\sum_{\pi\in\mathcal{I}(r+1)}\operatorname{Tr}(E^{\bigcirc}_{\overline{\pi^{ c}}}(\underbrace{J_{n},A,\ldots,A}_{r+1}))K_{\hat{\pi}}(X),\] _where_ \(\hat{\pi}\) _is the image of_ \(\pi\in\mathcal{I}(r+1)\) _under the bijection introduced in Lemma_ 3.1_._ Proof.: Let \(Z_{i,j}=a_{i,j}X_{i}X_{j}\), then from the definition of \(T_{n}\) we see that \[K_{r}(T_{n})=\sum_{i_{1},i_{2},\ldots,i_{2r}\in[n]}K_{r}(Z_{i_{1},i_{2}},Z_{i_{3 },i_{4}},\ldots,Z_{i_{2r-1},i_{2r}}).\] We can apply Lemma 3.1 in the reverse direction and obtain \[=\sum_{\begin{subarray}{c}i_{1},i_{2},\ldots,i_{2r}\in[n]\\ \ker\,\mathbb{I}\ni\mathbb{I}\ \cap\ \bigcap\ \cdots\ \cap\ \mathbb{I}\\ =\sum_{i_{0},i_{1},i_{2},\ldots,i_{r}\in[n]}K_{r}(Z_{i_{0},i_{1}},Z_{i_{1}, i_{2}},Z_{i_{2},i_{3}}\ldots,Z_{i_{r-2},i_{r-1}},Z_{i_{r-1},i_{r}}).\end{subarray}\] We now expand further and obtain \[=\sum_{i_{0},i_{1},i_{2},\ldots,i_{r}\in[n]}a_{i_{0},i_{1}}a_{i_{1},i_{2}} \cdots a_{i_{r-1},i_{r}}K_{r}(X_{i_{0}}X_{i_{1}},X_{i_{1}}X_{i_{2}},\ldots,X_{i _{r-1}}X_{i_{r}}).\] Let us remind that \(J_{n}=[\begin{smallmatrix}1&1\\ 1&1\end{smallmatrix}]_{n}=[b_{i_{s}}]_{i,j=1}^{n}\), then \[=\sum_{i_{0},i_{1},i_{2},\ldots,i_{r}\in[n]}b_{i_{r},i_{0}}a_{i_{0 },i_{1}}a_{i_{1},i_{2}}\cdots a_{i_{r-1},i_{r}}K_{r}(X_{i_{0}}X_{i_{1}},X_{i_{1 }}X_{i_{2}},\ldots,X_{i_{r-1}}X_{i_{r}})\] \[=\sum_{i_{0},i_{1},\ldots,i_{r}\in[n]}\operatorname{Tr}(J_{n}E_{i _{0}}AE_{i_{1}}AE_{i_{2}}\ldots AE_{i_{r}})\,K_{r}(X_{i_{0}}X_{i_{1}},X_{i_{1 }}X_{i_{2}},\ldots,X_{i_{r-1}}X_{i_{r}})\] \[=\sum_{i_{0},i_{1},\ldots,i_{r}\in[n]}\operatorname{Tr}(J_{n}E_{i _{0}}AE_{i_{1}}AE_{i_{2}}\ldots AE_{i_{r}})\,\sum_{\begin{subarray}{c}\pi \in\mathcal{I}(\mathbb{Z}^{r})\\ \pi\vee\mathbb{I}_{2}^{r}=\mathbb{I}_{2r}\end{subarray}}K_{\pi}(X_{i_{0}},X_{ i_{1}},X_{i_{1}},X_{i_{2}},\ldots,X_{i_{r-1}},X_{i_{r}}),\] which yields (3.2). Now denoting by \(\hat{\pi}\) the image of \(\pi\in\mathcal{I}(r+1)\) under the bijection introduced in Lemma 3.1, we can rewrite this as \[=\sum_{\pi\in\mathcal{I}(r+1)}\sum_{\ker\mathbb{I}\ni\mathbb{I}} \operatorname{Tr}(J_{n}E_{i_{0}}AE_{i_{1}}AE_{i_{2}}\ldots AE_{i_{r}})K_{\hat{ \pi}}(X)\] which yields (3.3). Now we use Lemma 3.4 and obtain \[=\sum_{\pi\in\mathcal{I}(r+1)}\operatorname{Tr}(E_{\pi^{c}}^{ \mathcal{D}}(\underbrace{J_{n},A,\ldots,A}_{r+1}))K_{\hat{\pi}}(X).\] We further use the special structure of \(\pi^{c}\), namely that the first block decomposes \(\pi^{c}\) into disjoint segments, each consisting of a singleton. On every segment we can apply Remark 3.5 and obtain (3.4). **Corollary 3.7**.: _1. 
In the case of the Boolean normal distribution, with \(K_{1}=c\), \(K_{2}=1\) and \(K_{r}=0\) for \(r\geq 3\), formula (3.3) has only one contributing term \(\pi=\,_{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\ ## 4. The Boolean quadratic forms In this section we consider the Boolean analogue of some characterization problems (associated with quadratic form) from classical and free probability. Our main technical tool are the formulas introduced in the previous paragraph. The first problem is the Boolean analogue of [32, Proposition 2.2] and [32, Theorem 3.4] which can be formulated as follows. **Proposition 4.1**.: _Let \(X_{1},X_{2},\ldots,X_{n}\in\mathcal{A}\) be a Boolean standard normal family, \(A=[a_{i,j}]_{i,j=1}^{n}\in M_{n}^{sa}(\mathbb{C})\) and \(\lambda_{1},\ldots,\lambda_{n}\) and \(\sigma(U_{1}),\ldots,\sigma(U_{n})\) are as in the proof of Lemma 2.6. Then_ 1. _the_ \(H\)_-transform of the distribution of the quadratic form can be written in the form_ \[H_{T_{n}}(z)=\sum_{j=1}^{n}\frac{\left|\sigma(U_{i})\right|^{2}\lambda_{i}z}{ 1-\lambda_{i}z},\] 2. _the random variable_ \(T_{n}\) _has the Boolean Poisson distribution_ \(\nu(\lambda,\alpha)\) _if and only if the matrix_ \(A\) _has_ 1. _at least one eigenvalue equal to_ \(\lambda\) _and_ \[\sum_{i\text{ such that }\lambda_{i}=\lambda}\left|\sigma(U_{i})\right|^{2}=\alpha;\] 2. _all eigenvalues different than_ \(0\) _and_ \(\lambda\) _have corresponding eigenvectors with the sum of the coordinates equal to zero._ Proof.: (1) From the proof of Lemma 2.6, we obtain \[H_{T_{n}}(z)=\sum_{k=1}^{\text{\rm Tr}}(J_{n}A^{k})z^{k}=\sum_{k=1}(\left| \sigma(U_{1})\right|^{2}\lambda_{1}^{k}+\cdots+\left|\sigma(U_{n})\right|^{2} \lambda_{n}^{k})z^{k}=\sum_{j=1}^{n}\frac{\left|\sigma(U_{i})\right|^{2} \lambda_{i}z}{1-\lambda_{i}z}.\] (2) Again by Lemma 2.6, we have \[\text{\rm Tr}(J_{n}A^{k})=\left|\sigma(U_{1})\right|^{2}\lambda_{1}^{k}+ \cdots+\left|\sigma(U_{n})\right|^{2}\lambda_{n}^{k}=\alpha\lambda^{k}\] or equivalently \[\text{\rm Tr}(J_{n}(A/\lambda)^{k})=\left|\sigma(U_{1})\right|^{2}(\lambda_{1} /\lambda)^{k}+\cdots+\left|\sigma(U_{n})\right|^{2}(\lambda_{n}/\lambda)^{k}=\alpha. \tag{4.1}\] Therefore, if \(A\) satisfies (a) and (b), then \(T_{n}\) has the Boolean Poisson distribution \(\nu(\lambda,\alpha)\). To verify the sufficient condition, first we compute the limit of \[\lim_{k\to\infty}\text{\rm Tr}(J_{n}(A/\lambda)^{2k})=\begin{cases}0&\text{ if }\lambda_{i}<\lambda\text{ for all }i\in[n],\\ \infty&\text{ if }\lambda_{i}>\lambda\text{ and }\sigma(U_{i})>0\text{ at least for one }i\in[n],\end{cases}\] which is contradiction with \(\alpha>0\). This means that if \(\lambda_{i}>\lambda\), then \(\sigma(U_{i})=0\). Now, without loss of generality, we can assume that \(\lambda_{i}\leqslant\lambda\). From the last analysis we also conclude that \(\Lambda:=\{i|i\in[n]\text{ and }\lambda_{i}=\lambda\}\) is not empty and let \(\Lambda^{c}:=\{i|i\in[n]\text{ and }\lambda_{i}<\lambda\}\). 
We thus can rewrite the equation (4.1) as \[\sum_{i\in\Lambda^{c}}\left|\sigma(U_{i})\right|^{2}(\lambda_{i}/\lambda)^{k} =\alpha-\sum_{i\in\Lambda}\left|\sigma(U_{i})\right|^{2}.\] The limit of the left-hand side is zero as \(k\to\infty\) and we find that \(\alpha=\sum_{i\in\Lambda}\left|\sigma(U_{i})\right|^{2}\). In particular, we prove that \[\sum_{i\in\Lambda^{c}}\left|\sigma(U_{i})\right|^{2}\lambda_{i}^{k}=0\quad \text{ for }k\text{ even},\] which means that \(\sigma(U_{i})=0\), whenever \(\lambda_{i}\neq 0\) for \(i\in\Lambda^{c}\). The Boolean analogue of [32, Theorem 2.3] can be formulated as follows. **Proposition 4.2**.: _Let \(X_{1},X_{2},\ldots,X_{n}\in\mathcal{A}_{sa}\) be a Boolean standard normal family and let \(A=[a_{i,j}]_{i,j}^{n},B=[b_{i,j}]_{i,j=1}^{n}\in M_{n}^{sa}(\mathbb{C})\). Then quadratic forms \(Q_{1}=\sum_{i=1}^{n}a_{i,j}X_{i}X_{j}\) and \(Q_{2}=\sum_{i=1}^{n}b_{i,j}X_{i}X_{j}\) are Boolean independent iff_ 1. \(BA^{k}\) _and_ \(AB^{k}\) _are zero sums matrices for all_ \(k\in\mathbb{N}\) _if_ \(n\geq 3\)_;_ 2. \(BA\) _and_ \(AB\) _are zero sums matrices if_ \(n=2\)_,_ **Remark 4.3**.: 1. Proposition 4.2 implies that there is a gap in [38, Proposition 4.6]. It is a consequence that the author does not assume the fact that identity operator is not Boolean independent with the sample \(X_{1},X_{2},\ldots,X_{n}\). 2. The following example show that we cannot formulate the part (1) of Proposition 4.2, without parameter \(k\). We can define the matrices \(A\) and \(B\) for \(n\geq 3\) which satisfy \(J_{n}AB=0\), \(J_{n}BA=0\) and \(J_{n}A^{k}B\neq 0\), \(J_{n}B^{k}A\neq 0\) for some \(k\in\mathbb{N}\). Computer calculations show that it is possible to find many different such matrices. For example, for \(n=3\) \[A=\left[\begin{smallmatrix}-15&6&1\\ 6&9&-8\\ 1&-8\end{smallmatrix}\right]\text{ and }B=\left[\begin{smallmatrix}-1&16&60\\ 16&44&90\\ 6&90&75\end{smallmatrix}\right].\] Then \(J_{3}AB=0\), \(J_{3}BA=0\) and \(J_{3}A^{2}B\neq 0\), \(J_{3}B^{2}A\neq 0\). For \(n=4\) we find matrices \[A=\left[\begin{smallmatrix}1&2&3&4\\ 3&7&16&1\\ 4&16&10&10\end{smallmatrix}\right]\text{ and }B=\left[\begin{smallmatrix}81&-9&-9&-9\\ -9&1&1&1\\ -9&1&1&1\\ -9&1&1&1\end{smallmatrix}\right].\] Then \(J_{4}AB=0\) and \(J_{4}BA=0\). Here \(J_{4}A^{11}B\neq 0\) and \(J_{4}B^{12}A\neq 0\). It is possible to find such matrices \(A\) and \(B\) for \(n\geq 5\). You can create the system of equations using \(J_{n}AB=0\) and \(J_{n}BA=0\) and then find the set of solutions satisfying \(J_{n}A^{k}B\neq 0\) or \(J_{n}B^{k}A\neq 0\) for some \(k\geq 2\). Proof.: (1). We can write the joint cumulants of \(Q_{1}\), \(Q_{2}\) as \[K_{i_{1}+j_{1}\cdots+i_{k}+j_{k}}(\underbrace{Q_{1},\ldots,Q_{1}}_{i_{1}}, \underbrace{Q_{2},\ldots,Q_{2}}_{j_{1}},\ldots,\underbrace{Q_{1},\ldots,Q_{1}}_{ i_{k}},\underbrace{Q_{2},\ldots,Q_{2}}_{j_{k}})=\operatorname{Tr}(J_{n}A^{i_{1}}B^{j_{1}} \ldots A^{i_{k}}B^{j_{k}}),\] where \(i_{1},j_{1},\ldots,i_{k},j_{k}\in\mathbb{N}_{0}.\) Assume that \(Q_{1}\) and \(Q_{2}\) are Boolean independent, then mixed cumulants vanish and in particular we have \(\operatorname{Tr}(J_{n}AB^{k}A)=\frac{1}{n}\operatorname{Tr}(J_{n}AB^{k}AJ_{n})=0\). Then by the Schwarz inequality for nonnegative self-adjoint operators we have \(J_{n}A^{k}B=0\) (by symmetry we also get \(J_{n}B^{k}A=0\)), i.e., \(BA^{k}\) is zero sum matrix. 
For the inverse, let us observe that \(BA^{k}\) and \(AB^{k}\) are zero sums matrices if and only if \(J_{n}A^{k}B=0\) and \(J_{n}B^{k}A=0\), which imply that joint cumulants of \(Q_{1}\) and \(Q_{2}\) disappear. (2). If all mixed cumulants disappear then from part (1) we see that \(BA\) and \(AB\) are zero sums matrices. For the converse, let us observe that if \(J_{2}AB=0\) and \(J_{2}BA=0\), then \(AB=\left[\begin{smallmatrix}a+ib&-a-ib\\ -a-ib&a+ib\end{smallmatrix}\right]\) for some \(a,b\in\mathbb{R}\), because sums of rows and columns must be zero. Let us observe that multiplication of two self-adjoint matrices \(A\) and \(B\) has the structure \(\left[\begin{smallmatrix}x-iz&\cdot\\ \cdot&y+iz\end{smallmatrix}\right]\) for some \(x,y,z\in\mathbb{R}\). From this we conclude that \(AB=BA=\left[\begin{smallmatrix}a&-a\\ -a&a\end{smallmatrix}\right]\), namely the matrices \(A\) and \(B\) commute. In this case, we can calculate the joint cumulants of \(Q_{1}\), \(Q_{2}\) as \[K_{i_{1}+j_{1}\cdots+i_{k}+j_{k}} (\underbrace{Q_{1},\ldots,Q_{1}}_{i_{1}},\underbrace{Q_{2}, \ldots,Q_{2}}_{j_{1}},\ldots,\underbrace{Q_{1},\ldots,Q_{1}}_{i_{k}}, \underbrace{Q_{2},\ldots,Q_{2}}_{j_{k}})\] \[=\operatorname{Tr}(J_{2}ABA^{i_{1}\cdots+i_{k}-1}B^{j_{1}+\cdots+ j_{k}-1})=0.\] Kagan and Shalaevski [35] have shown that if the random variables \(X_{1},\ldots,X_{n}\) are i.i.d. and the distribution of \(\sum_{i=1}^{n}(X_{i}+a_{i})^{2}\), \(a_{i}\in\mathbb{R}\) depends only on \(\sum_{i=1}^{n}a_{i}^{2}\), then each \(X_{i}\sim N(0,\sigma)\). In the next theorem, we will show that the Boolean version of this theorem is true. Since \((X_{i}+a_{i})^{2}\), \(i\in\{1,\ldots,n\}\) are not Boolean independent (by Proposition 2.3), we cannot use the standard arguments as in [32, Theorem 3.2] or in [19]. **Theorem 4.4**.: _Let \(X_{1},X_{2},\ldots,X_{n}\) be Boolean independent identically distributed copies of a random variable \(X\) with mean \(0\) and variance \(1\). 
Then the sums \(P=\sum_{i=1}^{n}(X_{i}+a_{i})^{2}\) have the law that depends on \((a_{1},\ldots,a_{n})\) through \(\sum_{i=1}^{n}a_{i}^{2}\) only for every \(a_{i}\in\mathbb{R}\) if and only if \(X\) is Boolean Gaussian random variable._ Proof.: Suppose that \(X_{i}\) have the Boolean normal distribution with mean zero and variance one, then by Proposition 2.3 we have \[K_{r}(X_{i_{1}}+a_{i_{1}},X_{i_{2}}+a_{i_{2}},\ldots,X_{i_{r}}+a_ {i_{r}}) =\begin{cases}0&\text{ for }i_{1}\neq i_{r},\\ a_{i_{2}}\ldots a_{i_{r-1}}K_{2}(X_{i_{1}},X_{i_{1}})&\text{ for }i_{1}=i_{r}, \end{cases} \tag{4.2}\] \[=\begin{cases}0&\text{ for }i_{1}\neq i_{r},\\ a_{i_{2}}\ldots a_{i_{r-1}}&\text{ for }i_{1}=i_{r}.\end{cases}\] First we apply the decomposition from Corollary 3.2 and obtain \[K_{r}(P) =\sum_{i_{1},\ldots,i_{r}}\sum_{\begin{subarray}{c}\pi\in\mathcal{ I}(2r)\\ \pi\vee\mathrm{i}_{2}^{r}=\mathrm{i}_{2r}\end{subarray}}K_{\pi}(X_{i_{1}}+a_{i_{1} },X_{i_{1}}+a_{i_{1}},\ldots,X_{i_{r}}+a_{i_{r}},X_{i_{r}}+a_{i_{r}})\] \[=\sum_{i_{1},\ldots,i_{r}}\sum_{\begin{subarray}{c}\pi\in\mathcal{ I}(2r)\\ \pi=\mathrm{i}_{r}\end{subarray}}K_{\pi}(X_{i_{1}}+a_{i_{1}},X_{i_{1}}+a_{i_{1} },\ldots,X_{i_{r}}+a_{i_{r}},X_{i_{r}}+a_{i_{r}})\] using equation (4.2), we eliminate the zero contribution indexes and obtain \[=\sum_{i_{1},\ldots,i_{r-1}}\sum_{\pi=\left\lceil\frac{1}{\pi} \right\rceil}K_{\pi}(X_{i_{1}},X_{i_{1}}+a_{i_{1}},\ldots,X_{i_{1}}+a_{i_{1}}, X_{i_{1}})\] \[+\sum_{i_{1},\ldots,i_{r}}\sum_{\begin{subarray}{c}\pi\in \mathcal{I}(2r)\\ \ker\mathrm{i}\geqslant\end{subarray}}K_{\pi}(X_{i_{1}}+a_{i_{1}},X_{i_{1}}+a_ {i_{1}},\ldots,X_{i_{r}}+a_{i_{r}},X_{i_{r}}+a_{i_{r}}).\] The first sums correspond to \(\sum_{i_{1},\ldots,i_{r-1}}a_{i_{1}}^{2}\ldots a_{i_{r-1}}^{2}=\big{(}\sum_{i= 1}a_{i}^{2}\big{)}^{r-1}\). The second summation can be decomposed according to the number of singletons. Indeed, the outer block (which connects two odd blocks) has the contribution \(\sum_{i}a_{i}^{2}\) and the remaining indexes run over all independent set of pairs. It is not difficult to see that the number of these pairs depends on the number of singletons \(r-1+\#\operatorname{Sing}(\pi)-\#\pi\) and the corresponding contribution can be described as below. \[\begin{array}{ccccccccc}\ker\mathrm{i}\geqslant&\prod\mathsf{r}\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! 
More precisely, it can be written as the sums \[= \big{(}\sum_{i=1}a_{i}^{2}\big{)}^{r-1}\] \[+ \sum_{\begin{subarray}{c}\pi\in\mathcal{I}(2r)\\ \pi\,\vee\,\underline{1}_{2}^{r}=1_{2r}\\ \text{Sing}(\pi)=\{(1),(2r)\}\end{subarray}}\sum_{i_{1},\ldots,i_{r}}K_{\pi}(X _{i_{1}}+a_{i_{1}},X_{i_{1}}+a_{i_{1}},\ldots,X_{i_{r}}+a_{i_{r}},X_{i_{r}}+a _{i_{r}})\] \[+ \sum_{\begin{subarray}{c}\pi\in\mathcal{I}(2r)\\ \pi\,\vee\,\underline{1}_{2}^{r}=1_{2r}\\ \text{Sing}(\pi)=\{(1)\}\vee\text{Sing}(\pi)=\{(2r)\}\end{subarray}}\sum_{i_{1},\ldots,i_{r}}K_{\pi}(X_{i_{1}}+a_{i_{1}},X_{i_{1}}+a_{i_{1}},\ldots,X_{i_{r}}+ a_{i_{r}},X_{i_{r}}+a_{i_{r}})\] \[+ \sum_{\begin{subarray}{c}\pi\in\mathcal{I}(2r)\\ \pi\,\vee\,\underline{1}_{2}^{r}=1_{2r}\\ \text{Sing}(\pi)=\emptyset\\ \pi\not\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! By the hypothesis, the right hand side of \(K_{4}(Y(a))\) does not depend on \(a^{2}\), and thus we obtain \(K_{3}(X)=0\). Next we evaluate the \(r\)-th cumulants of \(Y(a)\), i.e, \[K_{r}(Y(a)) =\sum_{\begin{subarray}{c}\pi\in\mathcal{I}(2r)\\ \pi\vee\Gamma_{1}^{r}=1_{2r}\end{subarray}}K_{\pi}(Y(a))\] \[=K_{\overline{\bigcap}\cdots\mid}(X_{1}+a)+K_{\overline{ \bigcap}\cdots\mid}(X_{2})+\cdots+K_{\overline{\bigcap}\cdots\mid}(X_{n})+K_{ \overline{\bigcup}\cdots\mid}(X_{1}+a)+K_{\overline{\bigcap}\cdots\mid}(X_{ 1}+a)\] \[+\sum_{\begin{subarray}{c}\pi\in\mathcal{I}(2r)\\ \pi\vee\Gamma_{1}^{r}=1_{2r}\\ \end{subarray}}K_{\pi}(Y(a))\] \[=K_{2r}(X+a)+(n-1)K_{2r}(X)+2K_{\overline{\bigcup}\cdots\mid}(X+a)\] \[+\sum_{\begin{subarray}{c}\pi\in\mathcal{I}(2r)\\ \pi\vee\Gamma_{1}^{r}=1_{2r}\\ \end{subarray}}K_{\pi}(Y(a)).\] This in turn, by (4.3), is equal to \[=\sum_{i=0}^{2r-2}\binom{2r-2}{i}a^{i}K_{2r-i}(X)+(n-1)K_{2r}(X)+ 2\sum_{i=0}^{2r-3}\binom{2r-3}{i}a^{i+1}K_{2r-i-1}(X)\] \[+\sum_{\begin{subarray}{c}\pi\in\mathcal{I}(2r)\\ \pi\vee\Gamma_{1}^{r}=1_{2r}\\ \end{subarray}}K_{\pi}(Y(a))\] \[=nK_{2r}(X)+2arK_{2r-1}(X)+\sum_{i=2}^{2r-2}\binom{2r-2}{i}a^{i}K _{2r-i}(X)+2\sum_{i=1}^{2r-3}\binom{2r-3}{i}a^{i+1}K_{2r-i-1}(X)\] \[+\sum_{\begin{subarray}{c}\pi\in\mathcal{I}(2r)\\ \pi\neq\overline{\bigcup}\cdots\mid_{\alpha\neq\neq\emptyset\cdots\mid}\cdots \wedge\pi\neq\overline{\bigcup}\\ =nK_{2r}(X)+2arK_{2r-1}(X)+p_{2r}(a),\] where \(p_{2r}(a)=\sum_{i=2}^{2r}c_{i}a^{i}\) is the polynomial of indeterminate \(a\) such that the coefficients \(c_{i}\) are a polynomial of \(K_{1},\ldots,K_{2r-2}\). We would like to emphasize that there is no linear term in \(p_{2r}(a)\) because we know that \(K_{3}(X)=0\). Since \(K_{r}(Y(a))=K_{r}(Y(-a))\) for all \(a\in\mathbb{R}\), we have \(K_{i}(X)=0\) for odd \(i\geq 5\). 
Further we use induction over \(r\) to prove that all even cumulants of order at least four disappear. Indeed, we consider the following random variable \[\tilde{Y}(a,b)=(X_{1}+a)^{2}+(X_{2}+b)^{2}+X_{3}^{2}+\cdots+X_{n}^{2}.\] First we consider the special case \(K_{4}(\tilde{Y}(a,b))\), which we present without explanation how we got the coefficients, because the details will be explained later in the general case. We start with \(r=4\) and obtain \[K_{4}(\tilde{Y}(a,b)) =nK_{8}(X)+(a^{2}+b^{2})(26+2(n-2))K_{6}(X)\] \[+\Big{(}4(a^{4}+b^{4})+(2+(n-2))(a^{2}+b^{2})^{2}+45(a^{2}+b^{2 })\Big{)}K_{4}(X)\] \[+\sum_{\begin{subarray}{c}\pi\in\mathcal{I}(8)\\ \pi\vee\Gamma_{1}^{r}=1_{8}\end{subarray}}\big{(}\sum_{i=1}a_{i}^{2}\big{)}^{4 +\#\operatorname{Sing}(\pi)-\#\pi}.\] By assumption we have \(K_{4}(\tilde{Y}(a,0))=K_{4}(\tilde{Y}(a/\sqrt{2},a/\sqrt{2}))\) for all \(a\in\mathbb{R}\). Comparing the coefficients of the term \(a^{4}\), we get \(K_{4}(X)=0.\) Let us assume that \(K_{4}(X)=\cdots=K_{2r-8}=0\) We extract from \(K_{r}(\tilde{Y}(a,b))\) for \(r\geq 5\) all the factors involving \(K_{2r-4}(X)\). Now we would like to present the partitions and corresponding coefficients which contribute to the term \(K_{2r-4}(X)\). \[\begin{array}{rll}\text{Partition }\pi&\rightsquigarrow&\text{coefficient associated with }K_{2r-4}(X)\text{ in expansion of }K_{\pi}(X)\\ &\rightsquigarrow&\rightsquigarrow&\big{(}\binom{2r-2}{2}+2(r-2)\big{)}(a^{2}+b^{2 })+\binom{r-2}{2}(n-2)(a^{2}+b^{2})^{2}\\ &\rightsquigarrow&\rightsquigarrow&2(2r-3)(a^{2}+b^{2})\\ &\rightsquigarrow&\rightsquigarrow&1+\rightsquigarrow&\rightsquigarrow&1\rightsquigarrow& \rightsquigarrow&2(a^{2}+b^{2})\\ &\rightsquigarrow&\rightsquigarrow&1+1\rightsquigarrow&\rightsquigarrow&1\rightsquigarrow& \rightsquigarrow&1\rightsquigarrow&\rightsquigarrow&2(a^{2}+b^{2})\\ &\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&2 (2r-5)(a^{2}+b^{2})\\ &\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&1\rightsquigarrow&\rightsquigarrow& 2(2r-5)(a^{2}+b^{2})\\ &\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&1&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&2(2r-5)(a^{2}+b^{2})\\ &\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\binom{2r-4}{2} (a^{4}+b^{4})+2(r-2)a^{2}b^{2}\end{array}\] Essential to understanding how we obtain these different coefficients is to understand the last term, which we explain in detail. 
Let us observe that \[K_{\rightsquigarrow}\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&((X_{1}+a)^{2})+ K_{\rightsquigarrow}\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&((X_{2}+b)^{2})\\ &\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& 
\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\rightsquigarrow& \rightsquigarrow&\rightsquigarrow&\rightsquigarrow&\r ### The Boolean cancellation phenomenon In [20, Lemma 2.17] we established a curious cancellation result for symmetrized squares of centered linear statistics that their distributions do not depend on the odd cumulants. Now we show that one can use the decomposition from Lemma 3.1 for a much simpler approach leading to a similar result in the Boolean case, and moreover, we show that only one cumulant contributes to the distribution. **Proposition 4.5**.: _Let \(X_{1},X_{2},\ldots,X_{n}\) be Boolean independent identically distributed copies of a random variable \(X\) and \(L=\sum_{i=1}^{n}\alpha_{i}X_{i}\) be a linear form with \(\varphi(L)=0\). 
Then the \(r^{th}\) cumulant of the quadratic statistic_ \[P=\sum_{\sigma\in\mathfrak{S}_{n}}L_{\sigma}^{2},\] _has only one contributing term, namely the maximal partition \(\hat{1}_{2r}=\overline{\bigcap\cdots\bigcap}\)._ Proof.: Let \(B_{k}=(1,\ldots,2k-1)\), with \(k\in[r]\) be an enumeration of the first odd block of \([2r]\) and let \(\mathcal{I}^{B_{k}}(2r)=\{\pi\in\mathcal{I}(2r)\mid B_{k}\in\pi\}\). Directly from Lemma 3.1, we have the disjoint decomposition \[\mathcal{I}(2r)=\mathcal{I}^{B_{1}}(2r)\cup\mathcal{I}^{B_{2}}(2r)\cup \mathcal{I}^{B_{3}}(2r)\cup\cdots\cup\mathcal{I}^{B_{r}}(2r)\cup\{\overline{ \bigcap\cdots}\}. \tag{4.4}\] First, we apply the product formula of Theorem 2.2 and obtain \[K_{r}(P) =\sum_{\sigma_{1},\ldots,\sigma_{r}\in\mathfrak{S}_{n}}K_{r}(L_ {\sigma_{1}}^{2},L_{\sigma_{2}}^{2},\ldots,L_{\sigma_{r}}^{2})\] \[=\sum_{\sigma_{1},\ldots,\sigma_{r}\in\mathfrak{S}_{n}}\sum_{ \begin{subarray}{c}\pi\in\mathcal{I}(2r)\\ \pi\vee\hat{1}_{2}^{r}=1_{2r}\end{subarray}}K_{\pi}(L_{\sigma_{1}},L_{\sigma_{ 1}},L_{\sigma_{2}},L_{\sigma_{2}},\ldots,L_{\sigma_{r}},L_{\sigma_{r}})\] \[=\sum_{\begin{subarray}{c}\pi\in\mathcal{I}(2r)\\ \pi\vee\hat{1}_{2}^{r}=1_{2r}\end{subarray}}\tilde{K}_{\pi}(L)\] where \(\tilde{K}_{\pi}(L)=\sum_{\sigma_{1},\sigma_{2},\ldots,\sigma_{r}\in\mathfrak{ S}_{n}}K_{\pi}(L_{\sigma_{1}},L_{\sigma_{1}},L_{\sigma_{2}},L_{\sigma_{2}}, \ldots,L_{\sigma_{r}},L_{\sigma_{r}}).\) We can split the last sums according to the decomposition (4.4) as \[\sum_{\begin{subarray}{c}\pi\in\mathcal{I}(2r)\\ \pi\vee\hat{1}_{2}^{r}=1_{2r}\end{subarray}}\tilde{K}_{\pi}(L)=\sum_{\pi\to \overline{\bigcap\cdots}}\tilde{K}_{\pi}(L)+\sum_{k=1}^{r}\sum_{ \begin{subarray}{c}\pi\in\mathcal{I}^{B_{k}}(2r)\\ \pi\vee\hat{1}_{2}^{r}=1_{2r}\end{subarray}}\tilde{K}_{\pi}(L). 
\tag{4.5}\] Every \(\pi\) in this sum splits \([2r]\) into the two intervals \(B_{k}=(1,\ldots,2k-1)\) and it is a complement \((2k,\ldots,2r)\), hence we have \[\sum_{\begin{subarray}{c}\pi\in\mathcal{I}^{B_{k}}(2r)\\ \pi\vee\hat{1}_{2}^{r}=1_{2r}\end{subarray}}\tilde{K}_{\pi}(L) =\sum_{\sigma_{1},\sigma_{2},\ldots,\sigma_{k}\in\mathfrak{S}_{n} }K_{2k-1}(L_{\sigma_{1}},L_{\sigma_{1}},\ldots,L_{\sigma_{k-1}},L_{\sigma_{k-1 }},L_{\sigma_{k}})\] \[\times\sum_{\sigma_{k+1},\ldots,\sigma_{r}\in\mathfrak{S}_{n}} \sum_{\begin{subarray}{c}\pi\in\mathcal{I}(2r-k)+1\\ \pi\vee\bigcap\cdots\cap\bigcap=1_{2(r-k)+1}\end{subarray}}K_{\pi}(L_{\sigma_{k} },L_{\sigma_{k+1}},L_{\sigma_{k+1}},\ldots,L_{\sigma_{r}},L_{\sigma_{r}})\] \[=\sum_{\sigma_{1},\sigma_{2},\ldots,\sigma_{k}\in\mathfrak{S}_{n} }K_{2k-1}(L_{\sigma_{1}},L_{\sigma_{1}},\ldots,L_{\sigma_{k-1}},L_{\sigma_{k-1 }},L_{\sigma_{k}})\times K_{r-k+1}(L_{\sigma_{k}},P,\ldots,P).\] Finally, by multilinearity, the factor \[K_{r-k+1}(L_{\sigma_{k}},P,\ldots,P)=\sum_{i=1}^{n}\alpha_{i}K_{r-k+1}(X_{ \sigma_{k}(i)},P,\ldots,P)=K_{r-k+1}(X_{1},P,\ldots,P)\sum_{i=1}^{n}\alpha_{i}\] vanishes for every \(\sigma_{k}\) because in the last equality we use the fact that \(P\) is symmetric polynomial in variables \(X_{1},\ldots,X_{n}\) **Corollary 4.6**.: _Let \(P\) be as in Proposition 4.5, then_ \[K_{r}(P)=K_{r}\big{(}(n-1)!(\alpha_{1}^{2}+\cdots+\alpha_{n}^{2})(X_{1}^{2}+ \cdots+X_{n}^{2})+(n-2)!(\sum_{i\neq j}\alpha_{i}\alpha_{j})\sum_{i\neq j}X_{i} X_{j}\big{)}\] _by Proposition 4.5_ \[=K_{r}\big{(}(n-1)!(\alpha_{1}^{2}+\cdots+\alpha_{n}^{2})(X_{1}^{2}+\cdots+X_{n }^{2})\big{)}\] _and by multilinearity and Boolean independence_ \[=n(n-1)!^{r}(\alpha_{1}^{2}+\cdots+\alpha_{n}^{2})^{r}K_{2r}(X).\] _In the case of the sample variance \(Q_{n}=\sum_{i=1}^{n}(X_{i}-\overline{X})^{2}\) we apply Proposition 4.5 for \(L=X_{1}-\overline{X}\) and \(P=\sum_{\sigma\in\mathfrak{S}_{n}}L_{\sigma}^{2}=(n-1)!\,Q_{n}\) to conclude that_ \[K_{r}(P) =n(n-1)!^{r}\Big{(}\Big{(}1-\frac{1}{n}\Big{)}^{2}+\underbrace{ \frac{n^{2}}{n^{2}}+\cdots+\frac{1}{n^{2}}}_{n-1}\Big{)}^{r}K_{2r}(X)\] \[=n(n-1)!^{r}\Big{(}1-\frac{1}{n}\Big{)}^{r}K_{2r}(X)\] _and from above we have_ \[K_{r}(Q_{n})=n\left(1-\frac{1}{n}\right)^{r}K_{2r}(X).\] The main result of [22] is a characterization of quadratic forms, which exhibits the phenomenon of cancellation of odd cumulants, i.e., whose distributions do not depend on the odd cumulants of the distributions of the arbitrary free random variables. Note that the similar result for free identically distributed families is still open. Now, we present the solution of this problem in a Boolean version, which indicates the direction to go in free probability. **Theorem 4.7**.: _Let \(X_{1},X_{2},\ldots,X_{n}\) be Boolean independent identically distributed copies of a random variable \(X\), \(A=[a_{i,j}]_{i,j=1}^{n}\in M_{n}^{sa}(\mathbb{C})\) and \(T_{n}=\sum a_{i,j}X_{i}X_{j}\) a quadratic form. 
Then the \(r^{th}\) cumulants of the \(T_{n}\) have only one cotributing term \(\hat{1}_{2r}=\sqcap\cdots\), if and only if \(A\) is a zero sum matrix with constant diagonal._ Proof.: Let \(\hat{\pi}\) be the image of \(\pi\in\mathcal{I}(r+1)\) under the bijection introduced in Lemma 3.1, then \(\hat{\pi}\) has two singletons if and only if \(\pi\) consists only singletons, namely \[\hat{\pi}=\shortmid\cap\cap\cdots\cap\shortmid\in\mathcal{I}(2r)\longleftrightarrow \pi=\shortmid t\cdot\cdot\cdot\cdot\shortmid\in\mathcal{I}(r+1)\] and so by Proposition 3.6 we can write (4.6) \[K_{r}(T_{n})=\operatorname{Tr}(J_{n}A_{n}^{r})K_{1}^{2}(X)K_{2}^{r-1}(X)+ \sum_{\begin{subarray}{c}\pi\in\mathcal{I}(r+1)\\ \pi\supset i\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \, and observe that \(\overline{\sigma^{c}}=\bigcap\limits_{r\in\overline{\cdots\cdots}}\bigcap \limits_{r\in\overline{\cdots\cdots}}\), which yields \[=\operatorname{Tr}(J_{n}E^{\heartsuit}(\underbrace{A,\ldots,A}_{ \frac{r-1}{2}})AE^{\heartsuit}(\underbrace{A,\ldots,A}_{\frac{r-1}{2}}))K_{r}^{ 2}(X)+\sum_{\begin{subarray}{c}\pi\in\overline{\ell}(r+1)\\ \pi\neq\sigma\end{subarray}\uparrow\{r+1\}\atop\pi\neq\sigma\sigma} \operatorname{Tr}(E_{K\pi^{c}}^{\heartsuit}(\underbrace{J_{n},A,\ldots,A}_{r+1 }))K_{\sharp}(X)\] \[=\operatorname{Tr}(J_{n}E^{\heartsuit}(A^{\heartsuit\frac{r-1}{2}}) AE^{\heartsuit}(A^{\heartsuit\frac{r-1}{2}}))K_{r}^{2}(X)+\sum_{\begin{subarray}{c}\pi \in\overline{\ell}(r+1)\\ \pi\neq\sigma\end{subarray}\uparrow\{r+1\}\atop\pi\neq\sigma\sigma} \operatorname{Tr}(E_{\pi}^{\heartsuit}(\underbrace{J_{n},A,\ldots,A}_{r+1}))K _{\sharp}(X).\] Since we consider the interval partitions, there is just one coefficient appearing in the term with \(K_{r}^{2}(X)\). Thus we see that \[\operatorname{Tr}(J_{n}E^{\heartsuit}(A^{\heartsuit\frac{r-1}{2}}) AE^{\heartsuit}(A^{\heartsuit\frac{r-1}{2}}))=\operatorname{Tr}(J_{n}\operatorname{ diag}(A)^{\frac{r-1}{2}}A\operatorname{diag}(A)^{\frac{r-1}{2}})=0\] for all odd \(r\). Thus from Lemma 2.5, this implies that \(A\) is constant diagonal matrix. The implication in the other direction is a simple manipulation with a constant diagonal matrix. 
The Hadamard cumulants which are not associated with the first block are of the form \[E^{\heartsuit}(A^{\heartsuit k})=\left[\begin{smallmatrix}a^{k}&0\\ 0&a^{k}\end{smallmatrix}\right]_{n}.\] From this we see that the Hadamard cumulants are commutative and formula (3.4) can be rewritten as \[K_{r}(T_{n})=\sum_{\pi\in\overline{\ell}(r+1)}\operatorname{Tr}(J_{n}A^{|B_{1 }|-1})a^{r+1-|B_{1}|}K_{\sharp}(X) \tag{4.7}\] where \(B_{1}\) is the first block of \(\overline{K}(\pi)\). By Lemma 2.6 we have \(\operatorname{Tr}(J_{n}A^{|B_{1}|-1})=0\) whenever size of block \(B_{1}\) is bigger then one, and from this we conclude that the formula (4.7) takes non zero value when \(B_{1}=(1)\), i.e. \(\pi=\bigcap\limits_{r\in\overline{\cdots\cdots}}\), and in this case we have \(K_{r}(T_{n})=na^{r}K_{2r}(X)\). **Corollary 4.8**.: _These findings provide one more argument for the cancellation phenomenon of the sample variance_ \[Q_{n}=\sum_{i=1}^{n}(X_{i}-\overline{X})^{2}=\left(1-\frac{1}{n}\right)\sum_ {i=1}^{n}X_{i}^{2}-\frac{1}{n}\sum_{i,j=1,\ i\neq j}^{n}X_{i}X_{j}.\] _The corresponding matrix from Theorem 4.7 is \(A=\left[\begin{smallmatrix}1-\frac{1}{n}&-\frac{1}{n}\\ -\frac{1}{n}&1-\frac{1}{n}\end{smallmatrix}\right]_{n}\). It is easily verified that \(A\) is an orthogonal projection of rank \(n-1\) and we see that \(\operatorname{Tr}(J_{n}A)=\operatorname{Tr}(J_{n}A^{2})=0\)._ **Remark 4.9**.: We note that a cancellation phenomenon occurs also for the free commutator [43, 22] but this is not true in the case of Boolean probability. Indeed, if \(X\) and \(Y\) are Boolean independent, then \[K_{2}(i(XY-YX))=K_{1}^{2}(X)K_{2}(Y)+K_{1}^{2}(Y)K_{2}(X).\] But from Theorem 4.7, we conclude that for identical distributed random variables \(X_{1},X_{2},X_{3}\) cancellation phenomenon is true for \(i(X_{1}X_{2}-X_{2}X_{1})+i(X_{3}X_{1}-X_{1}X_{3})+i(X_{2}X_{3}-X_{3}X_{2})\) because the corresponding matrix is \[\left[\begin{smallmatrix}0&i&-i\\ -i&0&i\\ i&-i&0\end{smallmatrix}\right].\] We conclude this section with the Boolean analogue of the \(\chi^{2}\)-conjecture ()see [20]). **Proposition 4.10**.: _Let \(X_{1},X_{2},\ldots,X_{n}\in\mathcal{A}_{sa}\) be the Boolean copies of a random variable \(X\) with variance \(1\). Then \(Q_{n}=\sum_{i=1}^{n}(X_{i}-\overline{X})^{2}\) is distributed according to Boolean Poisson, with rate \(n\) and the jump size \(1-\frac{1}{n}\), if and only if \(X\) is even Poisson random variable._ Proof.: Let \(X_{1},\ldots,X_{n}\) be Boolean copies of a fixed random variable \(X\). In this case Corollary 4.6 and the assumption of Theorem 4.10 imply that \[K_{r}(Q_{n})=n\left(1-\frac{1}{n}\right)^{r}K_{2r}(X)=n\left(1-\frac{1}{n} \right)^{r}.\] From this we infer that \(K_{2r}(X)=1\). Conversely, suppose that \(X_{i}\)'s are even Poisson, then from Corollary 4.6 we get \(K_{r}(Q_{n})=n\left(1-\frac{1}{n}\right)^{r}\). ## 5. Limit theorems for quadratic forms ### A general Limit Theorem In this section we consider limit theorems for the sums of quadratic forms of the following type. **Theorem 5.1**.: _Let \(A_{n}=[a_{i,j}^{(n)}]\in M_{n}^{sa}(\mathbb{C})\) with zero diagonal, uniformly bounded entries i.e. \(\sup_{i,j,n}\bigl{|}a_{i,j}^{(n)}\bigr{|}<\infty\) and such that the matrix \(\frac{1}{n}A_{n}\) has limit distribution \(\mu\) with respect to the state \(\omega\). Let \(X_{1},\ldots,X_{n}\) be identically distributed Boolean independent random variables, with mean \(\varphi(X_{i})=\frac{1}{\sqrt{n}}\) and variance equal 1. 
Additionally, let Boolean cumulants satisfy the product formula of Theorem 2.2. Then the sequence of quadratic forms_ \[Q_{n}=\frac{1}{n}\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{n}a_{i,j}^{(n)}X_{i}X_{j}\] _converges in distribution to \(Y\), where_ \[K_{r}(Y)=\int t^{r}d\mu(t).\] **Remark 5.2**.: Note that assumption \(a_{i,i}=0\) plays a crucial role in the proof of Theorem 5.1, which is in opposition to [21, Theorem 3.1]. We may add diagonal elements but then we need to assume the existence of a limit of the first moment, such as \[\lim_{n\to\infty}K_{1}(Q_{n})=\lim_{n\to\infty}\frac{1}{n}\left(\sum_{i,j}a_{i,j}/n+\sum_{i}a_{i,i}\right).\] We do not do this because our main example does not have diagonal elements. Proof.: The random variables \(X_{1},\ldots,X_{n}\) have the same distribution (which depends on \(n\)), so we can use the product formula from Proposition 3.6. Then \[K_{r}(Q_{n})=\frac{1}{n^{r}}\sum_{\pi\in\mathcal{I}(r+1)}\left(\sum_{\ker \mathfrak{i}\geq\pi}\operatorname{Tr}(J_{n}E_{i_{0}}AE_{i_{1}}\ldots AE_{i_{r }})\right)K_{\hat{\pi}}(X).\] where \(\hat{\pi}\) is the image of \(\pi\in\mathcal{I}(r+1)\) under the bijection introduced in Lemma 3.1. This in turn implies that there are only \(n^{\#\pi}\) allowed choices of indices \(\underline{i}\) and we have the following estimate \[\left|\frac{1}{n^{r}}\left(\sum_{\ker\mathfrak{i}\geq\pi} \operatorname{Tr}(J_{n}E_{i_{0}}AE_{i_{1}}\ldots AE_{i_{r}})\right)K_{\hat{ \pi}}(X)\right| \leq n^{\#\pi-r}C^{r}\left|K_{\hat{\pi}}(X)\right|\] \[=n^{\#\pi-r-\frac{1}{2}\#\operatorname{Sing}(\hat{\pi})}C^{r} \left|K_{\hat{\pi}\setminus\operatorname{Sing}(\hat{\pi})}(X)\right| \tag{5.1}\] where \(C=\sup_{i,j,n}\bigl{|}a_{i,j}^{(n)}\bigr{|}\) and in the last line we use the assumption \(\varphi(X_{i})=\frac{1}{\sqrt{n}}\). Notation \(\hat{\pi}\setminus\operatorname{Sing}(\hat{\pi})\) means the difference of two sets \(\hat{\pi}\) and \(\operatorname{Sing}(\hat{\pi})\). By Lemma 3.1, we have \[\#\pi=\left\{\begin{array}{ll}1&\text{if $\hat{\pi}$ does not have odd blocks, namely $\hat{\pi}=\prod\cdots\uparrow$},\\ \leq r-1&\text{if $\hat{\pi}$ does not have singletons,}\\ \leq r&\text{if $\hat{\pi}$ has at most one singleton,}\\ \leq r+1&\text{if $\hat{\pi}$ has two singletons.}\end{array}\right.\] Figure 4 presents the examples of partitions of \(\pi\) and \(\hat{\pi}\). We see that the factor \(n^{\#\pi-r-\frac{1}{2}\#\operatorname{Sing}(\hat{\pi})}\) converges to zero whenever \(\hat{\pi}\neq\mathfrak{i}\cap\mathfrak{n}\cdots\cap\mathfrak{i}\) (as well as expression (5.1) tends to zero) as \(n\to\infty\).The cases \(r=1\) and \(\#\operatorname{Sing}(\hat{\pi})=0\) should be considered separately, because then we have one more contributing term \(\pi=(1,2)\), but since we assume that the matrices \(A_{n}\) have zero diagonal, we may skip this case. 
On the other hand, \(\#\pi=r+1\) if and only if \(\pi\) is the singleton partition, equivalently \(\hat{\pi}={\,\mid\,}\mathsf{r}\mathsf{\cap}\mathsf{\cap}\cdots\mathsf{\cap}{ \,\mid\,}\mathsf{i}\) and finally, by Corollary 3.7, we have \[K_{r}(Q_{n}) =\frac{1}{n^{r+1}}\operatorname{Tr}(J_{n}A_{n}^{r})+\mathcal{O}( 1/n)\] \[=\omega(\frac{1}{n^{r}}A_{n}^{r})+\mathcal{O}(1/n)\xrightarrow[n \to\infty]{}\int t^{r}d\mu(t).\] ### Generating functions In this subsection we compute the moment generating function of the matrix \(aP_{n}+bB_{n}\) with respect to the state \(\omega\), that is \[F_{aP_{n}+bB_{n}}(z)=\omega((I-z(aP_{n}+bB_{n}))^{-1})=\operatorname{Tr}(P_{ n}(I-z(aP_{n}+bB_{n}))^{-1}),\quad a,b\in\mathbb{R}.\] In [23, Lemma 4.3 and Lemma 4.4], the moment generating function \(F_{B_{n}}(z)\) was computed using the cyclic Boolean convolution and it is equal to \[F_{B_{n}}(z)=\frac{\tan(n\arctan\frac{z}{n})}{z}. \tag{5.2}\] Now we compute it in a more general case. **Lemma 5.3**.: _The moment generating function of \(aP_{n}+bB_{n}\) is equal to_ \[F_{aP_{n}+bB_{n}}(z)=\frac{\tan(n\arctan\frac{bz}{n})}{bz-az\tan(n\arctan \frac{bz}{n})}. \tag{5.3}\] Proof.: Note that \(P_{n}\) is a self-adjoint projection of rank \(1\). It follows that any mixed moment can be expressed as (see also [23, proof of Theorem 3.1]) \[\operatorname{Tr}(P_{n}^{k_{1}}B_{n}^{l_{1}}P_{n}^{k_{2}}B_{n}^{l _{2}}\cdots P_{n}^{k_{r}}B_{n}^{l_{r}}) =\operatorname{Tr}(P_{n}B_{n}^{l_{1}}P_{n}B_{n}^{l_{2}}\cdots P_{ n}B_{n}^{l_{r}})\] \[=\operatorname{Tr}(P_{n}B_{n}^{l_{1}})\operatorname{Tr}(P_{n}B_{ n}^{l_{2}})\cdots\operatorname{Tr}(P_{n}B_{n}^{l_{r}}).\] The first terms of the power series are easy to calculate, \(\omega(aP_{n}+bB_{n})=a\omega(P_{n})+b\omega(B_{n})=a\), and we have \[F_{aP_{n}+bB_{n}}(z)=1+az+\sum_{m\geq 2}\operatorname{Tr}(P_{n}(aP_{n}+bB_{n} )^{m})z^{m}. 
\tag{5.4}\] For \(m\geqslant 2\) we expand the powers and arrange the resulting words according to the last letter: \[\operatorname{Tr}(P_{n}(aP_{n}+bB_{n})^{m})=\operatorname{Tr} \Bigl{(}a^{m}P_{n}+P_{n}(bB_{n})^{m}\] \[+\sum_{\begin{subarray}{c}k\geqslant 1\\ p_{0}>0\\ p_{1},p_{2},\cdots,p_{k}\geqslant 1\\ p_{0}+q_{1}+p_{1}+\cdots+q_{k}+p_{k}=m\end{subarray}}P_{n}(bB_{n})^{p_{0}}(aP _{n})^{q_{1}}(bB_{n})^{p_{1}}(aP_{n})^{q_{2}}(bB_{n})^{p_{2}}\cdots(aP_{n})^{ q_{k}}(bB_{n})^{p_{k}}\] \[+\sum_{\begin{subarray}{c}k\geqslant 1\\ p_{1},p_{2},\cdots,p_{k}\geqslant 1\\ q_{1},q_{2},\cdots,q_{k}\geqslant 1\\ q_{0}+p_{1}+q_{1}+\cdots+p_{k}+q_{k}=m\end{subarray}}P_{n}(aP_{n})^{q_{0}}(bB_ {n})^{p_{1}}(aP_{n})^{q_{1}}(bB_{n})^{p_{2}}(aP_{n})^{q_{2}}\cdots(bB_{n})^{p_ {k}}(aP_{n})^{q_{k}}\Bigr{)}\] \[=\operatorname{Tr}(a^{m}P_{n})+\operatorname{Tr}(P_{n}(bB_{n})^{m})\] \[+\sum_{\begin{subarray}{c}k\geqslant 1\\ p_{0}>0\\ p_{1},p_{2},\cdots,p_{k}\geqslant 1\\ p_{0}+q_{1}+p_{1}+\cdots+p_{k}+p_{k}=m\end{subarray}}a^{q_{0}+q_{1}+\cdots+q_{k }}\operatorname{Tr}(P_{n}B_{n}^{p_{0}})b^{p_{0}}\operatorname{Tr}(P_{n}B_{n}^{ p_{1}})b^{p_{1}}\cdots\operatorname{Tr}(P_{n}B_{n}^{p_{k-1}})b^{p_{k-1}} \operatorname{Tr}(P_{n}B_{n}^{p_{k}})b^{p_{k}}\] \[+\sum_{\begin{subarray}{c}k\geqslant 1\\ p_{1},p_{2},\cdots,p_{k}\geqslant 1\\ q_{0}+p_{1}+\cdots+p_{k}+p_{k}=m\end{subarray}}a^{q_{0}+q_{1}+\cdots+q_{k}} \operatorname{Tr}(P_{n}B_{n}^{p_{1}})b^{p_{1}}\operatorname{Tr}(P_{n}B_{n}^{ p_{2}})b^{p_{2}}\cdots\operatorname{Tr}(P_{n}B_{n}^{p_{k}})b^{p_{k}}.\] Let \(\hat{F}_{B_{n}}(bz)=\frac{az}{1-az}(F_{B_{n}}(bz)-1)\), then inserting this expansion into (5.4) we obtain \[F_{aP_{n}+bB_{n}}(z)= 1+az+\sum_{m\geqslant 2}\operatorname{Tr}(P_{n}B_{n}^{m})(bz)^{m }+\sum_{m\geqslant 2}(az)^{m}\] \[+\sum_{k\geqslant 1}\left(\frac{az}{1-az}\right)^{k}F_{B_{n}}(bz) (F_{B_{n}}(bz)-1)^{k}\] \[+\frac{1}{1-az}\sum_{k\geqslant 1}\left(\frac{az}{1-az}\right)^{k }(F_{B_{n}}(bz)-1)^{k}\] \[=F_{B_{n}}(bz)+\frac{az}{1-az}+\left(\frac{1}{1-az}+F_{B_{n}}(bz) \right)\frac{\hat{F}_{B_{n}}(bz)}{1-\hat{F}_{B_{n}}(bz)}\] \[=\left(\frac{1}{1-az}+F_{B_{n}}(bz)\right)\frac{1}{1-\hat{F}_{B_ {n}}(bz)}-1\] \[=\frac{F_{B_{n}}(bz)}{1-azF_{B_{n}}(bz)},\] and by inserting the equation (5.2) into above expression, we get \[=\frac{\tan(n\arctan\frac{bz}{n})}{bz-az\tan(n\arctan\frac{bz}{n})}.\] ### The Limit Theorem for commutators and anticommutators Finally, we will illustrate the limit theorem 5.1 with some interesting computable cases. **Theorem 5.4** (Boolean generalized tangent law).: _Let \(X_{1},\ldots,X_{n}\) be identically distributed Boolean independent random variables with mean \(\varphi(X_{i})=\frac{1}{\sqrt{n}}\) and variance 1. Then the sequence of quadratic forms_ \[Q_{n}=\frac{1}{n}\sum_{\begin{subarray}{c}k,j=1\\ k<j\end{subarray}}^{n}\left(a(X_{k}X_{j}+X_{j}X_{k})+ib(X_{k}X_{j}-X_{j}X_{k}) \right)\xrightarrow{d}Y,\quad a,b\in\mathbb{R}\] _where the \(H\)-transform of the limit distribution has the form_ \[H_{Y}(z)=\frac{1}{z}\frac{\tan(bz)}{b-a\tan(bz)}-1. \tag{5.5}\] Proof.: The system matrix is \[\frac{1}{n}A_{n}=\frac{1}{n}\left[\begin{smallmatrix}0&a+ib\\ a-ib&0\end{smallmatrix}\right]_{n}=\underbrace{aP_{n}+bB_{n}}_{D_{n}}- \underbrace{a/n\left[\begin{smallmatrix}1&0\\ 0&1\end{smallmatrix}\right]_{n}}_{F_{n}}.\] Asymptotically, the moments of \(\frac{1}{n}A_{n}\) and \(D_{n}\), are equal. 
Indeed,

\[\omega((D_{n}-F_{n})^{r})=\omega(D_{n}^{r})+\sum_{i=1}^{r}\underbrace{\binom{r}{i}\left(\frac{-a}{n}\right)^{i}\omega(D_{n}^{r-i})}_{\xrightarrow[n\to\infty]{}\,0},\]

where each summand tends to zero: the factor \(\left(\frac{a}{n}\right)^{i}\) converges to zero, while \(\omega(D_{n}^{r-i})\) stays bounded, by the simple estimate

\[\left|\omega(D_{n}^{r-i})\right|\leqslant\omega((|a|\,P_{n}+|b|\,P_{n})^{r-i})=(|a|+|b|)^{r-i}.\]

Thus from Lemma 5.3 and Theorem 5.1 we have

\[H_{Y}(z)=\lim_{n\to\infty}F_{aP_{n}+bB_{n}}(z)-1=\lim_{n\to\infty}\frac{\tan(n\arctan\frac{bz}{n})}{zb-az\tan(n\arctan\frac{bz}{n})}-1=\frac{\tan(bz)}{zb-az\tan(bz)}-1.\]

**Remark 5.5**.:

1. We call the limit law \(\mu_{Y}\) the Boolean tangent law if \(H_{Y}(z)=\frac{\tan z}{z}-1\). We would like to emphasize that \(\frac{\tanh z}{z}\) is a characteristic function of a probability measure; see [45, equation (3)].
2. Nevanlinna functions are analytic functions having a non-negative imaginary part in the upper half-plane. We would also like to mention that the tangent function is a fundamental example of a Nevanlinna function; see [6, 29, 18, 52].
3. An important connection between free and Boolean infinite divisibility was established by Bercovici and Pata [12] (see also Belinschi and Nica [11]) in the form of a bijection from the class of probability measures to the class of freely infinitely divisible laws (and also to the class of classically infinitely divisible laws). The easiest way to define the Boolean B-P bijection is as follows. Let \(\mu\) be a probability measure having all moments, and consider its sequence \(c_{n}\) of Boolean cumulants. Then the map \(\Lambda\) can be defined as the mapping that sends \(\mu\) to the probability measure on \(\mathbb{R}\) with free cumulants \(c_{n}\). The inverse image of the free tangent law under the Bercovici-Pata bijection has the following cumulant function \[R(z)=\frac{\tan z-z}{z^{2}}.\] This means that these are not the same cumulants as in the free version [21, Theorem 4.1], because they are shifted, \(K_{r}^{\text{Boolean}}=K_{r+1}^{\text{free}}\) (in particular, the Boolean B-P bijection does not map the Boolean tangent law to the free tangent law).
4. In the general case the Boolean cumulants of (5.5) can be expressed in terms of the generating function of the higher order tangent numbers as \[K_{r}(Y)=b^{r}\frac{T_{r+1}(a/b)}{(r+1)!},\quad r\in\mathbb{N},\] where the polynomials \(T_{r}(x)\) were defined in Section 2.11. This follows from simple manipulations using the combinatorics of tangent numbers discussed in Section 2.11.
5. Theorem 5.4 in the special case \(a=0\) and \(b=1\) leads to another new fact about the tangent numbers \(T_{n}\) and the Riemann zeta function: \[T_{2k+1}=\lim_{n\to\infty}(2k+1)!\operatorname{Tr}(P_{n}B_{n}^{2k}),\qquad\zeta(2k+2)=\lim_{n\to\infty}\frac{\pi^{2k+2}\operatorname{Tr}\left(P_{n}B_{n}^{2k}\right)}{2(2^{2k+2}-1)},\quad k\geq 0.\] The approximation of the values of the Riemann zeta function at even integers is a popular theme; see [56, 4, 17]. It is interesting to evaluate Theorem 5.4 for \(a=b=1\). Indeed, by the identity \(\tan z+\sec z=\frac{1+\tan(z/2)}{1-\tan(z/2)}\), we have \[H_{Y}(z)=\frac{1}{z}\frac{\tan z}{1-\tan z}-1=\frac{1}{z}\frac{\tan(2z)+\sec(2z)-1}{2}-1.\] This provides a new approximation of the Euler zigzag numbers \(E_{n}\), namely \[E_{k}=\lim_{n\to\infty}\frac{k!\operatorname{Tr}\left(P_{n}[P_{n}+B_{n}]^{k-1}\right)}{2^{k-1}},\quad k\geq 2.\]
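Combining the two displayed limits in item (5) eliminates the traces \(\operatorname{Tr}(P_{n}B_{n}^{2k})\) and yields the purely numerical identity \(\zeta(2k+2)=\frac{\pi^{2k+2}\,T_{2k+1}}{2(2^{2k+2}-1)(2k+1)!}\). The short sketch below is only an illustration and not part of the argument; it assumes Python with the sympy package, reads the tangent numbers off the Taylor expansion of \(\tan z\), and checks the identity for the first few values of \(k\).

```python
# Numerical illustration of Remark 5.5(5): combining the two limits gives
#   zeta(2k+2) = pi^(2k+2) * T_{2k+1} / (2 * (2^(2k+2) - 1) * (2k+1)!),
# where T_{2k+1} are the tangent numbers.  We read T_{2k+1} off the Taylor
# expansion of tan(z) and compare both sides symbolically.
import sympy as sp

z = sp.symbols('z')
tan_series = sp.series(sp.tan(z), z, 0, 12).removeO()

for k in range(5):
    T = tan_series.coeff(z, 2 * k + 1) * sp.factorial(2 * k + 1)  # tangent number T_{2k+1}
    lhs = sp.zeta(2 * k + 2)
    rhs = sp.pi ** (2 * k + 2) * T / (2 * (2 ** (2 * k + 2) - 1) * sp.factorial(2 * k + 1))
    print(k, T, sp.simplify(lhs - rhs))  # the difference should be 0 for every k
```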
## 6. Measure and Levy-Khinchin representation for the tangent laws

In this last section we compute the measure \(\mu\) and the Levy-Khinchin representation for the Boolean tangent laws. We focus on the special case \(a=0\) and \(b=1\) because in this situation we are able to determine the corresponding measures. In this case the corresponding transforms are given by

\[M_{\mu}(z)=\frac{z}{2z-\tan(z)},\quad G_{\mu}(z)=\frac{1}{2z-z^{2}\tan(1/z)}\quad\text{and}\quad\phi_{\mu}(z)=z^{2}\tan(1/z)-z.\]

### The measure of the tangent law

The Cauchy transform uniquely determines the measure, and there is an inversion formula, the Stieltjes inversion formula, namely

\[d\mu(x)=-\frac{1}{\pi}\lim_{\epsilon\to 0^{+}}\operatorname{Im}G_{\mu}(x+i\epsilon)=-\frac{1}{\pi}\lim_{\epsilon\to 0^{+}}\operatorname{Im}\frac{1}{2(x+i\epsilon)-(x+i\epsilon)^{2}\tan(1/(x+i\epsilon))}=0\]

for \(2/x\neq\tan(1/x)\) and \(x\neq 0\). Thus the measure \(\mu\) has no absolutely continuous part. In order to determine the atoms, we compute the limits

\[\lim_{\epsilon\to 0^{+}}i\epsilon G_{\mu}\left(x+i\epsilon\right),\]

which vanish unless \(x=0\) or \(x\) satisfies

\[2/x=\tan(1/x). \tag{6.1}\]

For \(x\) as in (6.1), we get via de L'Hospital's rule

\[\lim_{\epsilon\to 0^{+}}\frac{i\epsilon}{x+i\epsilon}\frac{1}{2-(x+i\epsilon)\tan(1/(x+i\epsilon))}=\frac{1}{x}\lim_{\epsilon\to 0^{+}}\frac{i}{-i\tan(1/(x+i\epsilon))+(x+i\epsilon)\frac{1}{\cos^{2}(1/(x+i\epsilon))}\frac{i}{(x+i\epsilon)^{2}}}\]
\[=\frac{1}{x}\frac{1}{-\frac{2}{x}+\frac{1}{x\cos^{2}(1/x)}}=\frac{\cos^{2}(1/x)}{-2\cos^{2}(1/x)+1}=\frac{1}{\tan^{2}(1/x)-1}=\frac{1}{(2/x)^{2}-1}=\frac{x^{2}}{4-x^{2}}.\]

For \(x=0\) we have

\[\lim_{\epsilon\to 0^{+}}\frac{i\epsilon}{i\epsilon}\frac{1}{2-i\epsilon\tan(1/(i\epsilon))}=\lim_{\epsilon\to 0^{+}}\frac{1}{2-\epsilon\tanh(1/\epsilon)}=\frac{1}{2}.\]

Note that, in general, the singular part (= discrete part + singular continuous part) of a probability measure \(\mu\) is supported on the set

\[S=\{x\in\mathbb{R}\mid\lim_{\epsilon\to 0^{+}}\operatorname{Im}G_{\mu}(x+i\epsilon)=-\infty\}.\]

See [28, Page 71] or [9] for more information. After some calculations one can show that in the considered case \(S=\{x\mid 2/x=\tan(1/x)\}\cup\{0\}\); alternatively, the previous computation already shows that every point of this set carries a non-zero mass. Finally, we infer that the measure \(\mu\) is given by

\[\mu(\{x\})=\begin{cases}\frac{x^{2}}{4-x^{2}}&\text{ for }x\in\{x\mid 2/x=\tan(1/x)\},\\ \frac{1}{2}&\text{ for }x=0.\end{cases}\]

**Remark 6.1**.: The above measure is positive. By elementary analysis of the function \(f(x)=\tan x-2x\) on the interval \([0,\frac{\pi}{2})\) we see that \(f\) is decreasing for \(x\in[0,\frac{\pi}{4})\) and increasing for \(x\in(\frac{\pi}{4},\frac{\pi}{2})\). We can also observe that \(f(0)=0\), \(f(\frac{\pi}{4})<0\) and \(\lim_{x\to\frac{\pi}{2}^{-}}f(x)=\infty\). From this we conclude that \(\tan x-2x=0\) has two roots on \([0,\frac{\pi}{2})\): the first is \(0\) and the second one is bigger than \(\frac{\pi}{4}\). Consequently, the positive roots of equation (6.1) satisfy \(x<\frac{4}{\pi}\). An analysis of the negative part is analogous, and we infer that the solutions of equation (6.1) satisfy \(|x|<\frac{4}{\pi}<2\); in particular \(4-x^{2}>0\), so each atom \(\frac{x^{2}}{4-x^{2}}\) has positive mass. Figure 5 presents the function \(x\tan(1/x)\). We also plot a constant function equal to \(2\) and mark the roots of the equation \(2/x=\tan(1/x)\). More precisely, these roots are approximately \(\pm 0.857956\), \(\pm 0.217192\), \(\pm 0.128372\), \(\pm 0.09132\).
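The root values quoted above, as well as the total mass carried by the nonzero atoms, can be double-checked numerically. The sketch below is only an illustration and not part of the argument (Python with numpy and scipy is assumed); it brackets the positive roots of \(2/x=\tan(1/x)\) between consecutive poles of \(\tan(1/x)\) and sums the masses \(\frac{x^{2}}{4-x^{2}}\).

```python
# Numerical illustration: locate the positive roots of 2/x = tan(1/x) and
# sum the corresponding atoms x^2/(4 - x^2) of the measure mu.
import numpy as np
from scipy.optimize import brentq

def g(x):
    # zeros of g are exactly the solutions of 2/x = tan(1/x)
    return x * np.tan(1.0 / x) - 2.0

# The largest positive root lies in (2/pi, 2); every further positive root lies
# between two consecutive poles 2/((2k+1)*pi) < x < 2/((2k-1)*pi) of tan(1/x).
roots = [brentq(g, 2.0 / np.pi + 1e-9, 2.0)]
for k in range(1, 2000):
    lo = 2.0 / ((2 * k + 1) * np.pi) + 1e-13
    hi = 2.0 / ((2 * k - 1) * np.pi) - 1e-13
    roots.append(brentq(g, lo, hi))

print(roots[:4])  # ~0.857956, 0.217192, 0.128372, 0.09132, as quoted above
print(2.0 * sum(x * x / (4.0 - x * x) for x in roots))  # close to 1/2
```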
After inserting the values of these roots into the expression \(\frac{x^{2}}{4-x^{2}}\) and summing the results, we get a value approximately equal to \(1/2\). Thus the total mass of \(\mu\) is \(1\), and we conclude that the above measure \(\mu(\{x\})\) is a probability measure.

### The Levy measure of the tangent law

Now we will show that the Levy measure is given by

\[\rho(\{x\})=\begin{cases}\frac{x^{4}}{1+x^{2}}&\text{ for }x=\frac{2}{n\pi}\text{ with }n\in\mathbb{Z}\text{ odd},\\ 0&\text{ otherwise.}\end{cases}\]

This result will be verified by direct integration,

\[\phi_{\mu}(z)=\gamma+\int_{\mathbb{R}}\frac{1+xz}{z-x}\,d\rho(x)=z^{2}\tan(1/z)-z\]

with \(\gamma=0\). We use Euler's well known partial fraction expansion of the cotangent function [1, Ch. 25],

\[\cot z=\frac{1}{z}+\sum_{k=1}^{\infty}\frac{2z}{z^{2}-k^{2}\pi^{2}}.\]

This immediately yields a similar expansion for the tangent function,

\[\tan\frac{1}{z}=\cot\frac{1}{z}-2\cot\frac{2}{z}=\sum_{n\in\mathbb{N}\text{ odd}}\frac{8z}{n^{2}\pi^{2}z^{2}-4} \tag{6.2}\]

for \(z\neq 0\) and \(z\neq\frac{2}{n\pi}\), \(n\in\mathbb{N}\) odd. A direct integration immediately gives

\[\phi_{\mu}(z)=\int_{\mathbb{R}}\frac{1+xz}{z-x}\,d\rho(x)=\int_{\mathbb{R}}\left(\frac{1}{z-x}+\frac{x}{1+x^{2}}\right)(1+x^{2})\,d\rho(x)\]
\[=\sum_{n\in\mathbb{Z}\text{ odd}}\frac{1}{z-\frac{2}{n\pi}}\,\frac{16}{n^{4}\pi^{4}}+\underbrace{\int_{\mathbb{R}}x\,d\rho(x)}_{=0,\text{ since }\rho\text{ is symmetric}}\]
\[=\sum_{n\in\mathbb{N}\text{ odd}}\frac{32z}{n^{2}\pi^{2}z^{2}-4}\,\frac{1}{n^{2}\pi^{2}}\]
\[=\sum_{n\in\mathbb{N}\text{ odd}}\left(\frac{8z^{3}}{n^{2}\pi^{2}z^{2}-4}-\frac{8z}{n^{2}\pi^{2}}\right)\]
\[=z^{2}\tan(1/z)-z,\]

where in the last line we used formula (6.2) and \(\sum_{n\in\mathbb{N}\text{ odd}}\frac{1}{n^{2}}=\frac{\pi^{2}}{8}\).
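As a final numerical cross-check of the above computation (again only a sketch, with Python and numpy assumed available, and not part of the proof), one can truncate the sum over the atoms \(x=\frac{2}{n\pi}\), \(n\) odd, of the Levy measure \(\rho\) and compare the result with \(z^{2}\tan(1/z)-z\) at a test point \(z\) chosen away from the atoms.

```python
# Numerical cross-check of phi_mu(z) = z^2*tan(1/z) - z against the truncated
# sum over the atoms x = 2/(n*pi), n odd, of the Levy measure rho found above.
import numpy as np

def phi_from_atoms(z, n_max=20001):
    total = 0.0
    for n in range(-n_max, n_max + 1, 2):  # odd integers from -n_max to n_max
        x = 2.0 / (n * np.pi)
        total += (1.0 + x * z) / (z - x) * x**4 / (1.0 + x**2)
    return total

z = 0.7  # an arbitrary test point away from the atoms 2/(n*pi)
print(phi_from_atoms(z), z**2 * np.tan(1.0 / z) - z)  # the two numbers should agree
```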